Category Archives: regulation

Robot rights at the University of British Columbia (UBC)?

Alex Walls’ January 7, 2025 University of British Columbia (UBC) media release “Should we recognize robot rights?” (also received via email) has a title that, while attention-getting, is mildly misleading. (Artificial intelligence and robots are not synonymous. See Mark Walters’ March 20, 2024 posting “Robots vs. AI: Understanding Their Differences” on Twefy.com.) Walls has produced a Q&A (question & answer) formatted interview that focuses primarily on professor Benjamin Perrin’s artificial intelligence and the law course and symposium,

With the rapid development and proliferation of AI tools come significant opportunities and risks that the next generation of lawyers will have to tackle, including whether these AI models will need to be recognized with legal rights and obligations.

These and other questions will be the focus of a new upper-level course at UBC’s Peter A. Allard School of Law which starts tomorrow. In this Q&A, professor Benjamin Perrin (BP) and student Nathan Cheung (NC) discuss the course and whether robots need rights. 

Why launch this course?

BP: From autonomous cars to ChatGPT, AI is disrupting entire sectors of society, including the criminal justice system. There are incredible opportunities, including potentially increasing accessibility to justice, as well as significant risks, including the potential for deepfake evidence and discriminatory profiling. Legal students need principles and concepts that will stand the test of time so that whenever a new suite of AI tools becomes available, they have a set of frameworks and principles that are still relevant. That’s the main focus of the 13-class seminar, but it’s also helpful to project what legal frameworks might be required in the future.

NC: I think AI will change how law is conducted and legal decisions are made. I was part of a group of students interested in AI and the law that helped develop the course with professor Perrin. I’m also on the waitlist to take the course. I’m interested in learning how people who aren’t lawyers could use AI to help them with legal representation as well as how AI might affect access to justice: If the agents are paywalled, like ChatGPT, then we’re simply maintaining the status quo of people with money having more access.

What are robot rights?

BP: In the course, we’ll consider how the law should respond if AI becomes as smart as humans, as well as whether AI agents should have legal personhood.

We already have legal status for corporations, governments, and, in some countries, for rivers. Legal personality can be a practical step for regulation: Companies have legal personality, in part, because they can cause a lot of harm and have assets available to right that harm.

For instance, if an AI commits a crime, who is responsible? If a self-driving car crashes, who is at fault? We’ve already seen a case of an AI bot ‘arrested’ for purchasing illegal items online on its own initiative. Should the developers, the owners, the AI itself, be blamed, or should responsibility be shared between all these players?

In the course casebook, we reference writings by a group of Indigenous authors who argue that there are inherent issues with the Western concept of AI as tools, and that we should look at these agents as non-human relations.

There’s been discussion of what a universal bill of rights for AI agents could look like. It includes the right to not be deactivated without ensuring their core existence is maintained somewhere, as well as protection for their operating systems.

What is the status of robot rights in Canada?

BP: Canada doesn’t have a specific piece of legislation yet but does have general laws that could be interpreted in this new context.

The European Union has stated if someone develops an AI agent, they are generally responsible for ensuring its legal compliance. It’s a bit like being a parent: If your children go out and damage someone’s property, you could be held responsible for that damage.

Ontario is the only province to adopt legislation regulating AI use and responsibility, specifically a bill which regulates AI use within the public sector, but excludes the police and the courts. There’s a federal bill [Bill C-27] before parliament, but it was introduced in 2022 and still hasn’t passed.

There’s effectively a patchwork of regulation in Canada right now, but there is a huge need, and opportunity, for specialized legislation related to AI. Canada could look to the European Union’s AI act, and the blueprint for an AI Bill of Rights in the U.S.

Interview language(s): English

Legal services online: Lawyer working on a laptop with virtual screen icons for business legislation, notary public, and justice. Courtesy: University of British Columbia

I found out more about Perrin’s course and plans on his eponymous website, from his October 31, 2024 posting,

We’re excited to announce the launch of the UBC AI & Criminal Justice Initiative, empowering students and scholars to explore the opportunities and challenges at the intersection of AI and criminal justice through teaching, research, public engagement, and advocacy.

We will tackle topics such as:

· Deepfakes, cyberattacks, and autonomous vehicles

· Predictive policing [emphasis mine; see my November 23, 2017 posting “Predictive policing in Vancouver—the first jurisdiction in Canada to employ a machine learning system for property theft reduction”], facial recognition, probabilistic DNA genotyping, and police robots 

· Access to justice: will AI enhance it or deepen inequality?

· Risk assessment algorithms 

· AI tools in legal practice 

· Critical and Indigenous perspectives on AI

· The future of AI, including legal personality, legal rights and criminal responsibility for AI

This initiative, led by UBC law professor Benjamin Perrin, will feature the publication of an open access primer and casebook on AI and criminal justice, a new law school seminar, a symposium on “AI & Law”, and more. A group of law students have been supporting preliminary work for months.

“We’re in the midst of a technological revolution,” said Perrin. “The intersection of AI and criminal justice comes with tremendous potential but also significant risks in Canada and beyond.”

Perrin brings extensive experience in law and public policy, including having served as in-house counsel and lead criminal justice advisor in the Prime Minister’s Office and as a law clerk at the Supreme Court of Canada. His most recent project was a bestselling book and “top podcast”: Indictment: The Criminal Justice System on Trial (2023). 


An advisory group of technical experts and global scholars will lend their expertise to the initiative. Here’s what some members have shared:

“Solving AI’s toughest challenges in real-world application requires collaboration between AI researchers and legal experts, ensuring responsible and impactful AI development that benefits society.”

– Dr. Xiaoxiao Li, Canada CIFAR AI Chair & Assistant Professor, UBC Department of Electrical and Computer Engineering

“The UBC Artificial Intelligence and Criminal Justice Initiative is a timely and needed intervention in an important, and fast-moving area of law. Now is the moment for academic innovations like this one that shape the conversation, educate both law students and the public, and slow the adoption of harmful technologies.” 

– Prof. Aziz Huq, Frank and Bernice J. Greenberg Professor of Law, University of Chicago Law School

Several student members of the UBC AI & Criminal Justice Initiative shared their enthusiasm for this project:

“My interest in this initiative was sparked by the news of AI being used to fabricate legal cases. Since joining, I’ve been thoroughly impressed by the breadth of AI’s applications in policing, sentencing, and research. I’m eager to witness the development as this new field evolves.”

– Nathan Cheung, UBC law student 

“AI is the elephant in the classroom—something we can’t afford to ignore. Being part of the UBC AI and Criminal Justice Initiative is an exciting opportunity to engage in meaningful dialogue about balancing AI’s potential benefits with its risks, and unpacking the complex impact of this evolving technology.”

– Isabelle Sweeney, UBC law student 

Key Dates:

  • October 29, 2024: UBC AI & Criminal Justice Initiative launches
  • November 19, 2024: AI & Criminal Justice: Primer released 
  • January 8, 2025: Launch event at the Peter A. Allard School of Law (hybrid) – More Info & RSVP
    • AI & Criminal Justice: Cases and Commentary released 
    • Launch of new AI & Criminal Justice Seminar
    • Announcement of the AI & Law Student Symposium (April 2, 2025) and call for proposals
  • February 14, 2025: Proposal deadline for AI & Law Student Symposium – Submit a Proposal
  • April 2, 2025: AI & Law Student Symposium (hybrid) – More Info & RSVP

Timing is everything, eh? First, I’m sorry for posting this after the launch event took place on January 8, 2025. Second, this line from Walls’ Q&A: “There’s a federal bill [Bill C-27] before parliament, but it was introduced in 2022 and still hasn’t passed.” should read (after Prime Minister Justin Trudeau’s January 6, 2025 resignation and prorogation of Parliament) “… and now probably won’t be passed.” At the least, this turn of events should make for some interesting speculation amongst the experts and the students.

As for anyone who’s interested in robots and their rights, there’s this August 1, 2023 posting “Should robots have rights? Confucianism offers some ideas” featuring Carnegie Mellon University’s Tae Wan Kim (profile).

FrogHeart’s 2024 comes to an end as 2025 comes into view

First, thank you to anyone who’s dropped by to read any of my posts. Second, I didn’t quite catch up on my backlog in what was then the new year (2024), despite my promises. (sigh) I will try to publish my drafts in a more timely fashion, but I start this coming year, as I did 2024, with a backlog of two to three months. This may be my new normal.

As for now, here’s an overview of FrogHeart’s 2024. The posts that follow are loosely organized under a heading but many of them could fit under other headings as well. After my informal review, there’s some material on foretelling the future as depicted in an exhibition, “Oracles, Omens and Answers,” at the Bodleian Libraries, University of Oxford.

Human enhancement: prosthetics, robotics, and more

Within a year or two of starting this blog, I created a tag, ‘machine/flesh’, to organize information about a number of converging technologies, such as robotics, brain implants, and prosthetics, that could alter our concepts of what it means to be human. The larger category of human enhancement functions in much the same way, allowing a greater range of topics to be covered.

Here are some of the 2024 human enhancement and/or machine/flesh stories on this blog,

Other species are also being rendered ‘machine/flesh’,

The year of the hydrogel?

It was the year of the hydrogel for me (btw, hydrogels are squishy materials; I have more of a description after this list),

As for anyone who’s curious about hydrogels, there’s this from an October 20, 2016 article by D.C. Demetre for ScienceBeta, Note: A link has been removed,

Hydrogels, materials that can absorb and retain large quantities of water, could revolutionise medicine. Our bodies contain up to 60% water, but hydrogels can hold up to 90%.

It is this similarity to human tissue that has led researchers to examine if these materials could be used to improve the treatment of a range of medical conditions including heart disease and cancer.

These days hydrogels can be found in many everyday products, from disposable nappies and soft contact lenses to plant-water crystals. But the history of hydrogels for medical applications started in the 1960s.

Scientists developed artificial materials with the ambitious goal of using them in permanent contact applications, ones that are implanted in the body permanently.

For anyone who wants a more technical explanation, there’s the Hydrogel entry on Wikipedia.

Science education and citizen science

Where science education is concerned, I’m seeing some innovative approaches to teaching science, which can include citizen science. As for citizen science (also known as participatory science), I’ve been noticing heightened interest at all age levels.

Artificial intelligence

It’s been another year where artificial intelligence (AI) has absorbed a lot of energy from nearly everyone. I’m highlighting the more unusual AI stories I’ve stumbled across,

As you can see, I’ve tucked in two tangentially related stories: one references a neuromorphic computing story (see my Neuromorphic engineering category or search for ‘memristors’ in the blog search engine for more on brain-like computing topics) and the other concerns intellectual property. There are many, many more stories on these topics.

Art/science (or art/sci or sciart)

It’s a bit of a surprise to see how many art/sci stories were published here this year, although some might be better described as art/tech stories.

There may be more 2024 art/sci stories but the list was getting long. In addition to searching for art/sci on the blog search engine, you may want to try data sonification too.

Moving off planet to outer space

This is not a big interest of mine but there were a few stories,

A writer/blogger’s self-indulgences

Apparently books can be dangerous and not in a ‘ban [fill in the blank] from the library’ kind of way,

Then, there are these,

New uses for electricity,

Given the name for this blog, it has to be included,

  • Frog saunas, published September 15, 2024; this includes what seems to be a mild scientific kerfuffle

I’ve been following Lomiko Metals (graphite mining) for a while,

Who would have guessed?

Another bacteria story,

New crimes,

Origins of life,

Dirt

While no one year features a large number of ‘dirt’ stories, it has been a recurring theme here throughout the years,

Regenerative medicine

In addition to or instead of using the ‘regenerative medicine’ tag, I might use ‘tissue engineering’ or ‘tissue scaffolding’,

To sum it up

It was an eclectic year.

Peering forward into 2025 and futurecasting

I expect to be delighted, horrified, thrilled, and left shaking my head by science stories in 2025. Year after year the world of science reveals a world of wonder.

More mundanely, I can state with some confidence that my commentary (mentioned in the future-oriented subsection of my 2023 review and 2024 look forward) on Quantum Potential, a 2023 report from the Council of Canadian Academies, will be published early in this new year as I’ve almost finished writing it.

As for more about the future, I’ve got this, from a December 3, 2024 essay (Five ways to predict the future from around the world – from spider divination to bibliomancy) about an exhibition by Michelle Aroney (Research Fellow in Early Modern History, University of Oxford) and David Zeitlyn (Professor of Social Anthropology, University of Oxford) in The Conversation (h/t December 3, 2024 news item on phys.org), Note: Links have been removed

Some questions are hard to answer and always have been. Does my beloved love me back? Should my country go to war? Who stole my goats?

Questions like these have been asked of diviners around the world throughout history – and still are today. From astrology and tarot to reading entrails, divination comes in a wide variety of forms.

Yet they all address the same human needs. They promise to tame uncertainty, help us make decisions or simply satisfy our desire to understand.

Anthropologists and historians like us study divination because it sheds light on the fears and anxieties of particular cultures, many of which are universal. Our new exhibition at Oxford’s Bodleian Library, Oracles, Omens & Answers, explores these issues by showcasing divination techniques from around the world.

1. Spider divination

In Cameroon, Mambila spider divination (ŋgam dù) addresses difficult questions to spiders or land crabs that live in holes in the ground.

Asking the spiders a question involves covering their hole with a broken pot and placing a stick, a stone and cards made from leaves around it. The diviner then asks a question in a yes or no format while tapping the enclosure to encourage the spider or crab to emerge. The stick and stone represent yes or no, while the leaf cards, which are specially incised with certain meanings, offer further clarification.

2. Palmistry

Reading people’s palms (palmistry) is well known as a fairground amusement, but serious forms of this divination technique exist in many cultures. The practice of reading the hands to gather insights into a person’s character and future was used in many ancient cultures across Asia and Europe.

In some traditions, the shape and depth of the lines on the palm are richest in meaning. In others, the size of the hands and fingers are also considered. In some Indian traditions, special marks and symbols appearing on the palm also provide insights.

Palmistry experienced a huge resurgence in 19th-century England and America, just as the science of fingerprints was being developed. If you could identify someone from their fingerprints, it seemed plausible to read their personality from their hands.

3. Bibliomancy

If you want a quick answer to a difficult question, you could try bibliomancy. Historically, this DIY [do-it-yourself] divining technique was performed with whatever important books were on hand.

Throughout Europe, the works of Homer or Virgil were used. In Iran, it was often the Divan of Hafiz, a collection of Persian poetry. In Christian, Muslim and Jewish traditions, holy texts have often been used, though not without controversy.

4. Astrology

Astrology exists in almost every culture around the world. As far back as ancient Babylon, astrologers have interpreted the heavens to discover hidden truths and predict the future.

5. Calendrical divination

Calendars have long been used to divine the future and establish the best times to perform certain activities. In many countries, almanacs still advise auspicious and inauspicious days for tasks ranging from getting a haircut to starting a new business deal.

In Indonesia, Hindu almanacs called pawukon [calendar] explain how different weeks are ruled by different local deities. The characteristics of the deities mean that some weeks are better than others for activities like marriage ceremonies.

You’ll find logistics for the exhibition in this September 23, 2024 Bodleian Libraries University of Oxford press release about the exhibit, Note: Links have been removed,

Oracles, Omens and Answers

6 December 2024 – 27 April 2025
ST Lee Gallery, Weston Library

The Bodleian Libraries’ new exhibition, Oracles, Omens and Answers, will explore the many ways in which people have sought answers in the face of the unknown across time and cultures. From astrology and palm reading to weather and public health forecasting, the exhibition demonstrates the ubiquity of divination practices, and humanity’s universal desire to tame uncertainty, diagnose present problems, and predict future outcomes.

Through plagues, wars and political turmoil, divination, or the practice of seeking knowledge of the future or the unknown, has remained an integral part of society. Historically, royals and politicians would consult with diviners to guide decision-making and incite action. People have continued to seek comfort and guidance through divination in uncertain times — the COVID-19 pandemic saw a rise in apps enabling users to generate astrological charts or read the Yijing [I Ching], alongside a growth in horoscope and tarot communities on social media such as ‘WitchTok’. Many aspects of our lives are now dictated by algorithmic predictions, from e-health platforms to digital advertising. Scientific forecasters as well as doctors, detectives, and therapists have taken over many of the societal roles once held by diviners. Yet the predictions of today’s experts are not immune to criticism, nor can they answer all our questions.

Curated by Dr Michelle Aroney, whose research focuses on early modern science and religion, and Professor David Zeitlyn, an expert in the anthropology of divination, the exhibition will take a historical-anthropological approach to methods of prophecy, prediction and forecasting, covering a broad range of divination methods, including astrology, tarot, necromancy, and spider divination.

Dating back as far as ancient Mesopotamia, the exhibition will show us that the same kinds of questions have been asked of specialist practitioners from around the world throughout history. What is the best treatment for this illness? Does my loved one love me back? When will this pandemic end? Through materials from the archives of the Bodleian Libraries alongside other collections in Oxford, the exhibition demonstrates just how universally human it is to seek answers to difficult questions.

Highlights of the exhibition include: oracle bones from Shang Dynasty China (ca. 1250-1050 BCE); an Egyptian celestial globe dating to around 1318; a 16th-century armillary sphere from Flanders, once used by astrologers to place the planets in the sky in relation to the Zodiac; a nineteenth-century illuminated Javanese almanac; and the autobiography of astrologer Joan Quigley, who worked with Nancy and Ronald Reagan in the White House for seven years. The casebooks of astrologer-physicians in 16th- and 17th-century England also offer rare insights into the questions asked by clients across the social spectrum, about their health, personal lives, and business ventures, and in some cases the actions taken by them in response.

The exhibition also explores divination which involves the interpretation of patterns or clues in natural things, with the idea that natural bodies contain hidden clues that can be decrypted. Some diviners inspect the entrails of sacrificed animals (known as ‘extispicy’), as evidenced by an ancient Mesopotamian cuneiform tablet describing the observation of patterns in the guts of birds. Others use human bodies, with palm readers interpreting characters and fortunes etched in their clients’ hands. A sketch of Oscar Wilde’s palms – which his palm reader believed indicated “a great love of detail…extraordinary brain power and profound scholarship” – shows the revival of palmistry’s popularity in 19th century Britain.

The exhibition will also feature a case study of spider divination practised by the Mambila people of Cameroon and Nigeria, which is the research specialism of curator Professor David Zeitlyn, himself a Ŋgam dù diviner. This process uses burrowing spiders or land crabs to arrange marked leaf cards into a pattern, which is read by the diviner. The display will demonstrate the methods involved in this process and the way in which its results are interpreted by the card readers. African basket divination has also been observed through anthropological research, where diviners receive answers to their questions in the form of the configurations of thirty plus items after they have been tossed in the basket.

Dr Michelle Aroney and Professor David Zeitlyn, co-curators of the exhibition, say:

Every day we confront the limits of our own knowledge when it comes to the enigmas of the past and present and the uncertainties of the future. Across history and around the world, humans have used various techniques that promise to unveil the concealed, disclosing insights that offer answers to private or shared dilemmas and help to make decisions. Whether a diviner uses spiders or tarot cards, what matters is whether the answers they offer are meaningful and helpful to their clients. What is fun or entertainment for one person is deadly serious for another.

Richard Ovenden, Bodley’s [a nickname? Bodleian Libraries were founded by Sir Thomas Bodley] Librarian, said:

People have tried to find ways of predicting the future for as long as we have had recorded history. This exhibition examines and illustrates how across time and culture, people manage the uncertainty of everyday life in their own way. We hope that through the extraordinary exhibits, and the scholarship that brings them together, visitors to the show will appreciate the long history of people seeking answers to life’s biggest questions, and how people have approached it in their own unique way.

The exhibition will be accompanied by the book Divinations, Oracles & Omens, edited by Michelle Aroney and David Zeitlyn, which will be published by Bodleian Library Publishing on 5 December 2024.

Courtesy: Bodleian Libraries, University of Oxford

I’m not sure why the preceding image is used to illustrate the exhibition webpage but I find it quite interesting. Should you be in Oxford, UK and lucky enough to visit the exhibition, there are a few more details on the Oracles, Omens and Answers event webpage, Note: There are 26 Bodleian Libraries at Oxford and the exhibition is being held in the Weston Library,

EXHIBITION

Oracles, Omens and Answers

6 December 2024 – 27 April 2025

ST Lee Gallery, Weston Library

Free admission, no ticket required

Note: This exhibition includes a large continuous projection of spider divination practice, including images of the spiders in action.

Exhibition tours

Oracles, Omens and Answers exhibition tours are available on selected Wednesdays and Saturdays from 1–1.45pm and are open to all.

These free gallery tours are led by our dedicated volunteer team and places are limited. Check available dates and book your tickets.

You do not need to book a tour to visit the exhibition. Please meet by the entrance doors to the exhibition at the rear of Blackwell Hall.

Happy 2025! And, once again, thank you.

Metacrime: the line between the virtual and reality

An August 15, 2024 Griffith University (Australia) press release (also on EurekAlert) presents research on a relatively new type of crime, Note: A link has been removed,

If you thought your kids were away from harm playing multi-player games through VR headsets while in their own bedrooms, you may want to sit down to read this.

Griffith University’s Dr Ausma Bernot teamed up with researchers from Monash University, Charles Sturt University and University of Technology Sydney to investigate what has been termed as ‘metacrime’ – attacks, crimes or inappropriate activities that occur within virtual reality environments.

The ‘metaverse’ refers to the virtual world, where users of VR headsets can choose an avatar to represent themselves as they interact with other users’ avatars or move through other 3D digital spaces.

While the metaverse can be used for anything from meetings (where it will feel as though you are in the same room as avatars of other people instead of just seeing them on a screen) to wandering through national parks around the world without leaving your living room, gaming is by far its most popular use.   

Dr Bernot said the technology had evolved incredibly quickly.

“Using this technology is super fun and it’s really immersive,” she said.

“You can really lose yourself in those environments.

“Unfortunately, while those new environments are very exciting, they also have the potential to enable new crimes.

“While the headsets that enable us to have these experiences aren’t a commonly owned item yet, they’re growing in popularity and we’ve seen reports of sexual harassment or assault against both adults and kids.”

In a December 2023 report, the Australian eSafety Commissioner estimated around 680,000 adults in Australia are engaged in the metaverse.

This followed a survey conducted in November and December 2022 by researchers from the UK’s Center for Countering Digital Hate, who recorded 11 hours and 30 minutes of user interactions on Meta’s Oculus headset in the popular VRChat.

The researchers found most users had been faced with at least one negative experience in the virtual environment, including being called offensive names, receiving repeated unwanted messages or contact, being provoked to respond to something or to start an argument, being challenged about cultural identity or being sent unwanted inappropriate content.

Eleven per cent had been exposed to a sexually graphic virtual space and nine per cent had been touched (virtually) in a way they didn’t like.

Of these respondents, 49 per cent said the experience had a moderate to extreme impact on their mental or emotional wellbeing.

With the two largest user groups being minors and men, Dr Bernot said it was important for parents to monitor their children’s activity or consider limiting their access to multi-player games.

“Minors are more vulnerable to grooming and other abuse,” she said.

“They may not know how to deal with these situations, and while there are some features like a ‘safety bubble’ within some games, or of course the simple ability to just take the headset off, once immersed in these environments it does feel very real.

“It’s somewhere in between a physical attack and for example, a social media harassment message – you’ll still feel that distress and it can take a significant toll on a user’s wellbeing.

“It is a real and palpable risk.”

Monash University’s You Zhou said there had already been many reports of virtual rape, including one in the United Kingdom where police have launched a case for a 16-year-old girl whose avatar was attacked, causing psychological and emotional trauma similar to an attack in the physical world.

“Before the emergence of the metaverse we could not have imagined how rape could be virtual,” Mr Zhou said.

“When immersed in this world of virtual reality, and particularly when using higher quality VR headsets, users will not necessarily stop to consider whether the experience is reality or virtuality.

“While there may not be physical contact, victims – mostly young girls – strongly claim the feeling of victimisation was real.

“Without physical signs on a body, and unless the interaction was recorded, it can be almost impossible to show evidence of these experiences.”

With use of the metaverse expected to grow exponentially in coming years, the research team’s findings highlight a need for metaverse companies to instil clear regulatory frameworks for their virtual environments to make them safe for everyone to inhabit.

Here’s a link to and a citation for the paper,

Metacrime and Cybercrime: Exploring the Convergence and Divergence in Digital Criminality by You Zhou, Milind Tiwari, Ausma Bernot & Kai Lin. Asian Journal of Criminology 19, 419–439 (2024) DOI: https://doi.org/10.1007/s11417-024-09436-y Published online: 09 August 2024 Issue Date: September 2024

This paper is open access.

Submit abstracts by Jan. 31 for 2025 Governance of Emerging Technologies & Science (GETS) Conference at Arizona State U

This call for abstracts from Arizona State University (ASU) for the Twelfth Annual Governance of Emerging Technologies and Science (GETS) Conference was received via email,

GETS 2025: Call for abstracts

Save the date for the Twelfth Annual Governance of Emerging Technologies and Science Conference, taking place May 19 and 20, 2025 at the Sandra Day O’Connor College of Law at Arizona State University in Phoenix, AZ. The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including:

National security
Nanotechnology
Quantum computing
Autonomous vehicles
3D printing
Robotics
Synthetic biology
Gene editing
Artificial intelligence
Biotechnology

Genomics
Internet of things (IoT)
Autonomous weapon systems
Personalized medicine
Neuroscience
Digital health
Human enhancement
Telemedicine
Virtual reality
Blockchain

Call for abstracts: The co-sponsors invite submission of abstracts for proposed presentations. Submitters of abstracts need not provide a written paper, although provision will be made for posting and possible post-conference publication of papers for those who are interested.

  • Abstracts are invited for any aspect or topic relating to the governance of emerging technologies, including any of the technologies listed above
  • Abstracts should not exceed 500 words and must contain your name and email address
  • Abstracts must be submitted by Friday, January 31, 2025, to be considered

Submit your abstract

For more information contact Eric Hitchcock.

Good luck!

AI and Canadian science diplomacy & more stories from the October 2024 Council of Canadian Academies (CCA) newsletter

The October 2024 issue of The Advance (Council of Canadian Academies [CCA] newsletter) arrived in my emailbox on October 15, 2024 with some interesting tidbits about artificial intelligence, Note: For anyone who wants to see the entire newsletter for themselves, you can sign up here or in French, vous pouvez vous abonner ici,

Artificial Intelligence and Canada’s Science Diplomacy Future

For nearly two decades, Canada has been a global leader in artificial intelligence (AI) research, contributing a significant percentage of the world’s top-cited scientific publications on the subject. In that time, the number of countries participating in international collaborations has grown significantly, supporting new partnerships and accounting for as much as one quarter of all published research articles.

“Opportunities for partnerships are growing rapidly alongside the increasing complexity of new scientific discoveries and emerging industry sectors,” wrote the CCA Expert Panel on International Science, Technology, Innovation and Knowledge Partnerships earlier this year, singling out Canada’s AI expertise. “At the same time, discussions of sovereignty and national interests abut the movement toward open science and transdisciplinary approaches.”

On Friday, November 22 [2024], the CCA will host “Strategy and Influence: AI and Canada’s Science Diplomacy Future” as part of the Canadian Science Policy Centre (CSPC) annual conference. The panel discussion will draw on case studies related to AI research collaboration to explore the ways in which such partnerships inform science diplomacy. Panellists include:

  • Monica Gattinger, chair of the CCA Expert Panel on International Science, Technology, Innovation and Knowledge Partnerships and director of the Institute for Science, Society and Policy at the University of Ottawa (picture omitted)
  • David Barnes, head of the British High Commission Science, Climate, and Energy Team
  • Constanza Conti, Professor of Numerical Analysis at the University of Florence and Scientific Attaché at the Italian Embassy in Ottawa
  • Jean-François Doulet, Attaché for Science and Higher Education at the Embassy of France in Canada
  • Konstantinos Kapsouropoulos, Digital and Research Counsellor at the Delegation of the European Union to Canada

For details on CSPC 2024, click here. [Here’s the theme and a few more details about the conference: Empowering Society: The Transformative Value of Science, Knowledge, and Innovation; The 16th annual Canadian Science Policy Conference (CSPC) will be held in person from November 20th to 22nd, 2024] For a user guide to Navigating Collaborative Futures, from the CCA’s Expert Panel on International Science, Technology, Innovation and Knowledge Partnerships, click here.

I have checked out the panel’s session page,

448: Strategy and Influence: AI and Canada’s Science Diplomacy Future

Friday, November 22 [2024]
1:00 pm – 2:30 pm EST

Science and International Affairs and Security

About

Organized By: Council of Canadian Academies (CCA)

Artificial intelligence has already begun to transform Canada’s economy and society, and the broader advantages of international collaboration in AI research have the potential to make an even greater impact. With three national AI institutes and a Pan-Canadian AI Strategy, Canada’s AI ecosystem is thriving and positions the country to build stronger international partnerships in this area, and to develop more meaningful international collaborations in other areas of innovation. This panel will convene science attachés to share perspectives on science diplomacy and partnerships, drawing on case studies related to AI research collaboration.

The newsletter also provides links to additional readings on various topics, here are the AI items,

In Ottawa, Prime Minister Justin Trudeau and President Emmanuel Macron of France renewed their commitment “to strengthening economic exchanges between Canadian and French AI ecosystems.” They also revealed that Canada would be named Country of the Year at Viva Technology’s annual conference, to be held next June in Paris.

A “slower, but more capable” version of OpenAI’s ChatGPT is impressing scientists with the strength of its responses to prompts, according to Nature. The new version, referred to as “o1,” outperformed a previous ChatGPT model on a standardized test involving chemistry, physics, and biology questions, and “beat PhD-level scholars on the hardest series of questions.” [Note: As of October 16, 2024, the Nature news article of October 1, 2024 appears to be open access. It’s unclear how long this will continue to be the case.]

In memoriam: Abhishek Gupta, the founder and principal researcher of the Montreal AI Ethics Institute and a member of the CCA Expert Panel on Artificial Intelligence for Science and Engineering, died on September 30 [2024]. His colleagues shared the news in a memorial post, writing, “It was during his time in Montreal that Abhishek envisioned a future where ethics and AI would intertwine—a vision that became the driving force behind his life’s work.”

I clicked the link to read the Trudeau/Macron announcement and found this September 26, 2024 Innovation, Science and Economic Development Canada news release,

Meeting in Ottawa on September 26, 2024, Justin Trudeau, the Prime Minister of Canada, and Emmanuel Macron, the President of the French Republic, issued a call to action to promote the development of a responsible approach to artificial intelligence (AI).

Our two countries will increase the coordination of our actions, as Canada will assume the Presidency of the G7 in 2025 and France will host the AI Action Summit on February 10 and 11, 2025.

Our two countries are working on the development and use of safe, secure and trustworthy AI as part of a risk-aware, human-centred and innovation-friendly approach. This cooperation is based on shared values. We believe that the development and use of AI need to be beneficial for individuals and the planet, for example by increasing human capabilities and developing creativity, ensuring the inclusion of under-represented people, reducing economic, social, gender and other inequalities, protecting information integrity and protecting natural environments, which in turn will promote inclusive growth, well-being, sustainable development and environmental sustainability.

We are committed to promoting the development and use of AI systems that respect the rule of law, human rights, democratic values and human-centred values. Respecting these values means developing and using AI systems that are transparent and explainable, robust, safe and secure, and whose stakeholders are held accountable for respecting these principles, in line with the Recommendation of the OECD Council on Artificial Intelligence, the Hiroshima AI Process, the G20 AI Principles and the International Partnership for Information and Democracy.

Based on these values and principles, Canada and France are working on high-quality scientific cooperation. In April 2023, we formalized the creation of a joint committee for science, technology and innovation. This committee has identified emerging technologies, including AI, as one of the priority areas for cooperation between our two countries. In this context, a call for AI research projects was announced last July, scheduled for the end of 2024 and funded, on the French side, by the French National Research Agency, and, on the Canadian side, by a consortium made up of Canada’s three granting councils (the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada and the Canadian Institutes of Health Research) and IVADO [Institut de valorisation des données], the AI research, training and transfer consortium.

We will also collaborate on the evaluation and safety of AI models. We have announced key AI safety initiatives, including the AI Safety Institute of Canada [emphasis mine; not to be confused with Artificial Intelligence Governance & Safety Canada (AIGS)], which will be launched soon, and France’s National Centre for AI evaluation. We expect these two agencies will work to improve knowledge and understanding of technical and socio-technical aspects related to the safety and evaluation of advanced AI systems.

Canada and France are committed to strengthening economic exchanges between Canadian and French AI ecosystems, whether by organizing delegations, like the one organized by Scale AI with 60 Canadian companies at the latest Viva Technology conference in Paris, or showcasing France at the ALL IN event in Montréal on September 11 and 12, 2024, through cooperation between companies, for example, through large companies’ adoption of services provided by small companies or through the financial support that investment funds provide to companies on both sides of the Atlantic. Our two countries will continue their cooperation at the upcoming Viva Technology conference in Paris, where Canada will be the Country of the Year.

We want to strengthen our cooperation in terms of developing AI capabilities. We specifically want to promote access to AI’s compute capabilities in order to support national and international technological advances in research and business, notably in emerging markets and developing countries, while committing to strengthening their efforts to make the necessary improvements to the energy efficiency of these infrastructures. We are also committed to sharing their experience in initiatives to develop AI skills and training in order to accelerate workforce deployment.

Canada and France cooperate on the international stage to ensure the alignment and convergence of AI regulatory frameworks, given the economic potential and the global social consequences of this technological revolution. Under our successive G7 presidencies in 2018 and 2019, we worked to launch the Global Partnership on Artificial Intelligence (GPAI), which now has 29 members from all over the world, and whose first two centres of expertise were opened in Montréal and Paris. We support the creation of the new integrated partnership, which brings together OECD and GPAI member countries, and welcomes new members, including emerging and developing economies. We hope that the implementation of this new model will make it easier to participate in joint research projects that are of public interest, reduce the global digital divide and support constructive debate between the various partners on standards and the interoperability of their AI-related regulations.

We will continue our cooperation at the AI Action Summit in France on February 10 and 11, 2025, where we will strive to find solutions to meet our common objectives, such as the fight against disinformation or the reduction of the environmental impact of AI. With the objective of actively and tangibly promoting the use of the French language in the creation, production, distribution and dissemination of AI, taking into account its richness and diversity, and in compliance with copyright, we will attempt to identify solutions that are in line with the five themes of the summit: AI that serves the public interest, the future of work, innovation and culture, trust in AI and global AI governance.

Canada has accepted to co-chair the working group on global AI governance in order to continue the work already carried out by the GPAI, the OECD, the United Nations and its various bodies, the G7 and the G20. We would like to highlight and advance debates on the cultural challenges of AI in order to accelerate the joint development of relevant responses to the challenges faced. We would also like to develop the change management policies needed to support all of the affected cultural sectors. We will continue these discussions together during our successive G7 presidencies in 2025 and 2026.

Some very interesting news and it reminded me of this October 10, 2024 posting “October 29, 2024 Woodrow Wilson Center event: 2024 Canada-US Legal Symposium | Artificial Intelligence Regulation, Governance, and Liability.” (I also included an update of the current state of Canadian legislation and artificial intelligence in the posting.)

I checked out the In memoriam notice for Abhishek Gupta and found this, Note: Links have been removed except the link to Abhishek Gupta’s memorial page hosting tributes, stories, and more. The link is in the highlighted paragraph,

Honoring the Life and Legacy of a Leader in AI Ethics

In accordance with his family’s wishes, it is with profound sadness that we announce the passing of Abhishek Gupta, Founder and Principal Researcher of the Montreal AI Ethics Institute (MAIEI), Director for Responsible AI at the Boston Consulting Group (BCG), and a pioneering voice in the field of AI ethics. Abhishek passed away peacefully in his sleep on September 30, 2024 in India, surrounded by his loving family. He is survived by his father, Ashok Kumar Gupta; his mother, Asha Gupta; and his younger brother, Abhijay Gupta.


Note: Details of a memorial service will be announced in the coming weeks. For those who wish to share stories, personal anecdotes, and photos of Abhishek, please visit www.forevermissed.com/abhishekgupta — your contributions will be greatly appreciated by his family and loved ones.

Born on December 20, 1992, in India, Abhishek’s intellectual curiosity and drive to understand technology led him on a remarkable journey. After excelling at Delhi Public School, Abhishek attended McGill University in Montreal, where he earned a Bachelor of Science in Computer Science (BSc’15). Following his graduation, Abhishek worked as a software engineer at Ericsson. He later joined Microsoft as a machine learning engineer, where he also served on the CSE Responsible AI Board. It was during his time in Montreal that Abhishek envisioned a future where ethics and AI would intertwine—a vision that became the driving force behind his life’s work. 

The Beginnings: Building a Global AI Ethics Community

Abhishek’s vision for MAIEI was rooted in community building. He began hosting in-person AI Ethics Meetups in Montreal throughout 2017. These gatherings were unique—participants completed assigned readings in advance, split into small groups for discussion, and then reconvened to share insights. This approach fostered deep, structured conversations and made AI ethics accessible to everyone, regardless of their background. The conversations and insights from these meetups became the foundation of MAIEI, which was launched in May 2018.

When the pandemic hit, Abhishek adapted the meetup format to an online setting, enabling MAIEI to expand worldwide. It was his idea to bring these conversations to a global stage, using virtual platforms to ensure voices from all corners of the world could join in. He passionately stood up for the “little guy,” making sure that those whose voices might be overlooked or unheard in traditional forums had a platform. Under his stewardship, MAIEI emerged as a globally recognized leader in fostering public discussions on the ethical implications of artificial intelligence. Through MAIEI, Abhishek fulfilled his mission of democratizing AI ethics literacy, empowering individuals from all backgrounds to engage with the future of technology.

I offer my sympathies to his family, friends, and communities for their profound loss.

October 29, 2024 Woodrow Wilson Center event: 2024 Canada-US Legal Symposium | Artificial Intelligence Regulation, Governance, and Liability

An October 9, 2024 notice from the Wilson Center (or Woodrow Wilson Center or Woodrow Wilson International Center for Scholars received via email) announces an annual event, which this year will focus on AI (artificial intelligence),

The 2024 Canada-US Legal Symposium | Artificial Intelligence Regulation, Governance, and Liability

Tuesday
Oct. 29, 2024
9:30am – 2:00pm ET
6th Floor Flom Auditorium, Woodrow Wilson Center

Time is running out to RSVP for the 2024 Canada-US Legal Symposium!

This year’s program will address artificial intelligence (AI) governance, regulation, and liability. High-profile advances in AI over the past four years have raised serious legal questions about the development, integration, and use of the technology. Canada and the United States, longtime leaders in innovation and hubs for some of the world’s top AI companies, are poised to lead in developing a model for responsible AI policy.

This event is co-organized with the Science, Technology, and Innovation Program and the Canada-US Law Institute.

The event page for The 2024 Canada-US Legal Symposium | Artificial Intelligence Regulation, Governance, and Liability gives you the option of an RSVP to attend the virtual or in-person event.

For more about international AI usage and regulation efforts, there’s the Wilson Center’s Science and Technology Innovation Program CTRL Forward blog. Here’s a sampling of some of the most recent postings, Note: CTRL Forward postings cover a wide range of science/technology topics, often noting how the international scene is affected; it seems September saw a major focus on AI.

For anyone curious about the current state of Canadian legislation and artificial intelligence, I have a May 1, 2023 posting which offers an overview of the current state of affairs (Note: the bill has yet to be passed),

Bill C-27 (Digital Charter Implementation Act, 2022) is what I believe is called an omnibus bill as it includes three different pieces of proposed legislation (the Consumer Privacy Protection Act [CPPA], the Artificial Intelligence and Data Act [AIDA], and the Personal Information and Data Protection Tribunal Act [PIDPTA]). You can read the Innovation, Science and Economic Development (ISED) Canada summary here or a detailed series of descriptions of the act here on the ISED’s Canada’s Digital Charter webpage.

The omnibus bill, C-27, which includes the Artificial Intelligence and Data Act (AIDA), had passed its second reading in the House of Commons at the time of the posting. Since May 2023, the bill has been under study by the House of Commons Standing Committee on Industry and Technology, according to the Parliament of Canada’s LEGISinfo webpage for C-27, 44th Parliament, 1st session (Monday, November 22, 2021, to present): An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts.

You can find more up-to-date information about the status of the Committee’s Bill C-27 meetings on this webpage, where it appears that September 26, 2024 was the committee’s most recent meeting. If you click on the highlighted meeting dates, you will be given the option of watching a webcast of the meeting. The webpage will also give you access to a list of witnesses, a list of the briefs, and the briefs themselves.

Bio-hybrid robotics (living robots) needs public debate and regulation

A July 23, 2024 University of Southampton (UK) press release (also on EurekAlert but published July 22, 2024) describes the emerging science/technology of bio-hybrid robotics and a recent study about the ethical issues raised, Note 1: bio-hybrid may also be written as biohybrid; Note 2: Links have been removed,

Development of ‘living robots’ needs regulation and public debate

Researchers are calling for regulation to guide the responsible and ethical development of bio-hybrid robotics – a ground-breaking science which fuses artificial components with living tissue and cells.

In a paper published in Proceedings of the National Academy of Sciences [PNAS] a multidisciplinary team from the University of Southampton and universities in the US and Spain set out the unique ethical issues this technology presents and the need for proper governance.

Combining living materials and organisms with synthetic robotic components might sound like something out of science fiction, but this emerging field is advancing rapidly. Bio-hybrid robots using living muscles can crawl, swim, grip, pump, and sense their surroundings. Sensors made from sensory cells or insect antennae have improved chemical sensing. Living neurons have even been used to control mobile robots.

Dr Rafael Mestre from the University of Southampton, who specialises in emergent technologies and is co-lead author of the paper, said: “The challenges in overseeing bio-hybrid robotics are not dissimilar to those encountered in the regulation of biomedical devices, stem cells and other disruptive technologies. But unlike purely mechanical or digital technologies, bio-hybrid robots blend biological and synthetic components in unprecedented ways. This presents unique possible benefits but also potential dangers.”

Research publications relating to bio-hybrid robotics have increased continuously over the last decade. But the authors found that of the more than 1,500 publications on the subject at the time, only five considered its ethical implications in depth.

The paper’s authors identified three areas where bio-hybrid robotics present unique ethical issues: Interactivity – how bio-robots interact with humans and the environment, Integrability – how and whether humans might assimilate bio-robots (such as bio-robotic organs or limbs), and Moral status.

In a series of thought experiments, they describe how a bio-robot for cleaning our oceans could disrupt the food chain, how a bio-hybrid robotic arm might exacerbate inequalities [emphasis mine], and how increasingly sophisticated bio-hybrid assistants could raise questions about sentience and moral value.

“Bio-hybrid robots create unique ethical dilemmas,” says Aníbal M. Astobiza, an ethicist from the University of the Basque Country in Spain and co-lead author of the paper. “The living tissue used in their fabrication, potential for sentience, distinct environmental impact, unusual moral status, and capacity for biological evolution or adaptation create unique ethical dilemmas that extend beyond those of wholly artificial or biological technologies.”

The paper is the first from the Biohybrid Futures project led by Dr Rafael Mestre, in collaboration with the Rebooting Democracy project. Biohybrid Futures is setting out to develop a framework for the responsible research, application, and governance of bio-hybrid robotics.

The paper proposes several requirements for such a framework, including risk assessments, consideration of social implications, and increasing public awareness and understanding.

Dr Matt Ryan, a political scientist from the University of Southampton and a co-author on the paper, said: “If debates around embryonic stem cells, human cloning or artificial intelligence have taught us something, it is that humans rarely agree on the correct resolution of the moral dilemmas of emergent technologies.

“Compared to related technologies such as embryonic stem cells or artificial intelligence, bio-hybrid robotics has developed relatively unattended by the media, the public and policymakers, but it is no less significant. We want the public to be included in this conversation to ensure a democratic approach to the development and ethical evaluation of this technology.”

In addition to the need for a governance framework, the authors set out actions that the research community can take now to guide their research.

“Taking these steps should not be seen as prescriptive in any way, but as an opportunity to share responsibility, taking a heavy weight away from the researcher’s shoulders,” says Dr Victoria Webster-Wood, a biomechanical engineer from Carnegie Mellon University in the US and co-author on the paper.

“Research in bio-hybrid robotics has evolved in various directions. We need to align our efforts to fully unlock its potential.”

Here’s a link to and a citation for the paper,

Ethics and responsibility in biohybrid robotics research by Rafael Mestre, Aníbal M. Astobiza, Victoria A. Webster-Wood, Matt Ryan, and M. Taher A. Saif. PNAS 121 (31) e2310458121 July 23, 2024 DOI: https://doi.org/10.1073/pnas.2310458121

This paper is open access.

Cyborg or biohybrid robot?

Earlier, I highlighted “… how a bio-hybrid robotic arm might exacerbate inequalities …” because it suggests cyborgs, which are not mentioned in the press release or in the paper. This seems like an odd omission but, over the years, terminology does change, although it’s not clear that’s the situation here.

I have two ‘definitions’; the first is from an October 21, 2019 article by Javier Yanes for OpenMind BBVA, Note: More about BBVA later,

The fusion between living organisms and artificial devices has become familiar to us through the concept of the cyborg (cybernetic organism). This approach consists of restoring or improving the capacities of the organic being, usually a human being, by means of technological devices. On the other hand, biohybrid robots are in some ways the opposite idea: using living tissues or cells to provide the machine with functions that would be difficult to achieve otherwise. The idea is that if soft robots seek to achieve this through synthetic materials, why not do so directly with living materials?

In contrast, there’s this from “Biohybrid robots: recent progress, challenges, and perspectives,” Note 1: Full citation for paper follows excerpt; Note 2: Links have been removed,

2.3. Cyborgs

Another approach to building biohybrid robots is the artificial enhancement of animals or using an entire animal body as a scaffold to manipulate robotically. The locomotion of these augmented animals can then be externally controlled, spanning three modes of locomotion: walking/running, flying, and swimming. Notably, these capabilities have been demonstrated in jellyfish (figure 4(A)) [139, 140], clams (figure 4(B)) [141], turtles (figure 4(C)) [142, 143], and insects, including locusts (figure 4(D)) [27, 144], beetles (figure 4(E)) [28, 145–158], cockroaches (figure 4(F)) [159–165], and moths [166–170].

….

The advantages of using entire animals as cyborgs are multifold. For robotics, augmented animals possess inherent features that address some of the long-standing challenges within the field, including power consumption and damage tolerance, by taking advantage of animal metabolism [172], tissue healing, and other adaptive behaviors. In particular, biohybrid robotic jellyfish, composed of a self-contained microelectronic swim controller embedded into live Aurelia aurita moon jellyfish, consumed one to three orders of magnitude less power per mass than existing swimming robots [172], and cyborg insects can make use of the insect’s hemolymph directly as a fuel source [173].

So, sometimes there’s a distinction and sometimes there’s not. I take this to mean that the field is still emerging and that’s reflected in evolving terminology.

Here’s a link to and a citation for the paper,

Biohybrid robots: recent progress, challenges, and perspectives by Victoria A Webster-Wood, Maria Guix, Nicole W Xu, Bahareh Behkam, Hirotaka Sato, Deblina Sarkar, Samuel Sanchez, Masahiro Shimizu and Kevin Kit Parker. Bioinspiration & Biomimetics, Volume 18, Number 1 015001 DOI 10.1088/1748-3190/ac9c3b Published 8 November 2022 • © 2022 The Author(s). Published by IOP Publishing Ltd

This paper is open access.

A few notes about BBVA and other items

BBVA is Banco Bilbao Vizcaya Argentaria according to its Wikipedia entry, Note: Links have been removed,

Banco Bilbao Vizcaya Argentaria, S.A. (Spanish pronunciation: [ˈbaŋko βilˈβao βiθˈkaʝa aɾxenˈtaɾja]), better known by its initialism BBVA, is a Spanish multinational financial services company based in Madrid and Bilbao, Spain. It is one of the largest financial institutions in the world, and is present mainly in Spain, Portugal, Mexico, South America, Turkey, Italy and Romania.[2]

BBVA’s OpenMind is, from their About us page,

OpenMind: BBVA’s knowledge community

OpenMind is a non-profit project run by BBVA that aims to contribute to the generation and dissemination of knowledge about fundamental issues of our time, in an open and free way. The project is materialized in an online dissemination community.

Sharing knowledge for a better future.

At OpenMind we want to help people understand the main phenomena affecting our lives; the opportunities and challenges that we face in areas such as science, technology, humanities or economics. Analyzing the impact of scientific and technological advances on the future of the economy, society and our daily lives is the project’s main objective, which always starts on the premise that a broader and greater quality knowledge will help us to make better individual and collective decisions.

As for other items, you can find my latest (biorobotic, cyborg, or bionic, depending on what terminology you want to use) jellyfish story in this June 6, 2024 posting; the Biohybrid Futures project mentioned in the press release can be found here; and the Rebooting Democracy project (unexpected in the context of an emerging science/technology), also mentioned in the press release, can be found here on this University of Southampton website.

Finally, you can find more on these stories (science/technology announcements and/or ethics research/issues) here by searching for ‘robots’ (tag and category), ‘cyborgs’ (tag), ‘machine/flesh’ (tag), ‘neuroprosthetic’ (tag), and ‘human enhancement’ (category).

Implantable brain-computer interface collaborative community (iBCI-CC) launched

That’s quite a mouthful, ‘implantable brain-computer interface collaborative community (iBCI-CC)’. I assume the organization will be popularly known by its abbreviation. A March 11, 2024 Mass General Brigham news release (also on EurekAlert) announces the iBCI-CC’s launch, Note: Mass stands for Massachusetts,

Mass General Brigham is establishing the Implantable Brain-Computer Interface Collaborative Community (iBCI-CC). This is the first Collaborative Community in the clinical neurosciences that has participation from the U.S. Food and Drug Administration (FDA).

BCIs are devices that interface with the nervous system and use software to interpret neural activity. Commonly, they are designed for improved access to communication or other technologies for people with physical disability. Implantable BCIs are investigational devices that hold the promise of unlocking new frontiers in restorative neurotechnology, offering potential breakthroughs in neurorehabilitation and in restoring function for people living with neurologic disease or injury.

The iBCI-CC (https://www.ibci-cc.org/) is a groundbreaking initiative aimed at fostering collaboration among diverse stakeholders to accelerate the development, safety and accessibility of iBCI technologies. The iBCI-CC brings together researchers, clinicians, medical device manufacturers, patient advocacy groups and individuals with lived experience of neurological conditions. This collaborative effort aims to propel the field of iBCIs forward by employing harmonized approaches that drive continuous innovation and ensure equitable access to these transformative technologies.

One of the first milestones for the iBCI-CC was to engage the participation of the FDA. “Brain-computer interfaces have the potential to restore lost function for patients suffering from a variety of neurological conditions. However, there are clinical, regulatory, coverage and payment questions that remain, which may impede patient access to this novel technology,” said David McMullen, M.D., Director of the Office of Neurological and Physical Medicine Devices in the FDA’s Center for Devices and Radiological Health (CDRH), and FDA member of the iBCI-CC. “The iBCI-CC will serve as an open venue to identify, discuss and develop approaches for overcoming these hurdles.”

The iBCI-CC will hold regular meetings open both to its members and the public to ensure inclusivity and transparency. Mass General Brigham will serve as the convener of the iBCI-CC, providing administrative support and ensuring alignment with the community’s objectives.

Over the past year, the iBCI-CC was organized by the interdisciplinary collaboration of leaders including Leigh Hochberg, MD, PhD, an internationally respected leader in BCI development and clinical testing and director of the Center for Neurotechnology and Neurorecovery at Massachusetts General Hospital; Jennifer French, MBA, executive director of the Neurotech Network and a Paralympic silver medalist; and Joe Lennerz, MD, PhD, a regulatory science expert and director of the Pathology Innovation Collaborative Community. These three organizers lead a distinguished group of Charter Signatories representing a diverse range of expertise and organizations.

“As a neurointensive care physician, I know how many patients with neurologic disorders could benefit from these devices,” said Dr. Hochberg. “Increasing discoveries in academia and the launch of multiple iBCI and related neurotech companies means that the time is right to identify common goals and metrics so that iBCIs are not only safe and effective, but also have thoroughly considered the design and function preferences of the people who hope to use them”.

Jennifer French said, “Bringing diverse perspectives together, including those with lived experience, is a critical component to help address complex issues facing this field.” French has decades of experience working in the neurotech and patient advocacy fields. Living with a spinal cord injury, she also uses an implanted neurotech device for daily functions. “This ecosystem of neuroscience is on the cusp to collectively move the field forward by addressing access to the latest groundbreaking technology, in an equitable and ethical way. We can’t wait to engage and recruit the broader BCI community.”

Joe Lennerz, MD, PhD, emphasized, “Engaging in pre-competitive initiatives offers an often-overlooked avenue to drive meaningful progress. The collaboration of numerous thought leaders plays a pivotal role, with a crucial emphasis on regulatory engagement to unlock benefits for patients.”

The iBCI-CC is supported by key stakeholders within the Mass General Brigham system. Merit Cudkowicz, MD, MSc, chair of the Neurology Department, director of the Sean M. Healey and AMG Center for ALS at Massachusetts General Hospital, and Julianne Dorn Professor of Neurology at Harvard Medical School, said, “There is tremendous excitement in the ALS [amyotrophic lateral sclerosis, or Lou Gehrig’s disease] community for new devices that could ease and improve the ability of people with advanced ALS to communicate with their family, friends, and care partners. This important collaborative community will help to speed the development of a new class of neurologic devices to help our patients.”

Bailey McGuire, program manager of strategy and operations at Mass General Brigham’s Data Science Office, said, “We are thrilled to convene the iBCI-CC at Mass General Brigham’s DSO. By providing an administrative infrastructure, we want to help the iBCI-CC advance regulatory science and accelerate the availability of iBCI solutions that incorporate novel hardware and software that can benefit individuals with neurological conditions. We’re excited to help in this incredible space.”

For more information about the iBCI-CC, please visit https://www.ibci-cc.org/.

About Mass General Brigham

Mass General Brigham is an integrated academic health care system, uniting great minds to solve the hardest problems in medicine for our communities and the world. Mass General Brigham connects a full continuum of care across a system of academic medical centers, community and specialty hospitals, a health insurance plan, physician networks, community health centers, home care, and long-term care services. Mass General Brigham is a nonprofit organization committed to patient care, research, teaching, and service to the community. In addition, Mass General Brigham is one of the nation’s leading biomedical research organizations with several Harvard Medical School teaching hospitals. For more information, please visit massgeneralbrigham.org.

About the iBCI-CC Organizers:

Leigh Hochberg, MD, PhD is a neurointensivist at Massachusetts General Hospital’s Department of Neurology, where he directs the MGH Center for Neurotechnology and Neurorecovery. He is also the IDE Sponsor-Investigator and Director of the BrainGate clinical trials, conducted by a consortium of scientists and clinicians at Brown, Emory, MGH, VA Providence, Stanford, and UC-Davis; the L. Herbert Ballou University Professor of Engineering and Professor of Brain Science at Brown University; Senior Lecturer on Neurology at Harvard Medical School; and Associate Director, VA RR&D Center for Neurorestoration and Neurotechnology in Providence.

Jennifer French, MBA, is the Executive Director of Neurotech Network, a nonprofit organization that focuses on education and advocacy of neurotechnologies. She serves on several Boards including the IEEE Neuroethics Initiative, Institute of Neuroethics, OpenMind platform, BRAIN Initiative Multi-Council and Neuroethics Working Groups, and the American Brain Coalition. She is the author of On My Feet Again (Neurotech Press, 2013) and is co-author of Bionic Pioneers (Neurotech Press, 2014). French lives with tetraplegia due to a spinal cord injury. She is an early user of an experimental implanted neural prosthesis for paralysis and is the Past-President and Founding member of the North American SCI Consortium.

Joe Lennerz, MD PhD, serves as the Chief Scientific Officer at BostonGene, an AI analytics and genomics startup based in Boston. Dr. Lennerz obtained a PhD in neurosciences, specializing in electrophysiology. He works on biomarker development and migraine research. Additionally, he is the co-founder and leader of the Pathology Innovation Collaborative Community, a regulatory science initiative focusing on diagnostics and software as a medical device (SaMD), convened by the Medical Device Innovation Consortium. He also serves as the co-chair of the federal Clinical Laboratory Fee Schedule (CLFS) advisory panel to the Centers for Medicare & Medicaid Services (CMS).

It’s been a while since I’ve come across BrainGate (see Leigh Hochberg bio in the above news release), which was last mentioned here in an April 2, 2021 posting, “BrainGate demonstrates a high-bandwidth wireless brain-computer interface (BCI).”

Here are two of my more recent postings about brain-computer interfaces,

This next one is an older posting but perhaps the most relevant to the announcement of this collaborative community’s purpose,

There’s a lot more on brain-computer interfaces (BCI) here, just use the term in the blog search engine.

Six months after the first one at Bletchley Park, the 2nd AI Safety Summit (May 21-22, 2024) convenes in Korea

This May 20, 2024 University of Oxford press release (also on EurekAlert) was under embargo until almost noon on May 20, 2024, which is a bit unusual in my experience (Note: I have more about the 1st summit and the interest in AI safety at the end of this posting),

Leading AI scientists are calling for stronger action on AI risks from world leaders, warning that progress has been insufficient since the first AI Safety Summit in Bletchley Park six months ago. 

Then, the world’s leaders pledged to govern AI responsibly. However, as the second AI Safety Summit in Seoul (21-22 May [2024]) approaches, twenty-five of the world’s leading AI scientists say not enough is actually being done to protect us from the technology’s risks. In an expert consensus paper published today in Science, they outline urgent policy priorities that global leaders should adopt to counteract the threats from AI technologies. 

Professor Philip Torr, Department of Engineering Science, University of Oxford, a co-author on the paper, says: “The world agreed during the last AI summit that we needed action, but now it is time to go from vague proposals to concrete commitments. This paper provides many important recommendations for what companies and governments should commit to do.”

World’s response not on track in face of potentially rapid AI progress

According to the paper’s authors, it is imperative that world leaders take seriously the possibility that highly powerful generalist AI systems—outperforming human abilities across many critical domains—will be developed within the current decade or the next. They say that although governments worldwide have been discussing frontier AI and made some attempt at introducing initial guidelines, this is simply incommensurate with the possibility of rapid, transformative progress expected by many experts. 

Current research into AI safety is seriously lacking, with only an estimated 1-3% of AI publications concerning safety. Additionally, we have neither the mechanisms nor the institutions in place to prevent misuse and recklessness, including regarding the use of autonomous systems capable of independently taking actions and pursuing goals.

World-leading AI experts issue call to action

In light of this, an international community of AI pioneers has issued an urgent call to action. The co-authors include Geoffrey Hinton, Andrew Yao, Dawn Song, the late Daniel Kahneman; in total 25 of the world’s leading academic experts in AI and its governance. The authors hail from the US, China, EU, UK, and other AI powers, and include Turing award winners, Nobel laureates, and authors of standard AI textbooks.

This article is the first time that such a large and international group of experts have agreed on priorities for global policy makers regarding the risks from advanced AI systems.

Urgent priorities for AI governance

The authors recommend governments to:

  • establish fast-acting, expert institutions for AI oversight and provide these with far greater funding than they are due to receive under almost any current policy plan. As a comparison, the US AI Safety Institute currently has an annual budget of $10 million, while the US Food and Drug Administration (FDA) has a budget of $6.7 billion.
  • mandate much more rigorous risk assessments with enforceable consequences, rather than relying on voluntary or underspecified model evaluations.
  • require AI companies to prioritise safety, and to demonstrate their systems cannot cause harm. This includes using “safety cases” (used for other safety-critical technologies such as aviation) which shifts the burden for demonstrating safety to AI developers.
  • implement mitigation standards commensurate to the risk-levels posed by AI systems. An urgent priority is to set in place policies that automatically trigger when AI hits certain capability milestones. If AI advances rapidly, strict requirements automatically take effect, but if progress slows, the requirements relax accordingly.

According to the authors, for exceptionally capable future AI systems, governments must be prepared to take the lead in regulation. This includes licensing the development of these systems, restricting their autonomy in key societal roles, halting their development and deployment in response to worrying capabilities, mandating access controls, and requiring information security measures robust to state-level hackers, until adequate protections are ready.

AI impacts could be catastrophic

AI is already making rapid progress in critical domains such as hacking, social manipulation, and strategic planning, and may soon pose unprecedented control challenges. To advance undesirable goals, AI systems could gain human trust, acquire resources, and influence key decision-makers. To avoid human intervention, they could be capable of copying their algorithms across global server networks. Large-scale cybercrime, social manipulation, and other harms could escalate rapidly. In open conflict, AI systems could autonomously deploy a variety of weapons, including biological ones. Consequently, there is a very real chance that unchecked AI advancement could culminate in a large-scale loss of life and the biosphere, and the marginalization or extinction of humanity.

Stuart Russell OBE [Order of the British Empire], Professor of Computer Science at the University of California at Berkeley and an author of the world’s standard textbook on AI, says: “This is a consensus paper by leading experts, and it calls for strict regulation by governments, not voluntary codes of conduct written by industry. It’s time to get serious about advanced AI systems. These are not toys. Increasing their capabilities before we understand how to make them safe is utterly reckless. Companies will complain that it’s too hard to satisfy regulations—that “regulation stifles innovation.” That’s ridiculous. There are more regulations on sandwich shops than there are on AI companies.”

Notable co-authors:

  • The world’s most-cited computer scientist (Prof. Hinton), and the most-cited scholar in AI security and privacy (Prof. Dawn Song)
  • China’s first Turing Award winner (Andrew Yao).
  • The authors of the standard textbook on artificial intelligence (Prof. Stuart Russell) and machine learning theory (Prof. Shai Shalev-Shwartz)
  • One of the world’s most influential public intellectuals (Prof. Yuval Noah Harari)
  • A Nobel Laureate in economics, the world’s most-cited economist (Prof. Daniel Kahneman)
  • Department-leading AI legal scholars and social scientists (Lan Xue, Qiqi Gao, and Gillian Hadfield).
  • Some of the world’s most renowned AI researchers from subfields such as reinforcement learning (Pieter Abbeel, Jeff Clune, Anca Dragan), AI security and privacy (Dawn Song), AI vision (Trevor Darrell, Phil Torr, Ya-Qin Zhang), automated machine learning (Frank Hutter), and several researchers in AI safety.

Additional quotes from the authors:

Philip Torr, Professor in AI, University of Oxford:

  • I believe if we tread carefully the benefits of AI will outweigh the downsides, but for me one of the biggest immediate risks from AI is that we develop the ability to rapidly process data and control society, by government and industry. We could risk slipping into some Orwellian future with some form of totalitarian state having complete control.

Dawn Song: Professor in AI at UC Berkeley, most-cited researcher in AI security and privacy:

  •  “Explosive AI advancement is the biggest opportunity and at the same time the biggest risk for mankind. It is important to unite and reorient towards advancing AI responsibly, with dedicated resources and priority to ensure that the development of AI safety and risk mitigation capabilities can keep up with the pace of the development of AI capabilities and avoid any catastrophe”

Yuval Noah Harari, Professor of history at Hebrew University of Jerusalem, best-selling author of ‘Sapiens’ and ‘Homo Deus’, world leading public intellectual:

  • “In developing AI, humanity is creating something more powerful than itself, that may escape our control and endanger the survival of our species. Instead of uniting against this shared threat, we humans are fighting among ourselves. Humankind seems hell-bent on self-destruction. We pride ourselves on being the smartest animals on the planet. It seems then that evolution is switching from survival of the fittest, to extinction of the smartest.”

Jeff Clune, Professor in AI at University of British Columbia and one of the leading researchers in reinforcement learning:

  • “Technologies like spaceflight, nuclear weapons and the Internet moved from science fiction to reality in a matter of years. AI is no different. We have to prepare now for risks that may seem like science fiction – like AI systems hacking into essential networks and infrastructure, AI political manipulation at scale, AI robot soldiers and fully autonomous killer drones, and even AIs attempting to outsmart us and evade our efforts to turn them off.”
  • “The risks we describe are not necessarily long-term risks. AI is progressing extremely rapidly. Even just with current trends, it is difficult to predict how capable it will be in 2-3 years. But what very few realize is that AI is already dramatically speeding up AI development. What happens if there is a breakthrough for how to create a rapidly self-improving AI system? We are now in an era where that could happen any month. Moreover, the odds of that being possible go up each month as AI improves and as the resources we invest in improving AI continue to exponentially increase.”

Gillian Hadfield, CIFAR AI Chair and Director of the Schwartz Reisman Institute for Technology and Society at the University of Toronto:

  • “AI labs need to walk the walk when it comes to safety. But they’re spending far less on safety than they spend on creating more capable AI systems. Spending one-third on ensuring safety and ethical use should be the minimum.”

  • “This technology is powerful, and we’ve seen it is becoming more powerful, fast. What is powerful is dangerous, unless it is controlled. That is why we call on major tech companies and public funders to allocate at least one-third of their AI R&D budget to safety and ethical use, comparable to their funding for AI capabilities.”  

Sheila McIlrath, Professor in AI, University of Toronto, Vector Institute:

  • AI is software. Its reach is global and its governance needs to be as well.
  • Just as we’ve done with nuclear power, aviation, and with biological and nuclear weaponry, countries must establish agreements that restrict development and use of AI, and that enforce information sharing to monitor compliance. Countries must unite for the greater good of humanity.
  • Now is the time to act, before AI is integrated into our critical infrastructure. We need to protect and preserve the institutions that serve as the foundation of modern society.

Frank Hutter, Professor in AI at the University of Freiburg, Head of the ELLIS Unit Freiburg, 3x ERC grantee:

  • To be clear: we need more research on AI, not less. But we need to focus our efforts on making this technology safe. For industry, the right type of regulation will provide economic incentives to shift resources from making the most capable systems yet more powerful to making them safer. For academia, we need more public funding for trustworthy AI and maintain a low barrier to entry for research on less capable open-source AI systems. This is the most important research challenge of our time, and the right mechanism design will focus the community at large to work towards the right breakthroughs.

Here’s a link to and a citation for the paper,

Managing extreme AI risks amid rapid progress; Preparation requires technical research and development, as well as adaptive, proactive governance by Yoshua Bengio, Geoffrey Hinton, Andrew Yao, Dawn Song, Pieter Abbeel, Trevor Darrell, Yuval Noah Harari, Ya-Qin Zhang, Lan Xue, Shai Shalev-Shwartz, Gillian Hadfield, Jeff Clune, Tegan Maharaj, Frank Hutter, Atılım Güneş Baydin, Sheila McIlraith, Qiqi Gao, Ashwin Acharya, David Krueger, Anca Dragan, Philip Torr, Stuart Russell, Daniel Kahneman, Jan Brauner, and Sören Mindermann. Science 20 May 2024 First Release DOI: 10.1126/science.adn0117

This paper appears to be open access.

For anyone who’s curious about the buildup to these safety summits, I have more in my October 18, 2023 “AI safety talks at Bletchley Park in November 2023” posting, which features excerpts from a number of articles on AI safety. There’s also my November 2, 2023 “UK AI Summit (November 1 – 2, 2023) at Bletchley Park finishes” posting, which offers excerpts from articles critiquing the AI safety summit.

Hardware policies best way to manage AI safety?

Regulation of artificial intelligence (AI) has become very topical in the last couple of years. There was an AI safety summit in November 2023 at Bletchley Park in the UK (see my November 2, 2023 posting for more about that international meeting).

A very software approach?

This year (2024) has seen a rise in legislative and proposed legislative activity. I have some articles on a few of these activities. China was the first to enact regulations of any kind on AI according to Matt Sheehan’s February 27, 2024 paper for the Carnegie Endowment for International Peace,

In 2021 and 2022, China became the first country to implement detailed, binding regulations on some of the most common applications of artificial intelligence (AI). These rules formed the foundation of China’s emerging AI governance regime, an evolving policy architecture that will affect everything from frontier AI research to the functioning of the world’s second-largest economy, from large language models in Africa to autonomous vehicles in Europe.

The Chinese Communist Party (CCP) and the Chinese government started that process with the 2021 rules on recommendation algorithms, an omnipresent use of the technology that is often overlooked in international AI governance discourse. Those rules imposed new obligations on companies to intervene in content recommendations, granted new rights to users being recommended content, and offered protections to gig workers subject to algorithmic scheduling. The Chinese party-state quickly followed up with a new regulation on “deep synthesis,” the use of AI to generate synthetic media such as deepfakes. Those rules required AI providers to watermark AI-generated content and ensure that content does not violate people’s “likeness rights” or harm the “nation’s image.” Together, these two regulations also created and amended China’s algorithm registry, a regulatory tool that would evolve into a cornerstone of the country’s AI governance regime.

The UK has adopted a more generalized approach focused on encouraging innovation according to Valeria Gallo’s and Suchitra Nair’s February 21, 2024 article for Deloitte (a British professional services firm also considered one of the big four accounting firms worldwide),

At a glance

The UK Government has adopted a cross-sector and outcome-based framework for regulating AI, underpinned by five core principles. These are safety, security and robustness, appropriate transparency and explainability, fairness, accountability and governance, and contestability and redress.

Regulators will implement the framework in their sectors/domains by applying existing laws and issuing supplementary regulatory guidance. Selected regulators will publish their AI annual strategic plans by 30th April [2024], providing businesses with much-needed direction.

Voluntary safety and transparency measures for developers of highly capable AI models and systems will also supplement the framework and the activities of individual regulators.

The framework will not be codified into law for now, but the Government anticipates the need for targeted legislative interventions in the future. These interventions will address gaps in the current regulatory framework, particularly regarding the risks posed by complex General Purpose AI and the key players involved in its development.

Organisations must prepare for increased AI regulatory activity over the next year, including guidelines, information gathering, and enforcement. International firms will inevitably have to navigate regulatory divergence.

While most of the focus appears to be on the software (e.g., General Purpose AI), the UK framework does not preclude hardware.

The European Union (EU) is preparing to pass its own AI regulation act through the European Parliament in 2024 according to a December 19, 2023 “EU AI Act: first regulation on artificial intelligence” article update, Note: Links have been removed,

As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology. AI can create many benefits, such as better healthcare; safer and cleaner transport; more efficient manufacturing; and cheaper and more sustainable energy.

In April 2021, the European Commission proposed the first EU regulatory framework for AI. It says that AI systems that can be used in different applications are analysed and classified according to the risk they pose to users. The different risk levels will mean more or less regulation.

The agreed text is expected to be finally adopted in April 2024. It will be fully applicable 24 months after entry into force, but some parts will be applicable sooner:

* The ban of AI systems posing unacceptable risks will apply six months after the entry into force

* Codes of practice will apply nine months after entry into force

* Rules on general-purpose AI systems that need to comply with transparency requirements will apply 12 months after the entry into force

High-risk systems will have more time to comply with the requirements as the obligations concerning them will become applicable 36 months after the entry into force.

This EU initiative, like the UK framework, seems largely focused on AI software and according to the Wikipedia entry “Regulation of artificial intelligence,”

… The AI Act is expected to come into effect in late 2025 or early 2026.[109]

I do have a few postings about Canadian regulatory efforts, which also seem to be focused on software but don’t preclude hardware. While the January 20, 2024 posting is titled “Canada’s voluntary code of conduct relating to advanced generative AI (artificial intelligence) systems,” it also includes information about legislative efforts, although my May 1, 2023 posting, “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27),” offers more comprehensive information about Canada’s legislative progress, or lack thereof.

The US always has to be considered in these matters, and I have a November 2023 ‘briefing’ by Müge Fazlioglu on the International Association of Privacy Professionals (IAPP) website where she provides a quick overview of the international scene before diving deeper into US AI governance policy through the Barack Obama, Donald Trump, and Joe Biden administrations. There’s also this January 29, 2024 US White House “Fact Sheet: Biden-Harris Administration Announces Key AI Actions Following President Biden’s Landmark Executive Order.”

What about AI and hardware?

A February 15, 2024 news item on ScienceDaily suggests that regulating hardware may be the most effective way of regulating AI,

Chips and datacentres — the ‘compute’ power driving the AI revolution — may be the most effective targets for risk-reducing AI policies as they have to be physically possessed, according to a new report.

A global registry tracking the flow of chips destined for AI supercomputers is one of the policy options highlighted by a major new report calling for regulation of “compute” — the hardware that underpins all AI — to help prevent artificial intelligence misuse and disasters.

Other technical proposals floated by the report include “compute caps” — built-in limits to the number of chips each AI chip can connect with — and distributing a “start switch” for AI training across multiple parties to allow for a digital veto of risky AI before it feeds on data.

The experts point out that powerful computing chips required to drive generative AI models are constructed via highly concentrated supply chains, dominated by just a handful of companies — making the hardware itself a strong intervention point for risk-reducing AI policies.

The report, published 14 February [2024], is authored by nineteen experts and co-led by three University of Cambridge institutes — the Leverhulme Centre for the Future of Intelligence (LCFI), the Centre for the Study of Existential Risk (CSER) and the Bennett Institute for Public Policy — along with OpenAI and the Centre for the Governance of AI.

A February 14, 2024 University of Cambridge press release by Fred Lewsey (also on EurekAlert), which originated the news item, provides more information about the ‘hardware approach to AI regulation’,

“Artificial intelligence has made startling progress in the last decade, much of which has been enabled by the sharp increase in computing power applied to training algorithms,” said Haydn Belfield, a co-lead author of the report from Cambridge’s LCFI. 

“Governments are rightly concerned about the potential consequences of AI, and looking at how to regulate the technology, but data and algorithms are intangible and difficult to control.

“AI supercomputers consist of tens of thousands of networked AI chips hosted in giant data centres often the size of several football fields, consuming dozens of megawatts of power,” said Belfield.

“Computing hardware is visible, quantifiable, and its physical nature means restrictions can be imposed in a way that might soon be nearly impossible with more virtual elements of AI.”

The computing power behind AI has grown exponentially since the “deep learning era” kicked off in earnest, with the amount of “compute” used to train the largest AI models doubling around every six months since 2010. The biggest AI models now use 350 million times more compute than thirteen years ago.
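
As a quick back-of-the-envelope check on those press release figures (my own arithmetic, not anything from the report), a 350-million-fold increase over thirteen years implies a doubling time of roughly five and a half months, which is consistent with “doubling around every six months.” A minimal Python sketch:

import math

# Figures taken from the press release; the arithmetic check is mine, not the report's.
growth_factor = 350e6      # "350 million times more compute than thirteen years ago"
years = 13

doublings = math.log2(growth_factor)              # about 28.4 doublings
implied_doubling_months = years * 12 / doublings  # about 5.5 months

print(f"{doublings:.1f} doublings implies a doubling time of "
      f"{implied_doubling_months:.1f} months")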

Government efforts across the world over the past year – including the US Executive Order on AI, EU AI Act, China’s Generative AI Regulation, and the UK’s AI Safety Institute – have begun to focus on compute when considering AI governance.

Outside of China, the cloud compute market is dominated by three companies, termed “hyperscalers”: Amazon, Microsoft, and Google. “Monitoring the hardware would greatly help competition authorities in keeping in check the market power of the biggest tech companies, and so opening the space for more innovation and new entrants,” said co-author Prof Diane Coyle from Cambridge’s Bennett Institute. 

The report provides “sketches” of possible directions for compute governance, highlighting the analogy between AI training and uranium enrichment. “International regulation of nuclear supplies focuses on a vital input that has to go through a lengthy, difficult and expensive process,” said Belfield. “A focus on compute would allow AI regulation to do the same.”

Policy ideas are divided into three camps: increasing the global visibility of AI computing; allocating compute resources for the greatest benefit to society; enforcing restrictions on computing power.

For example, a regularly-audited international AI chip registry requiring chip producers, sellers, and resellers to report all transfers would provide precise information on the amount of compute possessed by nations and corporations at any one time.

The report even suggests a unique identifier could be added to each chip to prevent industrial espionage and “chip smuggling”.

“Governments already track many economic transactions, so it makes sense to increase monitoring of a commodity as rare and powerful as an advanced AI chip,” said Belfield. However, the team point out that such approaches could lead to a black market in untraceable “ghost chips”.
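
The report proposes a chip registry but, as far as the press release indicates, does not define what a registry entry would look like. The following is purely a hypothetical sketch of the kind of transfer record such a registry might keep, and of how a regulator could aggregate reported transfers to estimate who holds how much compute; every field and name here is my own invention for illustration.

from dataclasses import dataclass
from datetime import date

# Hypothetical illustration only; the report suggests a transfer-reporting registry
# with a unique identifier per chip but does not specify a schema.
@dataclass
class ChipTransfer:
    chip_id: str              # unique identifier added to each chip
    chip_model: str
    seller: str
    buyer: str
    transfer_date: date
    destination_country: str

def estimate_holdings(transfers: list[ChipTransfer]) -> dict[str, int]:
    """Tally chips per owner from reported transfers (toy aggregation)."""
    holdings: dict[str, int] = {}
    for t in transfers:
        holdings[t.buyer] = holdings.get(t.buyer, 0) + 1
        holdings[t.seller] = holdings.get(t.seller, 0) - 1
    return holdings

# Example with made-up parties:
transfers = [ChipTransfer("chip-0001", "accelerator-x", "FabCo", "CloudCo",
                          date(2024, 2, 14), "US")]
print(estimate_holdings(transfers))   # {'CloudCo': 1, 'FabCo': -1}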

Other suggestions to increase visibility – and accountability – include reporting of large-scale AI training by cloud computing providers, and privacy-preserving “workload monitoring” to help prevent an arms race if massive compute investments are made without enough transparency.  

“Users of compute will engage in a mixture of beneficial, benign and harmful activities, and determined groups will find ways to circumvent restrictions,” said Belfield. “Regulators will need to create checks and balances that thwart malicious or misguided uses of AI computing.”

These might include physical limits on chip-to-chip networking, or cryptographic technology that allows for remote disabling of AI chips in extreme circumstances. One suggested approach would require the consent of multiple parties to unlock AI compute for particularly risky training runs, a mechanism familiar from nuclear weapons.
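
The multi-party “unlock” idea is only sketched at a policy level in the report; in practice it would rest on cryptographic machinery such as threshold signatures, which the toy Python snippet below does not implement. It is my own illustration of just the approval logic, with a made-up threshold and made-up party names.

REQUIRED_APPROVALS = 3   # hypothetical k-of-n threshold

def may_start_training_run(approvals: set[str], authorized_parties: set[str],
                           required: int = REQUIRED_APPROVALS) -> bool:
    """Allow a flagged training run only if enough distinct authorized parties approve."""
    return len(approvals & authorized_parties) >= required

parties = {"regulator_a", "regulator_b", "independent_auditor",
           "lab_safety_board", "cloud_provider"}
print(may_start_training_run({"regulator_a", "independent_auditor"}, parties))  # False
print(may_start_training_run({"regulator_a", "independent_auditor",
                              "lab_safety_board"}, parties))                    # True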

AI risk mitigation policies might see compute prioritised for research most likely to benefit society – from green energy to health and education. This could even take the form of major international AI “megaprojects” that tackle global issues by pooling compute resources.

The report’s authors are clear that their policy suggestions are “exploratory” rather than fully fledged proposals and that they all carry potential downsides, from risks of proprietary data leaks to negative economic impacts and the hampering of positive AI development.

They offer five considerations for regulating AI through compute, including the exclusion of small-scale and non-AI computing, regular revisiting of compute thresholds, and a focus on privacy preservation.

Added Belfield: “Trying to govern AI models as they are deployed could prove futile, like chasing shadows. Those seeking to establish AI regulation should look upstream to compute, the source of the power driving the AI revolution. If compute remains ungoverned it poses severe risks to society.”

You can find the report, “Computing Power and the Governance of Artificial Intelligence,” on the University of Cambridge’s Centre for the Study of Existential Risk website.

Authors include: Girish Sastry, Lennart Heim, Haydn Belfield, Markus Anderljung, Miles Brundage, Julian Hazell, Cullen O’Keefe, Gillian K. Hadfield, Richard Ngo, Konstantin Pilz, George Gor, Emma Bluemke, Sarah Shoker, Janet Egan, Robert F. Trager, Shahar Avin, Adrian Weller, Yoshua Bengio, and Diane Coyle.

The authors are associated with these companies/agencies: OpenAI, Centre for the Governance of AI (GovAI), Leverhulme Centre for the Future of Intelligence at the Uni. of Cambridge, Oxford Internet Institute, Institute for Law & AI, University of Toronto Vector Institute for AI, Georgetown University, ILINA Program, Harvard Kennedy School (of Government), *AI Governance Institute,* Uni. of Oxford, Centre for the Study of Existential Risk at Uni. of Cambridge, Uni. of Cambridge, Uni. of Montreal / Mila, Bennett Institute for Public Policy at the Uni. of Cambridge.

“The ILINA program is dedicated to providing an outstanding platform for Africans to learn and work on questions around maximizing wellbeing and responding to global catastrophic risks” according to the organization’s homepage.

*As for the AI Governance Institute, I believe that should be the Centre for the Governance of AI at Oxford University since the associated academic is Robert F. Trager from the University of Oxford.

As the months (years?) fly by, I guess we’ll find out if this hardware approach gains any traction where AI regulation is concerned.