
Hardware policies best way to manage AI safety?

Regulation of artificial intelligence (AI) has become very topical in the last couple of years. There was an AI safety summit in November 2023 at Bletchley Park in the UK (see my November 2, 2023 posting for more about that international meeting).

A mostly software approach?

This year (2024) has seen a rise in legislative and proposed legislative activity, and I have articles about a few of these efforts. According to Matt Sheehan’s February 27, 2024 paper for the Carnegie Endowment for International Peace, China was the first country to enact binding AI regulations,

In 2021 and 2022, China became the first country to implement detailed, binding regulations on some of the most common applications of artificial intelligence (AI). These rules formed the foundation of China’s emerging AI governance regime, an evolving policy architecture that will affect everything from frontier AI research to the functioning of the world’s second-largest economy, from large language models in Africa to autonomous vehicles in Europe.

The Chinese Communist Party (CCP) and the Chinese government started that process with the 2021 rules on recommendation algorithms, an omnipresent use of the technology that is often overlooked in international AI governance discourse. Those rules imposed new obligations on companies to intervene in content recommendations, granted new rights to users being recommended content, and offered protections to gig workers subject to algorithmic scheduling. The Chinese party-state quickly followed up with a new regulation on “deep synthesis,” the use of AI to generate synthetic media such as deepfakes. Those rules required AI providers to watermark AI-generated content and ensure that content does not violate people’s “likeness rights” or harm the “nation’s image.” Together, these two regulations also created and amended China’s algorithm registry, a regulatory tool that would evolve into a cornerstone of the country’s AI governance regime.

The UK has adopted a more generalized approach focused on encouraging innovation, according to Valeria Gallo and Suchitra Nair’s February 21, 2024 article for Deloitte (a UK-headquartered professional services firm, also considered one of the big four accounting firms worldwide),

At a glance

The UK Government has adopted a cross-sector and outcome-based framework for regulating AI, underpinned by five core principles. These are safety, security and robustness, appropriate transparency and explainability, fairness, accountability and governance, and contestability and redress.

Regulators will implement the framework in their sectors/domains by applying existing laws and issuing supplementary regulatory guidance. Selected regulators will publish their AI annual strategic plans by 30th April [2024], providing businesses with much-needed direction.

Voluntary safety and transparency measures for developers of highly capable AI models and systems will also supplement the framework and the activities of individual regulators.

The framework will not be codified into law for now, but the Government anticipates the need for targeted legislative interventions in the future. These interventions will address gaps in the current regulatory framework, particularly regarding the risks posed by complex General Purpose AI and the key players involved in its development.

Organisations must prepare for increased AI regulatory activity over the next year, including guidelines, information gathering, and enforcement. International firms will inevitably have to navigate regulatory divergence.

While most of the focus appears to be on software (e.g., General Purpose AI), the UK framework does not preclude hardware.

The European Union (EU) is preparing to pass its own AI regulation act through the European Parliament in 2024, according to a December 19, 2023 update of the “EU AI Act: first regulation on artificial intelligence” article. Note: Links have been removed,

As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology. AI can create many benefits, such as better healthcare; safer and cleaner transport; more efficient manufacturing; and cheaper and more sustainable energy.

In April 2021, the European Commission proposed the first EU regulatory framework for AI. It says that AI systems that can be used in different applications are analysed and classified according to the risk they pose to users. The different risk levels will mean more or less regulation.

The agreed text is expected to be finally adopted in April 2024. It will be fully applicable 24 months after entry into force, but some parts will be applicable sooner:

*The ban of AI systems posing unacceptable risks will apply six months after the entry into force

*Codes of practice will apply nine months after entry into force

*Rules on general-purpose AI systems that need to comply with transparency requirements will apply 12 months after the entry into force

High-risk systems will have more time to comply with the requirements as the obligations concerning them will become applicable 36 months after the entry into force.

This EU initiative, like the UK framework, seems largely focused on AI software. According to the Wikipedia entry “Regulation of artificial intelligence,”

… The AI Act is expected to come into effect in late 2025 or early 2026.[109]

I do have a few postings about Canadian regulatory efforts, which also seem to be focused on software but don’t preclude hardware. My January 20, 2024 posting, “Canada’s voluntary code of conduct relating to advanced generative AI (artificial intelligence) systems,” includes information about legislative efforts, although my May 1, 2023 posting, “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27),” offers more comprehensive information about Canada’s legislative progress, or lack thereof.

The US always has to be considered in these matters. A November 2023 ‘briefing’ by Müge Fazlioglu on the International Association of Privacy Professionals (IAPP) website provides a quick overview of the international scene before diving deeper into US AI governance policy through the Barack Obama, Donald Trump, and Joe Biden administrations. There’s also the January 29, 2024 US White House “Fact Sheet: Biden-Harris Administration Announces Key AI Actions Following President Biden’s Landmark Executive Order.”

What about AI and hardware?

A February 15, 2024 news item on ScienceDaily suggests that regulating hardware may be the most effective way of regulating AI,

Chips and datacentres — the ‘compute’ power driving the AI revolution — may be the most effective targets for risk-reducing AI policies as they have to be physically possessed, according to a new report.

A global registry tracking the flow of chips destined for AI supercomputers is one of the policy options highlighted by a major new report calling for regulation of “compute” — the hardware that underpins all AI — to help prevent artificial intelligence misuse and disasters.

Other technical proposals floated by the report include “compute caps” — built-in limits to the number of chips each AI chip can connect with — and distributing a “start switch” for AI training across multiple parties to allow for a digital veto of risky AI before it feeds on data.

The experts point out that powerful computing chips required to drive generative AI models are constructed via highly concentrated supply chains, dominated by just a handful of companies — making the hardware itself a strong intervention point for risk-reducing AI policies.

The report, published 14 February [2024], is authored by nineteen experts and co-led by three University of Cambridge institutes — the Leverhulme Centre for the Future of Intelligence (LCFI), the Centre for the Study of Existential Risk (CSER) and the Bennett Institute for Public Policy — along with OpenAI and the Centre for the Governance of AI.

A February 14, 2024 University of Cambridge press release by Fred Lewsey (also on EurekAlert), which originated the news item, provides more information about the ‘hardware approach to AI regulation’,

“Artificial intelligence has made startling progress in the last decade, much of which has been enabled by the sharp increase in computing power applied to training algorithms,” said Haydn Belfield, a co-lead author of the report from Cambridge’s LCFI. 

“Governments are rightly concerned about the potential consequences of AI, and looking at how to regulate the technology, but data and algorithms are intangible and difficult to control.

“AI supercomputers consist of tens of thousands of networked AI chips hosted in giant data centres often the size of several football fields, consuming dozens of megawatts of power,” said Belfield.

“Computing hardware is visible, quantifiable, and its physical nature means restrictions can be imposed in a way that might soon be nearly impossible with more virtual elements of AI.”

The computing power behind AI has grown exponentially since the “deep learning era” kicked off in earnest, with the amount of “compute” used to train the largest AI models doubling around every six months since 2010. The biggest AI models now use 350 million times more compute than thirteen years ago.
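As a back-of-envelope check of my own (not from the report): doubling every six months over thirteen years would give

$$2^{13 \times 2} = 2^{26} \approx 6.7 \times 10^{7},$$

i.e., roughly a 67-million-fold increase. The reported 350-million-fold figure ($3.5 \times 10^{8} \approx 2^{28.4}$) implies a slightly faster average doubling time of about $156/28.4 \approx 5.5$ months.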

Government efforts across the world over the past year – including the US Executive Order on AI, EU AI Act, China’s Generative AI Regulation, and the UK’s AI Safety Institute – have begun to focus on compute when considering AI governance.

Outside of China, the cloud compute market is dominated by three companies, termed “hyperscalers”: Amazon, Microsoft, and Google. “Monitoring the hardware would greatly help competition authorities in keeping in check the market power of the biggest tech companies, and so opening the space for more innovation and new entrants,” said co-author Prof Diane Coyle from Cambridge’s Bennett Institute. 

The report provides “sketches” of possible directions for compute governance, highlighting the analogy between AI training and uranium enrichment. “International regulation of nuclear supplies focuses on a vital input that has to go through a lengthy, difficult and expensive process,” said Belfield. “A focus on compute would allow AI regulation to do the same.”

Policy ideas are divided into three camps: increasing the global visibility of AI computing; allocating compute resources for the greatest benefit to society; enforcing restrictions on computing power.

For example, a regularly-audited international AI chip registry requiring chip producers, sellers, and resellers to report all transfers would provide precise information on the amount of compute possessed by nations and corporations at any one time.

The report even suggests a unique identifier could be added to each chip to prevent industrial espionage and “chip smuggling”.
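To make the registry idea a little more concrete, here is a minimal sketch of my own (an illustration, not anything proposed in the report) of how chip transfers with unique identifiers might be recorded and then replayed to estimate holdings; all class and field names here are hypothetical:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class ChipTransfer:
    """One reportable transfer of an AI chip between parties."""
    chip_id: str        # unique per-chip identifier, as the report suggests
    seller: str
    buyer: str
    transfer_date: date

class ChipRegistry:
    """Toy registry: producers, sellers, and resellers report every transfer."""

    def __init__(self) -> None:
        self._transfers: list[ChipTransfer] = []

    def report(self, transfer: ChipTransfer) -> None:
        self._transfers.append(transfer)

    def holdings(self) -> dict[str, set[str]]:
        """Replay transfers in date order to find each chip's current holder."""
        owner: dict[str, str] = {}
        for t in sorted(self._transfers, key=lambda t: t.transfer_date):
            owner[t.chip_id] = t.buyer
        # Invert: holder -> set of chip ids held at this point in time.
        held: dict[str, set[str]] = {}
        for chip, holder in owner.items():
            held.setdefault(holder, set()).add(chip)
        return held
```

Counting each holder’s chips would then give auditors a rough, point-in-time estimate of “the amount of compute possessed by nations and corporations,” though, as the report itself cautions below, untraceable “ghost chips” could evade any such scheme.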

“Governments already track many economic transactions, so it makes sense to increase monitoring of a commodity as rare and powerful as an advanced AI chip,” said Belfield. However, the team point out that such approaches could lead to a black market in untraceable “ghost chips”.

Other suggestions to increase visibility – and accountability – include reporting of large-scale AI training by cloud computing providers, and privacy-preserving “workload monitoring” to help prevent an arms race if massive compute investments are made without enough transparency.  

“Users of compute will engage in a mixture of beneficial, benign and harmful activities, and determined groups will find ways to circumvent restrictions,” said Belfield. “Regulators will need to create checks and balances that thwart malicious or misguided uses of AI computing.”

These might include physical limits on chip-to-chip networking, or cryptographic technology that allows for remote disabling of AI chips in extreme circumstances. One suggested approach would require the consent of multiple parties to unlock AI compute for particularly risky training runs, a mechanism familiar from nuclear weapons.
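That multiparty consent mechanism is, at its core, a k-of-n authorization rule. Here is a deliberately simplified sketch of my own (the policy name and quorum logic are hypothetical; real proposals would rely on hardware enforcement and cryptography rather than an application-level check):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UnlockPolicy:
    """k-of-n consent rule for enabling a risky AI training run."""
    authorized_parties: frozenset[str]  # parties allowed to approve
    quorum: int                         # distinct approvals required (k)

    def may_unlock(self, approvals: set[str]) -> bool:
        """True only if at least `quorum` distinct authorized parties approve."""
        valid = approvals & self.authorized_parties
        return len(valid) >= self.quorum

policy = UnlockPolicy(
    authorized_parties=frozenset({"regulator", "chip_vendor", "datacentre"}),
    quorum=2,
)
assert policy.may_unlock({"regulator", "datacentre"})        # quorum met
assert not policy.may_unlock({"datacentre", "unknown_lab"})  # one valid approval only
```

The point of the design is that no single party, including the compute’s operator, can unilaterally authorize a run, which is what makes the nuclear weapons analogy apt.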

AI risk mitigation policies might see compute prioritised for research most likely to benefit society – from green energy to health and education. This could even take the form of major international AI “megaprojects” that tackle global issues by pooling compute resources.

The report’s authors are clear that their policy suggestions are “exploratory” rather than fully fledged proposals and that they all carry potential downsides, from risks of proprietary data leaks to negative economic impacts and the hampering of positive AI development.

They offer five considerations for regulating AI through compute, including the exclusion of small-scale and non-AI computing, regular revisiting of compute thresholds, and a focus on privacy preservation.

Added Belfield: “Trying to govern AI models as they are deployed could prove futile, like chasing shadows. Those seeking to establish AI regulation should look upstream to compute, the source of the power driving the AI revolution. If compute remains ungoverned it poses severe risks to society.”

You can find the report, “Computing Power and the Governance of Artificial Intelligence,” on the University of Cambridge’s Centre for the Study of Existential Risk website.

Authors include: Girish Sastry, Lennart Heim, Haydn Belfield, Markus Anderljung, Miles Brundage, Julian Hazell, Cullen O’Keefe, Gillian K. Hadfield, Richard Ngo, Konstantin Pilz, George Gor, Emma Bluemke, Sarah Shoker, Janet Egan, Robert F. Trager, Shahar Avin, Adrian Weller, Yoshua Bengio, and Diane Coyle.

The authors are associated with these companies/agencies: OpenAI, Centre for the Governance of AI (GovAI), Leverhulme Centre for the Future of Intelligence at the Uni. of Cambridge, Oxford Internet Institute, Institute for Law & AI, University of Toronto Vector Institute for AI, Georgetown University, ILINA Program, Harvard Kennedy School (of Government), *AI Governance Institute,* Uni. of Oxford, Centre for the Study of Existential Risk at Uni. of Cambridge, Uni. of Cambridge, Uni. of Montreal / Mila, Bennett Institute for Public Policy at the Uni. of Cambridge.

“The ILINA Program is dedicated to providing an outstanding platform for Africans to learn and work on questions around maximizing wellbeing and responding to global catastrophic risks” according to the organization’s homepage.

*As for the AI Governance Institute, I believe that should be the Centre for the Governance of AI at Oxford University since the associated academic is Robert F. Trager from the University of Oxford.

As the months (years?) fly by, I guess we’ll find out if this hardware approach gains any traction where AI regulation is concerned.

The UK’s Futurefest and an interview with Sue Thomas

Futurefest, which brings together “some of the planet’s most radical thinkers, makers and performers,” is taking place in London next weekend (Sept. 28 – 29, 2013), and I am very pleased to be featuring an interview with one of Futurefest’s speakers, Sue Thomas, who, among many other accomplishments, founded the Creative Writing and New Media programme at De Montfort University, UK, where I got my master’s degree.

Here’s Sue,


Sue Thomas was formerly Professor of New Media at De Montfort University. Now she writes and consults on digital well-being. Her new book ‘Technobiophilia: nature and cyberspace’ explains how contact with the natural world can help soothe our connected lives. http://www.suethomas.net @suethomas

  • I understand you are participating in Futurefest’s SciFi Writers’ Parliament; could you explain what that is and what the nature of your participation will be?

The premise of the session is to invite Science Fiction writers to play with the idea that they have been given the power to realise the kinds of new societies and cultures they imagine in their books. Each of us will present a brief proposal for the audience to vote on. The panel will be chaired by Robin Ince, a well-known comedian, broadcaster, and science enthusiast. The presenters are Cory Doctorow, Pat Cadigan, Ken MacLeod, Charles Stross, Roz Kaveney and myself.

  • Do you have expectations for who will be attending ‘Parliament’ and will they be participating as well as watching?

I’m expecting the audience for FutureFest http://www.futurefest.org/ to be people interested in future forecasting across the four themes of the event: Well-becoming, In the imaginarium,  We are all gardeners now, and The value of everything. There are plenty of opportunities for them to participate, not just in discussing and voting in panels like ours, but also in The Daily Future, a Twitter game, and Playify, which will run around and across the weekend. 

  • How are you preparing for ‘Parliament’?

I will propose A Global Environmental Protection Act for Cyberspace. The full text of the proposal is on my blog here: http://suethomasnet.wordpress.com/2013/09/05/futurefest/ It’s based on the thinking and research around my new book Technobiophilia: nature and cyberspace http://suethomasnet.wordpress.com/technobiophilia/ which, coincidentally, comes out in the UK two days before FutureFest. In the run-up to the event I’ll also be gathering people’s views and refining my thoughts.


  • Is there any other event you’re looking forward to in particular and why would that be?

The whole of FutureFest looks great and I’m excited about being there all weekend to enjoy it. The following week I’m doing a much smaller but equally interesting event at my local Cafe Scientifique, which is celebrating its first birthday with a talk from me about Technobiophilia. I’ve only recently moved to Bournemouth so this will be a great chance to meet the kinds of interesting local people who come to Cafe Scientifique in all parts of the world. http://suethomasnet.wordpress.com/2013/09/12/cafe-scientifique/

 

I’ll also be launching the book in North America with an online lecture in the Metaliteracy MOOC at SUNY Empire State University. The details are yet to be released but it’s booked for 18 November. http://metaliteracy.cdlprojects.com/index.html

  • Is there anything you’d like to add?

I’m also doing another event at FutureFest which might be of interest, especially to people interested in the future of death. It’s called xHumed and this is what it’s about: If we can archive and store our personal data, media, DNA and brain patterns, the question of whether we can bring back the dead is almost redundant. The right question is should we? It is the year 2050AD and great thought leaders from history have been “xHumed”. What could possibly go wrong? Through an interactive performance Five10Twelve will provoke and encourage the audience to consider the implications via soundbites and insights from eminent experts – both living and dead. I’m expecting some lively debate!

Thank you, Sue, for bringing Futurefest to life, and congratulations on your new book!

You can find out more about Futurefest and its speakers here at the Futurefest website. I found Futurefest’s ticket webpage (which is associated with the National Theatre) a little more  informative about the event as a whole,

Some of the planet’s most radical thinkers, makers and performers are gathering in East London this September to create an immersive experience of what the world will feel like over the next few decades.

From the bright and uplifting to the dark and dystopian, FutureFest will present a weekend of compelling talks, cutting-edge shows, and interactive performances that will inspire and challenge you to change the future.

Enter the wormhole in Shoreditch Town Hall on the weekend of 28 and 29 September 2013 and experience the next phase of being human.

FutureFest is split into four sessions, Saturday Morning, Saturday Afternoon, Sunday Morning and Sunday Afternoon. You can choose to come to one, two, three or all sessions. They all have a different flavour, but each one will immerse you deep in the future.

Please note that FutureFest is a living, breathing festival so sessions are subject to change. We’ll keep you up to date on our FutureFest website.

Saturday Morning will feature The Blind Giant author Nick Harkaway, bionic man Bertolt Meyer and techno-cellist Peter Gregson. There will also be secret agents, villages of the future and a crowd-sourced experiment in futurology with some dead futurists.

Saturday Afternoon has forecaster Tamar Kasriel helping to futurescape your life, and gamemaker Alex Fleetwood showing us what life will be like in the Gameful century. We’ve got top political scientists David Runciman and Diane Coyle exploring the future of democracy. There will also be a mass-deception experiment, more secret agents and a look forward to what the weather will be like in 2100.

Sunday Morning sees Sermons of the Future. Taking the pulpit will be Wikipedia’s Jimmy Wales, social entrepreneur and model Lily Cole, and Astronomer Royal Martin Rees. Meanwhile the comedian Robin Ince will be chairing a Science Fiction Parliament with top SF authors, Roberto Unger will be analysing the future of religion and one of the world’s top chefs, Andoni Aduriz, will be exploring how food will make us feel in the future.

Sunday Afternoon will feature a futuristic take on the Sunday lunch, with food futurologist Morgaine Gaye inviting you for lunch in the Gastrodome with insects and 3D meat print-outs on the menu. Smari McCarthy, founder of Iceland’s Pirate Party and Wikileaks worker, will be exploring life in a digitised world, and Charlie Leadbeater, Diane Coyle and Mark Stevenson will be imagining cities and states of the future.

I noticed that a few Futurefest speakers have been featured here:

Eric Drexler, ‘Mr. Nano’, was last mentioned in a May 6, 2013 posting about a talk he was giving in Seattle, Washington to promote his new book, Radical Abundance.

Martin Rees, Emeritus Professor of Cosmology and Astrophysics, was mentioned in a Nov. 26, 2012 posting about the Cambridge Project for Existential Risk (humans relative to robots).

Bertolt Meyer, a young researcher from Zurich University and a lifelong user of prosthetic technology, was mentioned in a Jan. 30, 2013 posting about building a bionic man.

Cory Doctorow, a science fiction writer who ran afoul of James Moore, then Minister of Canadian Heritage and now Minister of Industry, was mentioned in a June 25, 2010 posting; prior to new copyright legislation for Canadians, Moore accused him of being one of the “radical extremists.”

I wish I could be at London’s Futurefest; since I can’t, I’ll wish the organizers and participants all the best.

* On a purely cosmetic note, on Dec. 5, 2013, I changed the paragraph format in the responses.