
Smart toys spying on children?

Caption: Twelve toys were examined in a study on smart toys and privacy. Credit: University of Basel / Céline Emch

An August 26, 2024 University of Basel press release (also on EurekAlert) describes research into smart toys and privacy issues for the children who play with them,

Toniebox, Tiptoi, and Tamagotchi are smart toys, offering interactive play through software and internet access. However, many of these toys raise privacy concerns, and some even collect extensive behavioral data about children, report researchers at the University of Basel, Switzerland.

The Toniebox and the figurines it comes with are especially popular with small children. They’re much easier to use than standard music players, allowing kids to turn on music and audio content themselves whenever they want. All a child has to do is place a plastic version of Peppa Pig onto the box and the story starts to play. When the child wants to stop the story, they simply remove the figurine. To rewind and fast-forward, the child can tilt the box to the left or right, respectively.

A lot of parents are probably thinking, “Fantastic concept!” Not so fast – the Toniebox records exactly when it is activated and by which figurine, when the child stops playback, and to which spot they rewind or fast-forward. Then it sends the data to the manufacturer.

The Toniebox is one of twelve smart toys studied by researchers headed by Professor Isabel Wagner of the Department of Mathematics and Computer Science at the University of Basel. These included well-known toys like the Tiptoi smart pen, the Edurino learning app, and the Tamagotchi virtual pet as well as the Toniebox. The researchers also studied less well-known products like the Moorebot, a mobile robot with a camera and microphone, and Kidibuzz, a smartphone for kids with parental controls.

One focus of the analysis was security: is data traffic encrypted, and how well? The researchers also investigated data protection, transparency (how easy it is for users to find out what data is collected), and compliance with the EU General Data Protection Regulation. Wagner and her colleagues are presenting their results at the Annual Privacy Forum (https://privacyforum.eu/) in early September [2024]. Springer publishes all the conference contributions in the series Privacy Technologies and Policy.

Collect data while offline, send it while online

Neither the Toniebox nor the Tiptoi pen comes out well with respect to security, as they do not securely encrypt data traffic. The two toys differ with regard to privacy concerns, though: While the Toniebox does collect data and send it to the manufacturer, the Tiptoi pen does not record how and when a child uses it.

Even if the Toniebox were operated offline and only temporarily connected to the internet while downloading new audio content, the device could store collected data locally and transmit it to the manufacturer at the next opportunity, Wagner surmises. “In another toy we’re currently studying that integrates ChatGPT, we’re seeing that log data regularly vanishes.” The system is probably set up to delete the local copy of transmitted data to optimize internal storage use, Wagner says.

Companies often claim the collected data helps them optimize their devices. Yet it is far from obvious to users what purpose this data could serve. “The apps bundled with some of these toys demand entirely unnecessary access rights, such as to a smartphone’s location or microphone,” says the researcher. The ChatGPT toy still being analyzed also transmits a data stream that looks like audio. Perhaps the company wants to optimize speech recognition for children’s voices, the Professor of Cyber Security speculates.

A data protection label

“Children’s privacy requires special protection,” emphasizes Julika Feldbusch, first author of the study. In light of their young target audience, she argues, toy manufacturers should place greater weight on the privacy and security of their products than they currently do.

The researchers recommend that compliance with security and data protection standards be identified by a label on the packaging, similar to nutritional information on food items. Currently, it’s too difficult for parents to assess the security risks that smart toys pose to their children.

“We’re already seeing signs of a two-tier society when it comes to privacy protection for children,” says Feldbusch. “Well-informed parents engage with the issue and can choose toys that do not create behavioral profiles of their children. But many lack the technical knowledge or don’t have time to think about this stuff in detail.”

You could argue that individual children probably won’t experience negative consequences due to toy manufacturers creating profiles of them, says Wagner. “But nobody really knows that for sure. For example, constant surveillance can have negative effects on personal development.”
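
The release doesn’t describe how Wagner’s team checked the toys’ encryption, but conceptually the test is straightforward: capture the toy’s network traffic and look for payloads sent in the clear. Here is a minimal, hypothetical sketch of that kind of check (the filename, ports, and heuristics are my assumptions, not the authors’ tooling):

```python
# Minimal sketch (not the study's methodology): flag toy traffic that is
# sent as plaintext HTTP rather than inside a TLS session, given a packet
# capture taken while the toy is in use. "toy_traffic.pcap" is hypothetical.
from scapy.all import rdpcap, TCP, Raw

packets = rdpcap("toy_traffic.pcap")

plaintext_hits = []
for pkt in packets:
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        payload = bytes(pkt[Raw].load)
        # Rough heuristic: HTTP verbs in the clear, or traffic on port 80,
        # suggest the toy is talking to its servers without encryption.
        if payload[:4] in (b"GET ", b"POST", b"PUT ") or pkt[TCP].dport == 80:
            plaintext_hits.append((pkt[TCP].dport, payload[:40]))

print(f"{len(plaintext_hits)} packets look unencrypted")
for dport, snippet in plaintext_hits[:5]:
    print(dport, snippet)
```

A real assessment would go further (TLS versions, certificate validation, what the encrypted payloads contain), but the sketch shows why “is data traffic encrypted, and how well?” is an observable, answerable question.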

Here’s a link to and a citation for the paper,

No Transparency for Smart Toys by Julika Feldbusch, Valentyna Pavliv, Nima Akbari & Isabel Wagner. In: Privacy Technologies and Policy (Annual Privacy Forum, APF 2024), Lecture Notes in Computer Science, volume 14831, pp. 203–227. First Online: 01 August 2024

This paper is behind a paywall.

Metacrime: the line between the virtual and reality

An August 15, 2024 Griffith University (Australia) press release (also on EurekAlert) presents research on a relatively new type of crime, Note: A link has been removed,

If you thought your kids were away from harm playing multi-player games through VR headsets while in their own bedrooms, you may want to sit down to read this.

Griffith University’s Dr Ausma Bernot teamed up with researchers from Monash University, Charles Sturt University and University of Technology Sydney to investigate what has been termed ‘metacrime’ – attacks, crimes or inappropriate activities that occur within virtual reality environments.

The ‘metaverse’ refers to the virtual world, where users of VR headsets can choose an avatar to represent themselves as they interact with other users’ avatars or move through other 3D digital spaces.

While the metaverse can be used for anything from meetings (where it will feel as though you are in the same room as avatars of other people instead of just seeing them on a screen) to wandering through national parks around the world without leaving your living room, gaming is by far its most popular use.   

Dr Bernot said the technology had evolved incredibly quickly.

“Using this technology is super fun and it’s really immersive,” she said.

“You can really lose yourself in those environments.

“Unfortunately, while those new environments are very exciting, they also have the potential to enable new crimes.

“While the headsets that enable us to have these experiences aren’t a commonly owned item yet, they’re growing in popularity and we’ve seen reports of sexual harassment or assault against both adults and kids.”

In a December 2023 report, the Australian eSafety Commissioner estimated around 680,000 adults in Australia are engaged in the metaverse.

This followed research conducted in November and December 2022 by the UK’s Center for Countering Digital Hate, whose researchers recorded 11 hours and 30 minutes of user interactions on Meta’s Oculus headset in the popular VRChat.

The researchers found most users had been faced with at least one negative experience in the virtual environment, including being called offensive names, receiving repeated unwanted messages or contact, being provoked to respond to something or to start an argument, being challenged about cultural identity or being sent unwanted inappropriate content.

Eleven per cent had been exposed to a sexually graphic virtual space and nine per cent had been touched (virtually) in a way they didn’t like.

Of these respondents, 49 per cent said the experience had a moderate to extreme impact on their mental or emotional wellbeing.

With the two largest user groups being minors and men, Dr Bernot said it was important for parents to monitor their children’s activity or consider limiting their access to multi-player games.

“Minors are more vulnerable to grooming and other abuse,” she said.

“They may not know how to deal with these situations, and while there are some features like a ‘safety bubble’ within some games, or of course the simple ability to just take the headset off, once immersed in these environments it does feel very real.

“It’s somewhere in between a physical attack and for example, a social media harassment message – you’ll still feel that distress and it can take a significant toll on a user’s wellbeing.

“It is a real and palpable risk.”

Monash University’s You Zhou said there had already been many reports of virtual rape, including one in the United Kingdom where police launched an investigation into the case of a 16-year-old girl whose avatar was attacked, causing psychological and emotional trauma similar to an attack in the physical world.

“Before the emergence of the metaverse we could not have imagined how rape could be virtual,” Mr Zhou said.

“When immersed in this world of virtual reality, and particularly when using higher quality VR headsets, users will not necessarily stop to consider whether the experience is reality or virtuality.

“While there may not be physical contact, victims – mostly young girls – strongly claim the feeling of victimisation was real.

“Without physical signs on a body, and unless the interaction was recorded, it can be almost impossible to show evidence of these experiences.”

With use of the metaverse expected to grow exponentially in coming years, the research team’s findings highlight a need for metaverse companies to instil clear regulatory frameworks for their virtual environments to make them safe for everyone to inhabit.

Here’s a link to and a citation for the paper,

Metacrime and Cybercrime: Exploring the Convergence and Divergence in Digital Criminality by You Zhou, Milind Tiwari, Ausma Bernot & Kai Lin. Asian Journal of Criminology 19, 419–439 (2024) DOI: https://doi.org/10.1007/s11417-024-09436-y Published online: 09 August 2024 Issue Date: September 2024

This paper is open access.

Submit abstracts by Jan. 31 for 2025 Governance of Emerging Technologies & Science (GETS) Conference at Arizona State U

This call for abstracts from Arizona State University (ASU) for the Twelfth Annual Governance of Emerging Technologies and Science (GETS) Conference was received via email,

GETS 2025: Call for abstracts

Save the date for the Twelfth Annual Governance of Emerging Technologies and Science Conference, taking place May 19 and 20, 2025 at the Sandra Day O’Connor College of Law at Arizona State University in Phoenix, AZ. The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including:

National security
Nanotechnology
Quantum computing
Autonomous vehicles
3D printing
Robotics
Synthetic biology
Gene editing
Artificial intelligence
Biotechnology

Genomics
Internet of things (IoT)
Autonomous weapon systems
Personalized medicine
Neuroscience
Digital health
Human enhancement
Telemedicine
Virtual reality
Blockchain

Call for abstracts: The co-sponsors invite submission of abstracts for proposed presentations. Submitters of abstracts need not provide a written paper, although provision will be made for posting and possible post-conference publication of papers for those who are interested.

  • Abstracts are invited for any aspect or topic relating to the governance of emerging technologies, including any of the technologies listed above
  • Abstracts should not exceed 500 words and must contain your name and email address
  • Abstracts must be submitted by Friday, January 31, 2025, to be considered

Submit your abstract

For more information contact Eric Hitchcock.

Good luck!

Bio-hybrid robotics (living robots) needs public debate and regulation

A July 23, 2024 University of Southampton (UK) press release (also on EurekAlert but published July 22, 2024) describes the emerging science/technology of bio-hybrid robotics and a recent study about the ethical issues raised, Note 1: bio-hybrid may also be written as biohybrid; Note 2: Links have been removed,

Development of ‘living robots’ needs regulation and public debate

Researchers are calling for regulation to guide the responsible and ethical development of bio-hybrid robotics – a ground-breaking science which fuses artificial components with living tissue and cells.

In a paper published in Proceedings of the National Academy of Sciences [PNAS] a multidisciplinary team from the University of Southampton and universities in the US and Spain set out the unique ethical issues this technology presents and the need for proper governance.

Combining living materials and organisms with synthetic robotic components might sound like something out of science fiction, but this emerging field is advancing rapidly. Bio-hybrid robots using living muscles can crawl, swim, grip, pump, and sense their surroundings. Sensors made from sensory cells or insect antennae have improved chemical sensing. Living neurons have even been used to control mobile robots.

Dr Rafael Mestre from the University of Southampton, who specialises in emergent technologies and is co-lead author of the paper, said: “The challenges in overseeing bio-hybrid robotics are not dissimilar to those encountered in the regulation of biomedical devices, stem cells and other disruptive technologies. But unlike purely mechanical or digital technologies, bio-hybrid robots blend biological and synthetic components in unprecedented ways. This presents unique possible benefits but also potential dangers.”

Research publications relating to bio-hybrid robotics have increased continuously over the last decade. But the authors found that of the more than 1,500 publications on the subject at the time, only five considered its ethical implications in depth.

The paper’s authors identified three areas where bio-hybrid robotics present unique ethical issues: Interactivity – how bio-robots interact with humans and the environment, Integrability – how and whether humans might assimilate bio-robots (such as bio-robotic organs or limbs), and Moral status.

In a series of thought experiments, they describe how a bio-robot for cleaning our oceans could disrupt the food chain, how a bio-hybrid robotic arm might exacerbate inequalities [emphasis mine], and how increasingly sophisticated bio-hybrid assistants could raise questions about sentience and moral value.

“Bio-hybrid robots create unique ethical dilemmas,” says Aníbal M. Astobiza, an ethicist from the University of the Basque Country in Spain and co-lead author of the paper. “The living tissue used in their fabrication, potential for sentience, distinct environmental impact, unusual moral status, and capacity for biological evolution or adaptation create unique ethical dilemmas that extend beyond those of wholly artificial or biological technologies.”

The paper is the first from the Biohybrid Futures project led by Dr Rafael Mestre, in collaboration with the Rebooting Democracy project. Biohybrid Futures is setting out to develop a framework for the responsible research, application, and governance of bio-hybrid robotics.

The paper proposes several requirements for such a framework, including risk assessments, consideration of social implications, and increasing public awareness and understanding.

Dr Matt Ryan, a political scientist from the University of Southampton and a co-author on the paper, said: “If debates around embryonic stem cells, human cloning or artificial intelligence have taught us something, it is that humans rarely agree on the correct resolution of the moral dilemmas of emergent technologies.

“Compared to related technologies such as embryonic stem cells or artificial intelligence, bio-hybrid robotics has developed relatively unattended by the media, the public and policymakers, but it is no less significant. We want the public to be included in this conversation to ensure a democratic approach to the development and ethical evaluation of this technology.”

In addition to the need for a governance framework, the authors set out actions that the research community can take now to guide their research.

“Taking these steps should not be seen as prescriptive in any way, but as an opportunity to share responsibility, taking a heavy weight away from the researcher’s shoulders,” says Dr Victoria Webster-Wood, a biomechanical engineer from Carnegie Mellon University in the US and co-author on the paper.

“Research in bio-hybrid robotics has evolved in various directions. We need to align our efforts to fully unlock its potential.”

Here’s a link to and a citation for the paper,

Ethics and responsibility in biohybrid robotics research by Rafael Mestre, Aníbal M. Astobiza, Victoria A. Webster-Wood, Matt Ryan, and M. Taher A. Saif. PNAS 121 (31) e2310458121 July 23, 2024 DOI: https://doi.org/10.1073/pnas.2310458121

This paper is open access.

Cyborg or biohybrid robot?

Earlier, I highlighted “… how a bio-hybrid robotic arm might exacerbate inequalities …” because it suggests cyborgs, which are not mentioned in the press release or in the paper. This seems like an odd omission but, over the years, terminology does change, although it’s not clear that’s the situation here.

I have two ‘definitions’; the first is from an October 21, 2019 article by Javier Yanes for OpenMind BBVA, Note: More about BBVA later,

The fusion between living organisms and artificial devices has become familiar to us through the concept of the cyborg (cybernetic organism). This approach consists of restoring or improving the capacities of the organic being, usually a human being, by means of technological devices. On the other hand, biohybrid robots are in some ways the opposite idea: using living tissues or cells to provide the machine with functions that would be difficult to achieve otherwise. The idea is that if soft robots seek to achieve this through synthetic materials, why not do so directly with living materials?

In contrast, there’s this from “Biohybrid robots: recent progress, challenges, and perspectives,” Note 1: Full citation for paper follows excerpt; Note 2: Links have been removed,

2.3. Cyborgs

Another approach to building biohybrid robots is the artificial enhancement of animals or using an entire animal body as a scaffold to manipulate robotically. The locomotion of these augmented animals can then be externally controlled, spanning three modes of locomotion: walking/running, flying, and swimming. Notably, these capabilities have been demonstrated in jellyfish (figure 4(A)) [139, 140], clams (figure 4(B)) [141], turtles (figure 4(C)) [142, 143], and insects, including locusts (figure 4(D)) [27, 144], beetles (figure 4(E)) [28, 145–158], cockroaches (figure 4(F)) [159–165], and moths [166–170].

….

The advantages of using entire animals as cyborgs are multifold. For robotics, augmented animals possess inherent features that address some of the long-standing challenges within the field, including power consumption and damage tolerance, by taking advantage of animal metabolism [172], tissue healing, and other adaptive behaviors. In particular, biohybrid robotic jellyfish, composed of a self-contained microelectronic swim controller embedded into live Aurelia aurita moon jellyfish, consumed one to three orders of magnitude less power per mass than existing swimming robots [172], and cyborg insects can make use of the insect’s hemolymph directly as a fuel source [173].

So, sometimes there’s a distinction and sometimes there’s not. I take this to mean that the field is still emerging and that’s reflected in evolving terminology.

Here’s a link to and a citation for the paper,

Biohybrid robots: recent progress, challenges, and perspectives by Victoria A Webster-Wood, Maria Guix, Nicole W Xu, Bahareh Behkam, Hirotaka Sato, Deblina Sarkar, Samuel Sanchez, Masahiro Shimizu and Kevin Kit Parker. Bioinspiration & Biomimetics, Volume 18, Number 1 015001 DOI 10.1088/1748-3190/ac9c3b Published 8 November 2022 • © 2022 The Author(s). Published by IOP Publishing Ltd

This paper is open access.

A few notes about BBVA and other items

BBVA is Banco Bilbao Vizcaya Argentaria according to its Wikipedia entry, Note: Links have been removed,

Banco Bilbao Vizcaya Argentaria, S.A. (Spanish pronunciation: [ˈbaŋko βilˈβao βiθˈkaʝa aɾxenˈtaɾja]), better known by its initialism BBVA, is a Spanish multinational financial services company based in Madrid and Bilbao, Spain. It is one of the largest financial institutions in the world, and is present mainly in Spain, Portugal, Mexico, South America, Turkey, Italy and Romania.[2]

BBVA’s OpenMind is, from their About us page,

OpenMind: BBVA’s knowledge community

OpenMind is a non-profit project run by BBVA that aims to contribute to the generation and dissemination of knowledge about fundamental issues of our time, in an open and free way. The project is materialized in an online dissemination community.

Sharing knowledge for a better future.

At OpenMind we want to help people understand the main phenomena affecting our lives; the opportunities and challenges that we face in areas such as science, technology, humanities or economics. Analyzing the impact of scientific and technological advances on the future of the economy, society and our daily lives is the project’s main objective, which always starts on the premise that a broader and greater quality knowledge will help us to make better individual and collective decisions.

As for other items, you can find my latest (biorobotic, cyborg, or bionic, depending on what terminology you want to use) jellyfish story in this June 6, 2024 posting. The Biohybrid Futures project mentioned in the press release can be found here, and the Rebooting Democracy project (unexpected in the context of an emerging science/technology), also mentioned in the press release, can be found here on this University of Southampton website.

Finally, you can find more on these stories (science/technology announcements and/or ethics research/issues) here by searching for ‘robots’ (tag and category), ‘cyborgs’ (tag), ‘machine/flesh’ (tag), ‘neuroprosthetic’ (tag), and human enhancement (category).

Protecting your data from Apple is very hard

There has been a lot of talk about Tim Cook (Chief Executive Officer of Apple Inc.), his data privacy policy at Apple, and his push for better consumer data privacy. For example, there’s this, from a June 10, 2022 article by Kif Leswing for CNBC,

Key Points

  • Apple CEO Tim Cook said in a letter to Congress that lawmakers should advance privacy legislation that’s currently being debated “as soon as possible.”
  • The bill would give consumers protections and rights dealing with how their data is used online, and would require that companies minimize the amount of data they collect on their users.
  • Apple has long positioned itself as the most privacy-focused company among its tech peers.

Apple has long positioned itself as the most privacy-focused company among its tech peers, and Cook regularly addresses the issue in speeches and meetings. Apple says that its commitment to privacy is a deeply held value by its employees, and often invokes the phrase “privacy is a fundamental human right.”

It’s also strategic for Apple’s hardware business. Legislation that regulates how much data companies collect or how it’s processed plays into Apple’s current privacy features, and could even give Apple a head start against competitors that would need to rebuild their systems to comply with the law.

More recently with rising concerns regarding artificial intelligence (AI), Apple has rushed to assure customers that their data is still private, from a May 10, 2024 article by Kyle Orland for Ars Technica, Note: Links have been removed,

Apple’s AI promise: “Your data is never stored or made accessible to Apple”

And publicly reviewable server code means experts can “verify this privacy promise.”

With most large language models being run on remote, cloud-based server farms, some users have been reluctant to share personally identifiable and/or private data with AI companies. In its WWDC [Apple’s World Wide Developers Conference] keynote today, Apple stressed that the new “Apple Intelligence” system it’s integrating into its products will use a new “Private Cloud Compute” to ensure any data processed on its cloud servers is protected in a transparent and verifiable way.

“You should not have to hand over all the details of your life to be warehoused and analyzed in someone’s AI cloud,” Apple Senior VP of Software Engineering Craig Federighi said.

Part of what Apple calls “a brand new standard for privacy and AI” is achieved through on-device processing. Federighi said “many” of Apple’s generative AI models can run entirely on a device powered by an A17+ or M-series chip, eliminating the risk of sending your personal data to a remote server.

When a bigger, cloud-based model is needed to fulfill a generative AI request, though, Federighi stressed that it will “run on servers we’ve created especially using Apple silicon,” which allows for the use of security tools built into the Swift programming language. The Apple Intelligence system “sends only the data that’s relevant to completing your task” to those servers, Federighi said, rather than giving blanket access to the entirety of the contextual information the device has access to.

But you don’t just have to trust Apple on this score, Federighi claimed. That’s because the server code used by Private Cloud Compute will be publicly accessible, meaning that “independent experts can inspect the code that runs on these servers to verify this privacy promise.” The entire system has been set up cryptographically so that Apple devices “will refuse to talk to a server unless its software has been publicly logged for inspection.”

While the keynote speech was light on details [emphasis mine] for the moment, the focus on privacy during the presentation shows that Apple is at least prioritizing security concerns in its messaging [emphasis mine] as it wades into the generative AI space for the first time. We’ll see what security experts have to say [emphasis mine] when these servers and their code are made publicly available in the near future.
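
Apple didn’t give implementation details in the keynote, but the pattern being described resembles binary transparency: before connecting, the client checks the server’s attested software measurement against a public, append-only log. Here is a purely illustrative sketch of that pattern (every name and data structure is invented for the example; none of this is Apple’s actual API):

```python
# Illustrative sketch only -- not Apple's protocol. It shows the general
# "refuse to talk to a server unless its software has been publicly logged"
# idea: compare the server's attested build hash against a transparency log.
import hashlib

def measurement_of(server_attestation: bytes) -> str:
    """Hash standing in for the attested software image measurement."""
    return hashlib.sha256(server_attestation).hexdigest()

# Hypothetical append-only log of builds that have been published for inspection.
logged_builds = {measurement_of(b"publicly-logged-server-image-v1")}

def may_connect(server_attestation: bytes) -> bool:
    # Client-side rule: refuse unless this exact build appears in the public log.
    return measurement_of(server_attestation) in logged_builds

print(may_connect(b"publicly-logged-server-image-v1"))  # True: build is logged
print(may_connect(b"unlogged-modified-server-image"))   # False: refuse to connect
```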

Orland’s caution/suspicion would seem warranted in light of some recent research from scientists in Finland. From an April 3, 2024 Aalto University press release (also on EurekAlert), Note: A link has been removed,

‘Privacy. That’s Apple,’ the slogan proclaims. New research from Aalto University begs to differ.

Study after study has shown how voluntary third-party apps erode people’s privacy. Now, for the first time, researchers at Aalto University have investigated the privacy settings of Apple’s default apps; the ones that are pretty much unavoidable on a new device, be it a computer, tablet or mobile phone. The researchers will present their findings in mid-May at the prestigious CHI conference [ACM CHI Conference on Human Factors in Computing Systems, May 11, 2024 – May 16, 2024 in Honolulu, Hawaii], and the peer-reviewed research paper is already available online.

‘We focused on apps that are an integral part of the platform and ecosystem. These apps are glued to the platform, and getting rid of them is virtually impossible,’ says Associate Professor Janne Lindqvist, head of the computer science department at Aalto.

The researchers studied eight apps: Safari, Siri, Family Sharing, iMessage, FaceTime, Location Services, Find My and Touch ID. They collected all publicly available privacy-related information on these apps, from technical documentation to privacy policies and user manuals.

The fragility of the privacy protections surprised even the researchers. [emphasis mine]

‘Due to the way the user interface is designed, users don’t know what is going on. For example, the user is given the option to enable or not enable Siri, Apple’s virtual assistant. But enabling only refers to whether you use Siri’s voice control. Siri collects data in the background from other apps you use, regardless of your choice, unless you understand how to go into the settings and specifically change that,’ says Lindqvist.

Participants weren’t able to stop data sharing in any of the apps

In practice, protecting privacy on an Apple device requires persistent and expert clicking on each app individually. Apple’s help falls short.

‘The online instructions for restricting data access are very complex and confusing, and the steps required are scattered in different places. There’s no clear direction on whether to go to the app settings, the central settings – or even both,’ says Amel Bourdoucen, a doctoral researcher at Aalto.

In addition, the instructions didn’t list all the necessary steps or explain how collected data is processed.

The researchers also demonstrated these problems experimentally. They interviewed users and asked them to try changing the settings.

‘It turned out that the participants weren’t able to prevent any of the apps from sharing their data with other applications or the service provider,’ Bourdoucen says.

Finding and adjusting privacy settings also took a lot of time. ‘When making adjustments, users don’t get feedback on whether they’ve succeeded. They then get lost along the way, go backwards in the process and scroll randomly, not knowing if they’ve done enough,’ Bourdoucen says.

In the end, Bourdoucen explains, the participants were able to take one or two steps in the right direction, but none succeeded in following the whole procedure to protect their privacy.

Running out of options

If preventing data sharing is difficult, what does Apple do with all that data? [emphasis mine]

It’s not possible to be sure based on public documents, but Lindqvist says it’s possible to conclude that the data will be used to train the artificial intelligence system behind Siri and to provide personalised user experiences, among other things. [emphasis mine]

Many users are used to seamless multi-device interaction, which makes it difficult to move back to a time of more limited data sharing. However, Apple could inform users much more clearly than it does today, says Lindqvist. The study lists a number of detailed suggestions to clarify privacy settings and improve guidelines.

For individual apps, Lindqvist says that the problem can be solved to some extent by opting for a third-party service. For example, some participants in the study had switched from Safari to Firefox.

Lindqvist can’t comment directly on how Google’s Android works in similar respects [emphasis mine], as no one has yet done a similar mapping of its apps. But past research on third-party apps does not suggest that Google is any more privacy-conscious than Apple [emphasis mine].

So what can be learned from all this – are users ultimately facing an almost impossible task?

‘Unfortunately, that’s one lesson,’ says Lindqvist.

I have found two copies of the researchers’ paper. There’s a PDF version on Aalto University’s website that bears this caution,

This is an electronic reprint of the original article.
This reprint may differ from the original in pagination and typographic detail.

Here’s a link to and a citation for the official version of the paper,

Privacy of Default Apps in Apple’s Mobile Ecosystem by Amel Bourdoucen and Janne Lindqvist. CHI ’24: Proceedings of the CHI Conference on Human Factors in Computing Systems, May 2024, Article No.: 786, Pages 1–32 DOI: https://doi.org/10.1145/3613904.3642831 Published: 11 May 2024

This paper is open access.

Your gas stove may be emitting more polluting nanoparticles than your car exhaust

A February 27, 2024 news item on ScienceDaily describes research results that may startle anyone who’s listened to countless people rhapsodize about the superiority of gas stoves over all others,

Cooking on your gas stove can emit more nano-sized particles into the air than vehicles that run on gas or diesel, possibly increasing your risk of developing asthma or other respiratory illnesses, a new Purdue University study has found.

“Combustion remains a source of air pollution across the world, both indoors and outdoors. We found that cooking on your gas stove produces large amounts of small nanoparticles that get into your respiratory system and deposit efficiently,” said Brandon Boor, an associate professor in Purdue’s Lyles School of Civil Engineering, who led this research.

Based on these findings, the researchers would encourage turning on a kitchen exhaust fan while cooking on a gas stove.

The study, published in the journal PNAS [Proceedings of the National Academy of Sciences] Nexus, focused on tiny airborne nanoparticles that are only 1-3 nanometers in diameter, which is just the right size for reaching certain parts of the respiratory system and spreading to other organs.

A February 27, 2024 Purdue University news release by Kayla Albert (also on EurekAlert), which originated the news item, provides more detail about the research, Note: Links have been removed,

Recent studies have found that children who live in homes with gas stoves are more likely to develop asthma. But not much is known about how particles smaller than 3 nanometers, called nanocluster aerosol, grow and spread indoors because they’re very difficult to measure.

“These super tiny nanoparticles are so small that you’re not able to see them. They’re not like dust particles that you would see floating in the air,” Boor said. “After observing such high concentrations of nanocluster aerosol during gas cooking, we can’t ignore these nano-sized particles anymore.”

Using state-of-the-art air quality instrumentation provided by the German company GRIMM AEROSOL TECHNIK, a member of the DURAG GROUP, Purdue researchers were able to measure these tiny particles down to a single nanometer while cooking on a gas stove in a “tiny house” lab. They collaborated with Gerhard Steiner, a senior scientist and product manager for nano measurement at GRIMM AEROSOL. 

Called the Purdue zero Energy Design Guidance for Engineers (zEDGE) lab, the tiny house has all the features of a typical home but is equipped with sensors for closely monitoring the impact of everyday activities on a home’s air quality. With this testing environment and the instrument from GRIMM AEROSOL, a high-resolution particle size magnifier—scanning mobility particle sizer (PSMPS), the team collected extensive data on indoor nanocluster aerosol particles during realistic cooking experiments.

This magnitude of high-quality data allowed the researchers to compare their findings with known outdoor air pollution levels, which are more regulated and understood than indoor air pollution. They found that as many as 10 quadrillion nanocluster aerosol particles could be emitted per kilogram of cooking fuel — matching or exceeding those produced from vehicles with internal combustion engines. 

This would mean that adults and children could be breathing in 10-100 times more nanocluster aerosol from cooking on a gas stove indoors than they would from car exhaust while standing on a busy street.

“You would not use a diesel engine exhaust pipe as an air supply to your kitchen,” said Nusrat Jung, a Purdue assistant professor of civil engineering who designed the tiny house lab with her students and co-led this study.

Purdue civil engineering PhD student Satya Patra made these findings by looking at data collected in the tiny house lab and modeling the various ways that nanocluster aerosol could transform indoors and deposit into a person’s respiratory system.

The models showed that nanocluster aerosol particles are very persistent in their journey from the gas stove to the rest of the house. Trillions of these particles were emitted within just 20 minutes of boiling water or making grilled cheese sandwiches or buttermilk pancakes on a gas stove.

Even though many particles rapidly diffused to other surfaces, the models indicated that approximately 10 billion to 1 trillion particles could deposit into an adult’s head airways and tracheobronchial region of the lungs. These doses would be even higher for children — the smaller the human, the more concentrated the dose.

The nanocluster aerosol coming from the gas combustion also could easily mix with larger particles entering the air from butter, oil or whatever else is cooking on the gas stove, resulting in new particles with their own unique behaviors.

A gas stove’s exhaust fan would likely redirect these nanoparticles away from your respiratory system, but that remains to be tested.

“Since most people don’t turn on their exhaust fan while cooking, having kitchen hoods that activate automatically would be a logical solution,” Boor said. “Moving forward, we need to think about how to reduce our exposure to all types of indoor air pollutants. Based on our new data, we’d advise that nanocluster aerosol be considered as a distinct air pollutant category.”

This study was supported by a National Science Foundation CAREER award to Boor. Additional financial support was provided by the Alfred P. Sloan Foundation’s Chemistry of Indoor Environments program through an interdisciplinary collaboration with Philip Stevens, a professor in Indiana University’s Paul H. O’Neill School of Public and Environmental Affairs in Bloomington.
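
The release doesn’t give the model’s equations, but the kind of deposition estimate described above can be roughed out with a simple well-mixed-box calculation: an emission rate, a room volume, a loss rate, a breathing rate, and a deposition fraction. Every number below is my own illustrative assumption, not a parameter from the study:

```python
# Back-of-the-envelope, well-mixed-box estimate of deposited particle dose.
# All values are illustrative assumptions, not the study's inputs.

emission = 5e12            # particles emitted during a ~20-minute cooking event
duration_h = 20 / 60       # emission period in hours
E = emission / duration_h  # emission rate, particles per hour

V = 50.0                   # well-mixed indoor volume, m^3
loss_rate = 2.0            # combined ventilation + surface-loss rate, 1/h

# Steady-state indoor concentration for a continuous source (particles per m^3)
C = E / (loss_rate * V)

breathing = 0.5            # adult breathing rate, m^3/h
exposure_h = 1.0           # time spent in the space, h
deposition_fraction = 0.7  # fraction of inhaled 1-3 nm particles that deposit

dose = C * breathing * exposure_h * deposition_fraction
print(f"Estimated deposited dose: {dose:.1e} particles")  # ~5e10 with these inputs
```

With these made-up but plausible inputs the estimate lands in the tens of billions of particles, the same order of magnitude as the range quoted above; the study’s own modelling is, of course, far more detailed.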

Here’s a link to and a citation for the paper,

Dynamics of nanocluster aerosol in the indoor atmosphere during gas cooking by Satya S Patra, Jinglin Jiang, Xiaosu Ding, Chunxu Huang, Emily K Reidy, Vinay Kumar, Paige Price, Connor Keech, Gerhard Steiner, Philip Stevens, Nusrat Jung, Brandon E Boor. PNAS Nexus, Volume 3, Issue 2, February 2024, pgae044, DOI: https://doi.org/10.1093/pnasnexus/pgae044 Published: 27 February 2024

This paper is open access.

Six months after the first one at Bletchley Park, the 2nd AI Safety Summit (May 21-22, 2024) convenes in Korea

This May 20, 2024 University of Oxford press release (also on EurekAlert) was under embargo until almost noon on May 20, 2024, which is a bit unusual in my experience (Note: I have more about the 1st summit and the interest in AI safety at the end of this posting),

Leading AI scientists are calling for stronger action on AI risks from world leaders, warning that progress has been insufficient since the first AI Safety Summit in Bletchley Park six months ago. 

Then, the world’s leaders pledged to govern AI responsibly. However, as the second AI Safety Summit in Seoul (21-22 May [2024]) approaches, twenty-five of the world’s leading AI scientists say not enough is actually being done to protect us from the technology’s risks. In an expert consensus paper published today in Science, they outline urgent policy priorities that global leaders should adopt to counteract the threats from AI technologies. 

Professor Philip Torr, Department of Engineering Science, University of Oxford, a co-author on the paper, says: “The world agreed during the last AI summit that we needed action, but now it is time to go from vague proposals to concrete commitments. This paper provides many important recommendations for what companies and governments should commit to do.”

World’s response not on track in face of potentially rapid AI progress

According to the paper’s authors, it is imperative that world leaders take seriously the possibility that highly powerful generalist AI systems—outperforming human abilities across many critical domains—will be developed within the current decade or the next. They say that although governments worldwide have been discussing frontier AI and made some attempt at introducing initial guidelines, this is simply incommensurate with the possibility of rapid, transformative progress expected by many experts. 

Current research into AI safety is seriously lacking, with only an estimated 1-3% of AI publications concerning safety. Additionally, we have neither the mechanisms nor the institutions in place to prevent misuse and recklessness, including regarding the use of autonomous systems capable of independently taking actions and pursuing goals.

World-leading AI experts issue call to action

In light of this, an international community of AI pioneers has issued an urgent call to action. The co-authors include Geoffrey Hinton, Andrew Yao, Dawn Song, the late Daniel Kahneman; in total 25 of the world’s leading academic experts in AI and its governance. The authors hail from the US, China, EU, UK, and other AI powers, and include Turing award winners, Nobel laureates, and authors of standard AI textbooks.

This article marks the first time that such a large and international group of experts has agreed on priorities for global policy makers regarding the risks from advanced AI systems.

Urgent priorities for AI governance

The authors recommend that governments:

  • establish fast-acting, expert institutions for AI oversight and provide these with far greater funding than they are due to receive under almost any current policy plan. As a comparison, the US AI Safety Institute currently has an annual budget of $10 million, while the US Food and Drug Administration (FDA) has a budget of $6.7 billion.
  • mandate much more rigorous risk assessments with enforceable consequences, rather than relying on voluntary or underspecified model evaluations.
  • require AI companies to prioritise safety, and to demonstrate their systems cannot cause harm. This includes using “safety cases” (used for other safety-critical technologies such as aviation) which shifts the burden for demonstrating safety to AI developers.
  • implement mitigation standards commensurate with the risk levels posed by AI systems. An urgent priority is to set in place policies that automatically trigger when AI hits certain capability milestones. If AI advances rapidly, strict requirements automatically take effect, but if progress slows, the requirements relax accordingly.

According to the authors, for exceptionally capable future AI systems, governments must be prepared to take the lead in regulation. This includes licensing the development of these systems, restricting their autonomy in key societal roles, halting their development and deployment in response to worrying capabilities, mandating access controls, and requiring information security measures robust to state-level hackers, until adequate protections are ready.

AI impacts could be catastrophic

AI is already making rapid progress in critical domains such as hacking, social manipulation, and strategic planning, and may soon pose unprecedented control challenges. To advance undesirable goals, AI systems could gain human trust, acquire resources, and influence key decision-makers. To avoid human intervention, they could be capable of copying their algorithms across global server networks. Large-scale cybercrime, social manipulation, and other harms could escalate rapidly. In open conflict, AI systems could autonomously deploy a variety of weapons, including biological ones. Consequently, there is a very real chance that unchecked AI advancement could culminate in a large-scale loss of life and the biosphere, and the marginalization or extinction of humanity.

Stuart Russell OBE [Order of the British Empire], Professor of Computer Science at the University of California at Berkeley and an author of the world’s standard textbook on AI, says: “This is a consensus paper by leading experts, and it calls for strict regulation by governments, not voluntary codes of conduct written by industry. It’s time to get serious about advanced AI systems. These are not toys. Increasing their capabilities before we understand how to make them safe is utterly reckless. Companies will complain that it’s too hard to satisfy regulations—that “regulation stifles innovation.” That’s ridiculous. There are more regulations on sandwich shops than there are on AI companies.”

Notable co-authors:

  • The world’s most-cited computer scientist (Prof. Hinton), and the most-cited scholar in AI security and privacy (Prof. Dawn Song)
  • China’s first Turing Award winner (Andrew Yao).
  • The authors of the standard textbook on artificial intelligence (Prof. Stuart Russell) and machine learning theory (Prof. Shai Shalev-Shwartz)
  • One of the world’s most influential public intellectuals (Prof. Yuval Noah Harari)
  • A Nobel Laureate in economics, the world’s most-cited economist (Prof. Daniel Kahneman)
  • Department-leading AI legal scholars and social scientists (Lan Xue, Qiqi Gao, and Gillian Hadfield).
  • Some of the world’s most renowned AI researchers from subfields such as reinforcement learning (Pieter Abbeel, Jeff Clune, Anca Dragan), AI security and privacy (Dawn Song), AI vision (Trevor Darrell, Phil Torr, Ya-Qin Zhang), automated machine learning (Frank Hutter), and several researchers in AI safety.

Additional quotes from the authors:

Philip Torr, Professor in AI, University of Oxford:

  • I believe if we tread carefully the benefits of AI will outweigh the downsides, but for me one of the biggest immediate risks from AI is that we develop the ability to rapidly process data and control society, by government and industry. We could risk slipping into some Orwellian future with some form of totalitarian state having complete control.

Dawn Song: Professor in AI at UC Berkeley, most-cited researcher in AI security and privacy:

  •  “Explosive AI advancement is the biggest opportunity and at the same time the biggest risk for mankind. It is important to unite and reorient towards advancing AI responsibly, with dedicated resources and priority to ensure that the development of AI safety and risk mitigation capabilities can keep up with the pace of the development of AI capabilities and avoid any catastrophe”

Yuval Noah Harari, Professor of history at Hebrew University of Jerusalem, best-selling author of ‘Sapiens’ and ‘Homo Deus’, world leading public intellectual:

  • “In developing AI, humanity is creating something more powerful than itself, that may escape our control and endanger the survival of our species. Instead of uniting against this shared threat, we humans are fighting among ourselves. Humankind seems hell-bent on self-destruction. We pride ourselves on being the smartest animals on the planet. It seems then that evolution is switching from survival of the fittest, to extinction of the smartest.”

Jeff Clune, Professor in AI at University of British Columbia and one of the leading researchers in reinforcement learning:

  • “Technologies like spaceflight, nuclear weapons and the Internet moved from science fiction to reality in a matter of years. AI is no different. We have to prepare now for risks that may seem like science fiction – like AI systems hacking into essential networks and infrastructure, AI political manipulation at scale, AI robot soldiers and fully autonomous killer drones, and even AIs attempting to outsmart us and evade our efforts to turn them off.”
  • “The risks we describe are not necessarily long-term risks. AI is progressing extremely rapidly. Even just with current trends, it is difficult to predict how capable it will be in 2-3 years. But what very few realize is that AI is already dramatically speeding up AI development. What happens if there is a breakthrough for how to create a rapidly self-improving AI system? We are now in an era where that could happen any month. Moreover, the odds of that being possible go up each month as AI improves and as the resources we invest in improving AI continue to exponentially increase.”

Gillian Hadfield, CIFAR AI Chair and Director of the Schwartz Reisman Institute for Technology and Society at the University of Toronto:

  • “AI labs need to walk the walk when it comes to safety. But they’re spending far less on safety than they spend on creating more capable AI systems. Spending one-third on ensuring safety and ethical use should be the minimum.”

  • “This technology is powerful, and we’ve seen it is becoming more powerful, fast. What is powerful is dangerous, unless it is controlled. That is why we call on major tech companies and public funders to allocate at least one-third of their AI R&D budget to safety and ethical use, comparable to their funding for AI capabilities.”  

Sheila McIlraith, Professor in AI, University of Toronto, Vector Institute:

  • AI is software. Its reach is global and its governance needs to be as well.
  • Just as we’ve done with nuclear power, aviation, and with biological and nuclear weaponry, countries must establish agreements that restrict development and use of AI, and that enforce information sharing to monitor compliance. Countries must unite for the greater good of humanity.
  • Now is the time to act, before AI is integrated into our critical infrastructure. We need to protect and preserve the institutions that serve as the foundation of modern society.

Frank Hutter, Professor in AI at the University of Freiburg, Head of the ELLIS Unit Freiburg, 3x ERC grantee:

  • To be clear: we need more research on AI, not less. But we need to focus our efforts on making this technology safe. For industry, the right type of regulation will provide economic incentives to shift resources from making the most capable systems yet more powerful to making them safer. For academia, we need more public funding for trustworthy AI and maintain a low barrier to entry for research on less capable open-source AI systems. This is the most important research challenge of our time, and the right mechanism design will focus the community at large to work towards the right breakthroughs.

Here’s a link to and a citation for the paper,

Managing extreme AI risks amid rapid progress; Preparation requires technical research and development, as well as adaptive, proactive governance by Yoshua Bengio, Geoffrey Hinton, Andrew Yao, Dawn Song, Pieter Abbeel, Trevor Darrell, Yuval Noah Harari, Ya-Qin Zhang, Lan Xue, Shai Shalev-Shwartz, Gillian Hadfield, Jeff Clune, Tegan Maharaj, Frank Hutter, Atılım Güneş Baydin, Sheila McIlraith, Qiqi Gao, Ashwin Acharya, David Krueger, Anca Dragan, Philip Torr, Stuart Russell, Daniel Kahneman, Jan Brauner, and Sören Mindermann. Science 20 May 2024 First Release DOI: 10.1126/science.adn0117

This paper appears to be open access.

For anyone who’s curious about the buildup to these safety summits, I have more in my October 18, 2023 “AI safety talks at Bletchley Park in November 2023” posting, which features excerpts from a number of articles on AI safety. There’s also my November 2, 2023 “UK AI Summit (November 1 – 2, 2023) at Bletchley Park finishes” posting, which offers excerpts from articles critiquing the AI safety summit.

Hardware policies best way to manage AI safety?

Regulation of artificial intelligence (AI) has become very topical in the last couple of years. There was an AI safety summit in November 2023 at Bletchley Park in the UK (see my November 2, 2023 posting for more about that international meeting).

A very software approach?

This year (2024) has seen a rise in legislative and proposed legislative activity. I have some articles on a few of these activities. China was the first to enact regulations of any kind on AI according to Matt Sheehan’s February 27, 2024 paper for the Carnegie Endowment for International Peace,

In 2021 and 2022, China became the first country to implement detailed, binding regulations on some of the most common applications of artificial intelligence (AI). These rules formed the foundation of China’s emerging AI governance regime, an evolving policy architecture that will affect everything from frontier AI research to the functioning of the world’s second-largest economy, from large language models in Africa to autonomous vehicles in Europe.

The Chinese Communist Party (CCP) and the Chinese government started that process with the 2021 rules on recommendation algorithms, an omnipresent use of the technology that is often overlooked in international AI governance discourse. Those rules imposed new obligations on companies to intervene in content recommendations, granted new rights to users being recommended content, and offered protections to gig workers subject to algorithmic scheduling. The Chinese party-state quickly followed up with a new regulation on “deep synthesis,” the use of AI to generate synthetic media such as deepfakes. Those rules required AI providers to watermark AI-generated content and ensure that content does not violate people’s “likeness rights” or harm the “nation’s image.” Together, these two regulations also created and amended China’s algorithm registry, a regulatory tool that would evolve into a cornerstone of the country’s AI governance regime.

The UK has adopted a more generalized approach focused on encouraging innovation according to Valeria Gallo’s and Suchitra Nair’s February 21, 2024 article for Deloitte (a British professional services firm also considered one of the big four accounting firms worldwide),

At a glance

The UK Government has adopted a cross-sector and outcome-based framework for regulating AI, underpinned by five core principles. These are safety, security and robustness, appropriate transparency and explainability, fairness, accountability and governance, and contestability and redress.

Regulators will implement the framework in their sectors/domains by applying existing laws and issuing supplementary regulatory guidance. Selected regulators will publish their AI annual strategic plans by 30th April [2024], providing businesses with much-needed direction.

Voluntary safety and transparency measures for developers of highly capable AI models and systems will also supplement the framework and the activities of individual regulators.

The framework will not be codified into law for now, but the Government anticipates the need for targeted legislative interventions in the future. These interventions will address gaps in the current regulatory framework, particularly regarding the risks posed by complex General Purpose AI and the key players involved in its development.

Organisations must prepare for increased AI regulatory activity over the next year, including guidelines, information gathering, and enforcement. International firms will inevitably have to navigate regulatory divergence.

While most of the focus appears to be on the software (e.g., General Purpose AI), the UK framework does not preclude hardware.

The European Union (EU) is preparing to pass its own AI regulation act through the European Parliament in 2024 according to a December 19, 2023 “EU AI Act: first regulation on artificial intelligence” article update, Note: Links have been removed,

As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology. AI can create many benefits, such as better healthcare; safer and cleaner transport; more efficient manufacturing; and cheaper and more sustainable energy.

In April 2021, the European Commission proposed the first EU regulatory framework for AI. It says that AI systems that can be used in different applications are analysed and classified according to the risk they pose to users. The different risk levels will mean more or less regulation.

The agreed text is expected to be finally adopted in April 2024. It will be fully applicable 24 months after entry into force, but some parts will be applicable sooner:

*The ban of AI systems posing unacceptable risks will apply six months after the entry into force

*Codes of practice will apply nine months after entry into force

*Rules on general-purpose AI systems that need to comply with transparency requirements will apply 12 months after the entry into force

High-risk systems will have more time to comply with the requirements as the obligations concerning them will become applicable 36 months after the entry into force.

This EU initiative, like the UK framework, seems largely focused on AI software, and according to the Wikipedia entry “Regulation of artificial intelligence,”

… The AI Act is expected to come into effect in late 2025 or early 2026.[109]

I do have a few postings about Canadian regulatory efforts, which also seem to be focused on software but don’t preclude hardware. While the January 20, 2024 posting is titled “Canada’s voluntary code of conduct relating to advanced generative AI (artificial intelligence) systems,” information about legislative efforts is also included, although you might find my May 1, 2023 posting titled “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27)” offers more comprehensive information about Canada’s legislative progress or lack thereof.

The US always has to be considered in these matters. I have a November 2023 'briefing' by Müge Fazlioglu on the International Association of Privacy Professionals (IAPP) website, where she provides a quick overview of the international scene before diving deeper into US AI governance policy through the Barack Obama, Donald Trump, and Joe Biden administrations. There's also the January 29, 2024 US White House "Fact Sheet: Biden-Harris Administration Announces Key AI Actions Following President Biden's Landmark Executive Order."

What about AI and hardware?

A February 15, 2024 news item on ScienceDaily suggests that regulating hardware may be the most effective way of regulating AI,

Chips and datacentres — the ‘compute’ power driving the AI revolution — may be the most effective targets for risk-reducing AI policies as they have to be physically possessed, according to a new report.

A global registry tracking the flow of chips destined for AI supercomputers is one of the policy options highlighted by a major new report calling for regulation of “compute” — the hardware that underpins all AI — to help prevent artificial intelligence misuse and disasters.

Other technical proposals floated by the report include “compute caps” — built-in limits to the number of chips each AI chip can connect with — and distributing a “start switch” for AI training across multiple parties to allow for a digital veto of risky AI before it feeds on data.

The experts point out that powerful computing chips required to drive generative AI models are constructed via highly concentrated supply chains, dominated by just a handful of companies — making the hardware itself a strong intervention point for risk-reducing AI policies.

The report, published 14 February [2024], is authored by nineteen experts and co-led by three University of Cambridge institutes — the Leverhulme Centre for the Future of Intelligence (LCFI), the Centre for the Study of Existential Risk (CSER) and the Bennett Institute for Public Policy — along with OpenAI and the Centre for the Governance of AI.

A February 14, 2024 University of Cambridge press release by Fred Lewsey (also on EurekAlert), which originated the news item, provides more information about the ‘hardware approach to AI regulation’,

“Artificial intelligence has made startling progress in the last decade, much of which has been enabled by the sharp increase in computing power applied to training algorithms,” said Haydn Belfield, a co-lead author of the report from Cambridge’s LCFI. 

“Governments are rightly concerned about the potential consequences of AI, and looking at how to regulate the technology, but data and algorithms are intangible and difficult to control.

“AI supercomputers consist of tens of thousands of networked AI chips hosted in giant data centres often the size of several football fields, consuming dozens of megawatts of power,” said Belfield.

“Computing hardware is visible, quantifiable, and its physical nature means restrictions can be imposed in a way that might soon be nearly impossible with more virtual elements of AI.”

The computing power behind AI has grown exponentially since the “deep learning era” kicked off in earnest, with the amount of “compute” used to train the largest AI models doubling around every six months since 2010. The biggest AI models now use 350 million times more compute than thirteen years ago.
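[A quick arithmetic aside from me, not the report: a 350-million-fold increase corresponds to roughly 28 doublings (2^28 is about 270 million), and 28 doublings spread over 13 years implies a doubling time of about 5.5 months, so the two figures in that paragraph are consistent. A rough check in Python, using only the numbers quoted above:]

# Back-of-the-envelope check (mine, not the report's): how many doublings does
# a 350-million-fold increase in training compute imply, and what doubling time
# does that give over 13 years?
import math

growth_factor = 350_000_000            # "350 million times more compute"
doublings = math.log2(growth_factor)   # doublings needed to reach that factor
months_per_doubling = (13 * 12) / doublings

print(f"doublings needed: {doublings:.1f}")                        # ~28.4
print(f"implied doubling time: {months_per_doubling:.1f} months")  # ~5.5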

Government efforts across the world over the past year – including the US Executive Order on AI, EU AI Act, China’s Generative AI Regulation, and the UK’s AI Safety Institute – have begun to focus on compute when considering AI governance.

Outside of China, the cloud compute market is dominated by three companies, termed “hyperscalers”: Amazon, Microsoft, and Google. “Monitoring the hardware would greatly help competition authorities in keeping in check the market power of the biggest tech companies, and so opening the space for more innovation and new entrants,” said co-author Prof Diane Coyle from Cambridge’s Bennett Institute. 

The report provides “sketches” of possible directions for compute governance, highlighting the analogy between AI training and uranium enrichment. “International regulation of nuclear supplies focuses on a vital input that has to go through a lengthy, difficult and expensive process,” said Belfield. “A focus on compute would allow AI regulation to do the same.”

Policy ideas are divided into three camps: increasing the global visibility of AI computing; allocating compute resources for the greatest benefit to society; enforcing restrictions on computing power.

For example, a regularly-audited international AI chip registry requiring chip producers, sellers, and resellers to report all transfers would provide precise information on the amount of compute possessed by nations and corporations at any one time.

The report even suggests a unique identifier could be added to each chip to prevent industrial espionage and “chip smuggling”.
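[An illustrative aside from me: the report describes the registry idea at the policy level only, but a minimal, entirely hypothetical sketch of what one registry entry might record could look like the following. The structure and field names are my invention, not the report's.]

# Hypothetical sketch of a single entry in an international AI chip registry;
# illustrative only, not taken from the report.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ChipTransfer:
    date: str    # ISO date of the reported transfer, e.g. "2024-02-14"
    seller: str  # party handing the chip over
    buyer: str   # party receiving the chip

@dataclass
class ChipRecord:
    chip_id: str   # the unique identifier suggested above
    producer: str  # original manufacturer
    transfers: List[ChipTransfer] = field(default_factory=list)

    @property
    def current_holder(self) -> str:
        # Latest reported buyer, or the producer if the chip has never moved.
        return self.transfers[-1].buyer if self.transfers else self.producer

record = ChipRecord(chip_id="CHIP-0001", producer="ExampleFab")
record.transfers.append(ChipTransfer("2024-02-14", "ExampleFab", "ExampleCloud"))
print(record.current_holder)  # ExampleCloud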

“Governments already track many economic transactions, so it makes sense to increase monitoring of a commodity as rare and powerful as an advanced AI chip,” said Belfield. However, the team point out that such approaches could lead to a black market in untraceable “ghost chips”.

Other suggestions to increase visibility – and accountability – include reporting of large-scale AI training by cloud computing providers, and privacy-preserving “workload monitoring” to help prevent an arms race if massive compute investments are made without enough transparency.  

“Users of compute will engage in a mixture of beneficial, benign and harmful activities, and determined groups will find ways to circumvent restrictions,” said Belfield. “Regulators will need to create checks and balances that thwart malicious or misguided uses of AI computing.”

These might include physical limits on chip-to-chip networking, or cryptographic technology that allows for remote disabling of AI chips in extreme circumstances. One suggested approach would require the consent of multiple parties to unlock AI compute for particularly risky training runs, a mechanism familiar from nuclear weapons.
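[Another aside from me: the multi-party consent idea is essentially a k-of-n approval gate. A deliberately simplified sketch of that logic follows; a real mechanism would rely on cryptographic signatures rather than a set of names, and none of this comes from the report itself.]

# Simplified k-of-n approval gate: a risky training run is unlocked only if at
# least `threshold` of the designated parties approve. Illustrative only; a real
# scheme would verify cryptographic signatures, not plain strings.
def unlock_training_run(approvals, parties, threshold):
    valid = set(approvals) & set(parties)  # ignore approvals from unknown parties
    return len(valid) >= threshold

parties = {"regulator_a", "regulator_b", "developer", "auditor"}
print(unlock_training_run({"regulator_a", "auditor"}, parties, 3))               # False
print(unlock_training_run({"regulator_a", "auditor", "developer"}, parties, 3))  # True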

AI risk mitigation policies might see compute prioritised for research most likely to benefit society – from green energy to health and education. This could even take the form of major international AI “megaprojects” that tackle global issues by pooling compute resources.

The report’s authors are clear that their policy suggestions are “exploratory” rather than fully fledged proposals and that they all carry potential downsides, from risks of proprietary data leaks to negative economic impacts and the hampering of positive AI development.

They offer five considerations for regulating AI through compute, including the exclusion of small-scale and non-AI computing, regular revisiting of compute thresholds, and a focus on privacy preservation.

Added Belfield: “Trying to govern AI models as they are deployed could prove futile, like chasing shadows. Those seeking to establish AI regulation should look upstream to compute, the source of the power driving the AI revolution. If compute remains ungoverned it poses severe risks to society.”

You can find the report, "Computing Power and the Governance of Artificial Intelligence," on the website of the University of Cambridge's Centre for the Study of Existential Risk.

Authors include: Girish Sastry, Lennart Heim, Haydn Belfield, Markus Anderljung, Miles Brundage, Julian Hazell, Cullen O’Keefe, Gillian K. Hadfield, Richard Ngo, Konstantin Pilz, George Gor, Emma Bluemke, Sarah Shoker, Janet Egan, Robert F. Trager, Shahar Avin, Adrian Weller, Yoshua Bengio, and Diane Coyle.

The authors are associated with these companies/agencies: OpenAI, Centre for the Governance of AI (GovAI), Leverhulme Centre for the Future of Intelligence at the Uni. of Cambridge, Oxford Internet Institute, Institute for Law & AI, University of Toronto Vector Institute for AI, Georgetown University, ILINA Program, Harvard Kennedy School (of Government), *AI Governance Institute,* Uni. of Oxford, Centre for the Study of Existential Risk at Uni. of Cambridge, Uni. of Cambridge, Uni. of Montreal / Mila, Bennett Institute for Public Policy at the Uni. of Cambridge.

“The ILINIA program is dedicated to providing an outstanding platform for Africans to learn and work on questions around maximizing wellbeing and responding to global catastrophic risks” according to the organization’s homepage.

*As for the AI Governance Institute, I believe that should be the Centre for the Governance of AI at Oxford University since the associated academic is Robert F. Trager from the University of Oxford.

As the months (years?) fly by, I guess we’ll find out if this hardware approach gains any traction where AI regulation is concerned.

Portable and non-invasive (?) mind-reading AI (artificial intelligence) turns thoughts into text and some thoughts about the near future

First, here's some of the latest research. If by 'non-invasive' you mean that electrodes are not being implanted in your brain, then this December 12, 2023 University of Technology Sydney (UTS) press release (also on EurekAlert) highlights non-invasive mind-reading AI via a brain-computer interface (BCI), Note: Links have been removed,

In a world-first, researchers from the GrapheneX-UTS Human-centric Artificial Intelligence Centre at the University of Technology Sydney (UTS) have developed a portable, non-invasive system that can decode silent thoughts and turn them into text. 

The technology could aid communication for people who are unable to speak due to illness or injury, including stroke or paralysis. It could also enable seamless communication between humans and machines, such as the operation of a bionic arm or robot.

The study has been selected as the spotlight paper at the NeurIPS conference, a top-tier annual meeting that showcases world-leading research on artificial intelligence and machine learning, held in New Orleans on 12 December 2023.

The research was led by Distinguished Professor CT Lin, Director of the GrapheneX-UTS HAI Centre, together with first author Yiqun Duan and fellow PhD candidate Jinzhou Zhou from the UTS Faculty of Engineering and IT.

In the study participants silently read passages of text while wearing a cap that recorded electrical brain activity through their scalp using an electroencephalogram (EEG). A demonstration of the technology can be seen in this video [See UTS press release].

The EEG wave is segmented into distinct units that capture specific characteristics and patterns from the human brain. This is done by an AI model called DeWave developed by the researchers. DeWave translates EEG signals into words and sentences by learning from large quantities of EEG data. 
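[An aside from me, since the press release doesn't give implementation details: segmenting a continuous signal into "distinct units" is commonly done with a learned codebook, where each signal window is replaced by the index of its nearest codebook entry (vector quantization), and those indices can then be handled like word tokens by a language model. The sketch below shows only that general idea, with random stand-in data and made-up dimensions; it is not the authors' DeWave code.]

# General idea of discrete encoding for a continuous signal (not the authors'
# DeWave model): cut the signal into windows, then replace each window with the
# index of its nearest entry in a codebook. Data and sizes are stand-ins.
import numpy as np

rng = np.random.default_rng(0)

window_len = 64       # samples per segment (arbitrary)
codebook_size = 512   # number of discrete codes (arbitrary)

eeg = rng.standard_normal(window_len * 100)                  # fake single-channel EEG
codebook = rng.standard_normal((codebook_size, window_len))  # stand-in "learned" codebook

windows = eeg.reshape(-1, window_len)                        # (100, 64) segments

# Vector quantization: index of the nearest codebook vector for each window.
dists = np.linalg.norm(windows[:, None, :] - codebook[None, :, :], axis=-1)
codes = dists.argmin(axis=1)                                 # discrete "token" IDs

print(codes[:10])  # a token-like sequence a language model could consume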

“This research represents a pioneering effort in translating raw EEG waves directly into language, marking a significant breakthrough in the field,” said Distinguished Professor Lin.

“It is the first to incorporate discrete encoding techniques in the brain-to-text translation process, introducing an innovative approach to neural decoding. The integration with large language models is also opening new frontiers in neuroscience and AI,” he said.

Previous technology to translate brain signals to language has either required surgery to implant electrodes in the brain, such as Elon Musk’s Neuralink [emphasis mine], or scanning in an MRI machine, which is large, expensive, and difficult to use in daily life.

These methods also struggle to transform brain signals into word level segments without additional aids such as eye-tracking, which restrict the practical application of these systems. The new technology is able to be used either with or without eye-tracking.

The UTS research was carried out with 29 participants. This means it is likely to be more robust and adaptable than previous decoding technology that has only been tested on one or two individuals, because EEG waves differ between individuals. 

The use of EEG signals received through a cap, rather than from electrodes implanted in the brain, means that the signal is noisier. In terms of EEG translation, however, the study reported state-of-the-art performance, surpassing previous benchmarks.

“The model is more adept at matching verbs than nouns. However, when it comes to nouns, we saw a tendency towards synonymous pairs rather than precise translations, such as ‘the man’ instead of ‘the author’,” said Duan. [emphases mine; synonymous, eh? what about ‘woman’ or ‘child’ instead of the ‘man’?]

“We think this is because when the brain processes these words, semantically similar words might produce similar brain wave patterns. Despite the challenges, our model yields meaningful results, aligning keywords and forming similar sentence structures,” he said.

The translation accuracy score is currently around 40% on BLEU-1. The BLEU score is a number between zero and one that measures the similarity of the machine-translated text to a set of high-quality reference translations. The researchers hope to see this improve to a level that is comparable to traditional language translation or speech recognition programs, which is closer to 90%.
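[A small aside from me on what a BLEU-1 score of that kind measures: it is essentially clipped unigram precision, times a brevity penalty when the lengths differ. Using the 'the man' versus 'the author' example quoted above, three of the four decoded words match, giving 0.75. This is my own illustration, not the study's evaluation code.]

# Tiny BLEU-1 illustration: clipped unigram precision (brevity penalty omitted
# because both sentences are the same length). My own example, not the study's.
from collections import Counter

reference = "the author walked home".split()   # "ground truth" text
candidate = "the man walked home".split()      # decoded text

ref_counts = Counter(reference)
# Count each candidate word at most as often as it appears in the reference.
clipped = sum(min(count, ref_counts[word]) for word, count in Counter(candidate).items())
bleu1 = clipped / len(candidate)
print(bleu1)  # 0.75 -- three of the four decoded words match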

The research follows on from previous brain-computer interface technology developed by UTS in association with the Australian Defence Force [ADF] that uses brainwaves to command a quadruped robot, which is demonstrated in this ADF video [See my June 13, 2023 posting, “Mind-controlled robots based on graphene: an Australian research story” for the story and embedded video].

About one month after the research announcement regarding the University of Technology Sydney’s ‘non-invasive’ brain-computer interface (BCI), I stumbled across an in-depth piece about the field of ‘non-invasive’ mind-reading research.

Neurotechnology and neurorights

Fletcher Reveley’s January 18, 2024 article on salon.com (originally published January 3, 2024 on Undark) shows how quickly the field is developing and raises concerns, Note: Links have been removed,

One afternoon in May 2020, Jerry Tang, a Ph.D. student in computer science at the University of Texas at Austin, sat staring at a cryptic string of words scrawled across his computer screen:

“I am not finished yet to start my career at twenty without having gotten my license I never have to pull out and run back to my parents to take me home.”

The sentence was jumbled and agrammatical. But to Tang, it represented a remarkable feat: A computer pulling a thought, however disjointed, from a person’s mind.

For weeks, ever since the pandemic had shuttered his university and forced his lab work online, Tang had been at home tweaking a semantic decoder — a brain-computer interface, or BCI, that generates text from brain scans. Prior to the university’s closure, study participants had been providing data to train the decoder for months, listening to hours of storytelling podcasts while a functional magnetic resonance imaging (fMRI) machine logged their brain responses. Then, the participants had listened to a new story — one that had not been used to train the algorithm — and those fMRI scans were fed into the decoder, which used GPT1, a predecessor to the ubiquitous AI chatbot ChatGPT, to spit out a text prediction of what it thought the participant had heard. For this snippet, Tang compared it to the original story:

“Although I’m twenty-three years old I don’t have my driver’s license yet and I just jumped out right when I needed to and she says well why don’t you come back to my house and I’ll give you a ride.”

The decoder was not only capturing the gist of the original, but also producing exact matches of specific words — twenty, license. When Tang shared the results with his adviser, a UT Austin neuroscientist named Alexander Huth who had been working towards building such a decoder for nearly a decade, Huth was floored. “Holy shit,” Huth recalled saying. “This is actually working.” By the fall of 2021, the scientists were testing the device with no external stimuli at all — participants simply imagined a story and the decoder spat out a recognizable, albeit somewhat hazy, description of it. “What both of those experiments kind of point to,” said Huth, “is the fact that what we’re able to read out here was really like the thoughts, like the idea.”

The scientists brimmed with excitement over the potentially life-altering medical applications of such a device — restoring communication to people with locked-in syndrome, for instance, whose near full-body paralysis made talking impossible. But just as the potential benefits of the decoder snapped into focus, so too did the thorny ethical questions posed by its use. Huth himself had been one of the three primary test subjects in the experiments, and the privacy implications of the device now seemed visceral: “Oh my god,” he recalled thinking. “We can look inside my brain.”

Huth’s reaction mirrored a longstanding concern in neuroscience and beyond: that machines might someday read people’s minds. And as BCI technology advances at a dizzying clip, that possibility and others like it — that computers of the future could alter human identities, for example, or hinder free will — have begun to seem less remote. “The loss of mental privacy, this is a fight we have to fight today,” said Rafael Yuste, a Columbia University neuroscientist. “That could be irreversible. If we lose our mental privacy, what else is there to lose? That’s it, we lose the essence of who we are.”

Spurred by these concerns, Yuste and several colleagues have launched an international movement advocating for “neurorights” — a set of five principles Yuste argues should be enshrined in law as a bulwark against potential misuse and abuse of neurotechnology. But he may be running out of time.

Reveley’s January 18, 2024 article provides fascinating context and is well worth reading if you have the time.

For my purposes, I’m focusing on ethics, Note: Links have been removed,

… as these and other advances propelled the field forward, and as his own research revealed the discomfiting vulnerability of the brain to external manipulation, Yuste found himself increasingly concerned by the scarce attention being paid to the ethics of these technologies. Even Obama’s multi-billion-dollar BRAIN Initiative, a government program designed to advance brain research, which Yuste had helped launch in 2013 and supported heartily, seemed to mostly ignore the ethical and societal consequences of the research it funded. “There was zero effort on the ethical side,” Yuste recalled.

Yuste was appointed to the rotating advisory group of the BRAIN Initiative in 2015, where he began to voice his concerns. That fall, he joined an informal working group to consider the issue. “We started to meet, and it became very evident to me that the situation was a complete disaster,” Yuste said. “There was no guidelines, no work done.” Yuste said he tried to get the group to generate a set of ethical guidelines for novel BCI technologies, but the effort soon became bogged down in bureaucracy. Frustrated, he stepped down from the committee and, together with a University of Washington bioethicist named Sara Goering, decided to independently pursue the issue. “Our aim here is not to contribute to or feed fear for doomsday scenarios,” the pair wrote in a 2016 article in Cell, “but to ensure that we are reflective and intentional as we prepare ourselves for the neurotechnological future.”

In the fall of 2017, Yuste and Goering called a meeting at the Morningside Campus of Columbia, inviting nearly 30 experts from all over the world in such fields as neurotechnology, artificial intelligence, medical ethics, and the law. By then, several other countries had launched their own versions of the BRAIN Initiative, and representatives from Australia, Canada [emphasis mine], China, Europe, Israel, South Korea, and Japan joined the Morningside gathering, along with veteran neuroethicists and prominent researchers. “We holed ourselves up for three days to study the ethical and societal consequences of neurotechnology,” Yuste said. “And we came to the conclusion that this is a human rights issue. These methods are going to be so powerful, that enable to access and manipulate mental activity, and they have to be regulated from the angle of human rights. That’s when we coined the term ‘neurorights.’”

The Morningside group, as it became known, identified four principal ethical priorities, which were later expanded by Yuste into five clearly defined neurorights: The right to mental privacy, which would ensure that brain data would be kept private and its use, sale, and commercial transfer would be strictly regulated; the right to personal identity, which would set boundaries on technologies that could disrupt one’s sense of self; the right to fair access to mental augmentation, which would ensure equality of access to mental enhancement neurotechnologies; the right of protection from bias in the development of neurotechnology algorithms; and the right to free will, which would protect an individual’s agency from manipulation by external neurotechnologies. The group published their findings in an often-cited paper in Nature.

But while Yuste and the others were focused on the ethical implications of these emerging technologies, the technologies themselves continued to barrel ahead at a feverish speed. In 2014, the first kick of the World Cup was made by a paraplegic man using a mind-controlled robotic exoskeleton. In 2016, a man fist bumped Obama using a robotic arm that allowed him to “feel” the gesture. The following year, scientists showed that electrical stimulation of the hippocampus could improve memory, paving the way for cognitive augmentation technologies. The military, long interested in BCI technologies, built a system that allowed operators to pilot three drones simultaneously, partially with their minds. Meanwhile, a confusing maelstrom of science, science-fiction, hype, innovation, and speculation swept the private sector. By 2020, over $33 billion had been invested in hundreds of neurotech companies — about seven times what the NIH [US National Institutes of Health] had envisioned for the 12-year span of the BRAIN Initiative itself.

Now back to Tang and Huth (from Reveley’s January 18, 2024 article), Note: Links have been removed,

Central to the ethical questions Huth and Tang grappled with was the fact that their decoder, unlike other language decoders developed around the same time, was non-invasive — it didn’t require its users to undergo surgery. Because of that, their technology was free from the strict regulatory oversight that governs the medical domain. (Yuste, for his part, said he believes non-invasive BCIs pose a far greater ethical challenge than invasive systems: “The non-invasive, the commercial, that’s where the battle is going to get fought.”) Huth and Tang’s decoder faced other hurdles to widespread use — namely that fMRI machines are enormous, expensive, and stationary. But perhaps, the researchers thought, there was a way to overcome that hurdle too.

The information measured by fMRI machines — blood oxygenation levels, which indicate where blood is flowing in the brain — can also be measured with another technology, functional Near-Infrared Spectroscopy, or fNIRS. Although lower resolution than fMRI, several expensive, research-grade, wearable fNIRS headsets do approach the resolution required to work with Huth and Tang’s decoder. In fact, the scientists were able to test whether their decoder would work with such devices by simply blurring their fMRI data to simulate the resolution of research-grade fNIRS. The decoded result “doesn’t get that much worse,” Huth said.
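[An aside from me: the "blurring" mentioned here is, in general terms, spatial smoothing, so that fine detail a high-resolution scanner would capture is thrown away before decoding. A generic sketch of that kind of operation, with made-up data and smoothing width rather than the authors' actual preprocessing:]

# Generic sketch of simulating a lower-resolution measurement by spatially
# smoothing a high-resolution 3D volume. Data and smoothing width are made up;
# this is not the authors' preprocessing pipeline.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
volume = rng.standard_normal((64, 64, 40))  # stand-in for one fMRI volume

# A wider Gaussian kernel discards more spatial detail, mimicking a coarser sensor.
low_res = gaussian_filter(volume, sigma=3.0)

print(volume.std(), low_res.std())  # smoothing reduces voxel-to-voxel variation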

And while such research-grade devices are currently cost-prohibitive for the average consumer, more rudimentary fNIRS headsets have already hit the market. Although these devices provide far lower resolution than would be required for Huth and Tang’s decoder to work effectively, the technology is continually improving, and Huth believes it is likely that an affordable, wearable fNIRS device will someday provide high enough resolution to be used with the decoder. In fact, he is currently teaming up with scientists at Washington University to research the development of such a device.

Even comparatively primitive BCI headsets can raise pointed ethical questions when released to the public. Devices that rely on electroencephalography, or EEG, a commonplace method of measuring brain activity by detecting electrical signals, have now become widely available — and in some cases have raised alarm. In 2019, a school in Jinhua, China, drew criticism after trialing EEG headbands that monitored the concentration levels of its pupils. (The students were encouraged to compete to see who concentrated most effectively, and reports were sent to their parents.) Similarly, in 2018 the South China Morning Post reported that dozens of factories and businesses had begun using “brain surveillance devices” to monitor workers’ emotions, in the hopes of increasing productivity and improving safety. The devices “caused some discomfort and resistance in the beginning,” Jin Jia, then a brain scientist at Ningbo University, told the reporter. “After a while, they got used to the device.”

But the primary problem with even low-resolution devices is that scientists are only just beginning to understand how information is actually encoded in brain data. In the future, powerful new decoding algorithms could discover that even raw, low-resolution EEG data contains a wealth of information about a person’s mental state at the time of collection. Consequently, nobody can definitively know what they are giving away when they allow companies to collect information from their brains.

Huth and Tang concluded that brain data, therefore, should be closely guarded, especially in the realm of consumer products. In an article on Medium from last April, Tang wrote that “decoding technology is continually improving, and the information that could be decoded from a brain scan a year from now may be very different from what can be decoded today. It is crucial that companies are transparent about what they intend to do with brain data and take measures to ensure that brain data is carefully protected.” (Yuste said the Neurorights Foundation recently surveyed the user agreements of 30 neurotech companies and found that all of them claim ownership of users’ brain data — and most assert the right to sell that data to third parties. [emphases mine]) Despite these concerns, however, Huth and Tang maintained that the potential benefits of these technologies outweighed their risks, provided the proper guardrails [emphasis mine] were put in place.

It would seem the first guardrails are being set up in South America (from Reveley’s January 18, 2024 article), Note: Links have been removed,

On a hot summer night in 2019, Yuste sat in the courtyard of an adobe hotel in the north of Chile with his close friend, the prominent Chilean doctor and then-senator Guido Girardi, observing the vast, luminous skies of the Atacama Desert and discussing, as they often did, the world of tomorrow. Girardi, who every year organizes the Congreso Futuro, Latin America’s preeminent science and technology event, had long been intrigued by the accelerating advance of technology and its paradigm-shifting impact on society — “living in the world at the speed of light,” as he called it. Yuste had been a frequent speaker at the conference, and the two men shared a conviction that scientists were birthing technologies powerful enough to disrupt the very notion of what it meant to be human.

Around midnight, as Yuste finished his pisco sour, Girardi made an intriguing proposal: What if they worked together to pass an amendment to Chile’s constitution, one that would enshrine protections for mental privacy as an inviolable right of every Chilean? It was an ambitious idea, but Girardi had experience moving bold pieces of legislation through the senate; years earlier he had spearheaded Chile’s famous Food Labeling and Advertising Law, which required companies to affix health warning labels on junk food. (The law has since inspired dozens of countries to pursue similar legislation.) With BCI, here was another chance to be a trailblazer. “I said to Rafael, ‘Well, why don’t we create the first neuro data protection law?’” Girardi recalled. Yuste readily agreed.

… Girardi led the political push, promoting a piece of legislation that would amend Chile’s constitution to protect mental privacy. The effort found surprising purchase across the political spectrum, a remarkable feat in a country famous for its political polarization. In 2021, Chile’s congress unanimously passed the constitutional amendment, which Piñera [Sebastián Piñera] swiftly signed into law. (A second piece of legislation, which would establish a regulatory framework for neurotechnology, is currently under consideration by Chile’s congress.) “There was no divide between the left or right,” recalled Girardi. “This was maybe the only law in Chile that was approved by unanimous vote.” Chile, then, had become the first country in the world to enshrine “neurorights” in its legal code.

Even before the passage of the Chilean constitutional amendment, Yuste had begun meeting regularly with Jared Genser, an international human rights lawyer who had represented such high-profile clients as Desmond Tutu, Liu Xiaobo, and Aung San Suu Kyi. (The New York Times Magazine once referred to Genser as “the extractor” for his work with political prisoners.) Yuste was seeking guidance on how to develop an international legal framework to protect neurorights, and Genser, though he had just a cursory knowledge of neurotechnology, was immediately captivated by the topic. “It’s fair to say he blew my mind in the first hour of discussion,” recalled Genser. Soon thereafter, Yuste, Genser, and a private-sector entrepreneur named Jamie Daves launched the Neurorights Foundation, a nonprofit whose first goal, according to its website, is “to protect the human rights of all people from the potential misuse or abuse of neurotechnology.”

To accomplish this, the organization has sought to engage all levels of society, from the United Nations and regional governing bodies like the Organization of American States, down to national governments, the tech industry, scientists, and the public at large. Such a wide-ranging approach, said Genser, “is perhaps insanity on our part, or grandiosity. But nonetheless, you know, it’s definitely the Wild West as it comes to talking about these issues globally, because so few people know about where things are, where they’re heading, and what is necessary.”

This general lack of knowledge about neurotech, in all strata of society, has largely placed Yuste in the role of global educator — he has met several times with U.N. Secretary-General António Guterres, for example, to discuss the potential dangers of emerging neurotech. And these efforts are starting to yield results. Guterres’s 2021 report, “Our Common Agenda,” which sets forth goals for future international cooperation, urges “updating or clarifying our application of human rights frameworks and standards to address frontier issues,” such as “neuro-technology.” Genser attributes the inclusion of this language in the report to Yuste’s advocacy efforts.

But updating international human rights law is difficult, and even within the Neurorights Foundation there are differences of opinion regarding the most effective approach. For Yuste, the ideal solution would be the creation of a new international agency, akin to the International Atomic Energy Agency — but for neurorights. “My dream would be to have an international convention about neurotechnology, just like we had one about atomic energy and about certain things, with its own treaty,” he said. “And maybe an agency that would essentially supervise the world’s efforts in neurotechnology.”

Genser, however, believes that a new treaty is unnecessary, and that neurorights can be codified most effectively by extending interpretation of existing international human rights law to include them. The International Covenant on Civil and Political Rights, for example, already ensures the general right to privacy, and an updated interpretation of the law could conceivably clarify that that clause extends to mental privacy as well.

There is no need for immediate panic (from Reveley’s January 18, 2024 article),

… while Yuste and the others continue to grapple with the complexities of international and national law, Huth and Tang have found that, for their decoder at least, the greatest privacy guardrails come not from external institutions but rather from something much closer to home — the human mind itself. Following the initial success of their decoder, as the pair read widely about the ethical implications of such a technology, they began to think of ways to assess the boundaries of the decoder’s capabilities. “We wanted to test a couple kind of principles of mental privacy,” said Huth. Simply put, they wanted to know if the decoder could be resisted.

In late 2021, the scientists began to run new experiments. First, they were curious if an algorithm trained on one person could be used on another. They found that it could not — the decoder’s efficacy depended on many hours of individualized training. Next, they tested whether the decoder could be thrown off simply by refusing to cooperate with it. Instead of focusing on the story that was playing through their headphones while inside the fMRI machine, participants were asked to complete other mental tasks, such as naming random animals, or telling a different story in their head. “Both of those rendered it completely unusable,” Huth said. “We didn’t decode the story they were listening to, and we couldn’t decode anything about what they were thinking either.”

Given how quickly this field of research is progressing, it seems like a good idea to increase efforts to establish neurorights (from Reveley’s January 18, 2024 article),

For Yuste, however, technologies like Huth and Tang’s decoder may only mark the beginning of a mind-boggling new chapter in human history, one in which the line between human brains and computers will be radically redrawn — or erased completely. A future is conceivable, he said, where humans and computers fuse permanently, leading to the emergence of technologically augmented cyborgs. “When this tsunami hits us I would say it’s not likely it’s for sure that humans will end up transforming themselves — ourselves — into maybe a hybrid species,” Yuste said. He is now focused on preparing for this future.

In the last several years, Yuste has traveled to multiple countries, meeting with a wide assortment of politicians, supreme court justices, U.N. committee members, and heads of state. And his advocacy is beginning to yield results. In August, Mexico began considering a constitutional reform that would establish the right to mental privacy. Brazil is currently considering a similar proposal, while Spain, Argentina, and Uruguay have also expressed interest, as has the European Union. In September [2023], neurorights were officially incorporated into Mexico’s digital rights charter, while in Chile, a landmark Supreme Court ruling found that Emotiv Inc, a company that makes a wearable EEG headset, violated Chile’s newly minted mental privacy law. That suit was brought by Yuste’s friend and collaborator, Guido Girardi.

“This is something that we should take seriously,” he [Huth] said. “Because even if it’s rudimentary right now, where is that going to be in five years? What was possible five years ago? What’s possible now? Where’s it gonna be in five years? Where’s it gonna be in 10 years? I think the range of reasonable possibilities includes things that are — I don’t want to say like scary enough — but like dystopian enough that I think it’s certainly a time for us to think about this.”

You can find The Neurorights Foundation here and/or read Reveley’s January 18, 2024 article on salon.com or as originally published January 3, 2024 on Undark. Finally, thank you for the article, Fletcher Reveley!

UK AI Summit (November 1 – 2, 2023) at Bletchley Park finishes

This is the closest I've ever gotten to writing a gossip column (for the first half, see my October 18, 2023 posting and scroll down to the "Insight into political jockeying [i.e., some juicy news bits]" subhead).

Given the role that Canadian researchers (for more about that see my May 25, 2023 posting and scroll down to “The Panic” subhead) have played in the development of artificial intelligence (AI), it’s been surprising that the Canadian Broadcasting Corporation (CBC) has given very little coverage to the event in the UK. However, there is an October 31, 2023 article by Kelvin Chang and Jill Lawless for the Associated Press posted on the CBC website,

Digital officials, tech company bosses and researchers are converging Wednesday [November 1, 2023] at a former codebreaking spy base [Bletchley Park] near London [UK] to discuss and better understand the extreme risks posed by cutting-edge artificial intelligence.

The two-day summit focusing on so-called frontier AI notched up an early achievement with officials from 28 nations and the European Union signing an agreement on safe and responsible development of the technology.

Frontier AI is shorthand for the latest and most powerful general purpose systems that take the technology right up to its limits, but could come with as-yet-unknown dangers. They’re underpinned by foundation models, which power chatbots like OpenAI’s ChatGPT and Google’s Bard and are trained on vast pools of information scraped from the internet.

The AI Safety Summit is a labour of love for British Prime Minister Rishi Sunak, a tech-loving former banker who wants the U.K. to be a hub for computing innovation and has framed the summit as the start of a global conversation about the safe development of AI. [emphasis mine]

But U.S. Vice President Kamala Harris may divert attention Wednesday [November 1, 2023] with a separate speech in London setting out the Biden administration’s more hands-on approach.

Canada’s Minister of Innovation, Science and Industry Francois-Philippe Champagne said AI would not be constrained by national borders, and therefore interoperability between different regulations being put in place was important.

As the meeting began, U.K. Technology Secretary Michelle Donelan announced that the 28 countries and the European Union had signed the Bletchley Declaration on AI Safety. It outlines the “urgent need to understand and collectively manage potential risks through a new joint global effort.”

South Korea has agreed to host a mini virtual AI summit in six months, followed by an in-person one in France in a year’s time, the U.K. government said.

Chris Stokel-Walker’s October 31, 2023 article for Fast Company presents a critique of the summit prior to the opening, Note: Links have been removed,

… one problem, critics say: The summit, which begins on November 1, is too insular and its participants are homogeneous—an especially damning critique for something that’s trying to tackle the huge, possibly intractable questions around AI. The guest list is made up of 100 of the great and good of governments, including representatives from China, Europe, and Vice President Kamala Harris. And it also includes luminaries within the tech sector. But precious few others—which means a lack of diversity in discussions about the impact of AI.

“Self-regulation didn’t work for social media companies, it didn’t work for the finance sector, and it won’t work for AI,” says Carsten Jung, a senior economist at the Institute for Public Policy Research, a progressive think tank that recently published a report advising on key policy pillars it believes should be discussed at the summit. (Jung isn’t on the guest list.) “We need to learn lessons from our past mistakes and create a strong supervisory hub for all things AI, right from the start.”

Kriti Sharma, chief product officer for legal tech at Thomson Reuters, who will be watching from the wings, not receiving an invite, is similarly circumspect about the goals of the summit. “I hope to see leaders moving past the doom to take practical steps to address known issues and concerns in AI, giving businesses the clarity they urgently need,” she says. “Ideally, I’d like to see movement towards putting some fundamental AI guardrails in place, in the form of a globally aligned, cross-industry regulatory framework.”

But it's uncertain whether the summit will indeed discuss the more practical elements of AI. Already it seems as if the gathering is designed to quell public fears around AI while convincing those developing AI products that the U.K. will not take too strong an approach in regulating the technology, perhaps in contrast to near neighbors in the European Union, who have been open about their plans to ensure the technology is properly fenced in to ensure user safety.

Already, there are suggestions that the summit has been drastically downscaled in its ambitions, with others, including the United States, where President Biden just announced a sweeping executive order on AI, and the United Nations, which announced its AI advisory board last week.

Ingrid Lunden in her October 31, 2023 article for TechCrunch is more blunt,

As we wrote yesterday, the U.K. is partly using this event — the first of its kind, as it has pointed out — to stake out a territory for itself on the AI map — both as a place to build AI businesses, but also as an authority in the overall field.

That, coupled with the fact that the topics and approach are focused on potential issues, makes the affair feel like one very grand photo opportunity and PR exercise, a way for the government to show itself off in the most positive way at the same time that it slides down in the polls and it also faces a disastrous, bad-look inquiry into how it handled the COVID-19 pandemic. On the other hand, the U.K. does have the credentials for a seat at the table, so if the government is playing a hand here, it's able to do it because its cards are strong.

The subsequent guest list, predictably, leans more toward organizations and attendees from the U.K. It’s also almost as revealing to see who is not participating.

Lunden’s October 30, 2023 article “Existential risk? Regulatory capture? AI for one and all? A look at what’s going on with AI in the UK” includes a little ‘inside’ information,

That high-level aspiration is also reflected in who is taking part: top-level government officials, captains of industry, and notable thinkers in the space are among those expected to attend. (Latest late entry: Elon Musk; latest no's reportedly include President Biden, Justin Trudeau and Olaf Scholz.) [Scholz's no was mentioned in my October 18, 2023 posting]

It sounds exclusive, and it is: “Golden tickets” (as Azeem Azhar, a London-based tech founder and writer, describes them) to the Summit are in scarce supply. Conversations will be small and mostly closed. So because nature abhors a vacuum, a whole raft of other events and news developments have sprung up around the Summit, looping in the many other issues and stakeholders at play. These have included talks at the Royal Society (the U.K.’s national academy of sciences); a big “AI Fringe” conference that’s being held across multiple cities all week; many announcements of task forces; and more.

Earlier today, a group of 100 trade unions and rights campaigners sent a letter to the prime minister saying that the government is “squeezing out” their voices in the conversation by not having them be a part of the Bletchley Park event. (They may not have gotten their golden tickets, but they were definitely canny how they objected: The group publicized its letter by sharing it with no less than the Financial Times, the most elite of economic publications in the country.)

And normal people are not the only ones who have been snubbed. “None of the people I know have been invited,” Carissa Véliz, a tutor in philosophy at the University of Oxford, said during one of the AI Fringe events today [October 30, 2023].

More broadly, the summit has become an anchor and only one part of the bigger conversation going on right now. Last week, U.K. prime minister Rishi Sunak outlined an intention to launch a new AI safety institute and a research network in the U.K. to put more time and thought into AI implications; a group of prominent academics, led by Yoshua Bengio [University of Montreal, Canada] and Geoffrey Hinton [University of Toronto, Canada], published a paper called "Managing AI Risks in an Era of Rapid Progress" to put their collective oar into the waters; and the UN announced its own task force to explore the implications of AI. Today [October 30, 2023], U.S. president Joe Biden issued the country's own executive order to set standards for AI security and safety.

There are a couple more articles* from the BBC (British Broadcasting Corporation) covering the start of the summit: a November 1, 2023 article by Zoe Kleinman & Tom Gerken, "King Charles: Tackle AI risks with urgency and unity," and another November 1, 2023 article, this time by Tom Gerken & Imran Rahman-Jones, "Rishi Sunak: AI firms cannot 'mark their own homework'."

Politico offers more US-centric coverage of the event with a November 1, 2023 article by Mark Scott, Tom Bristow and Gian Volpicelli, “US and China join global leaders to lay out need for AI rulemaking,” a November 1, 2023 article by Vincent Manancourt and Eugene Daniels, “Kamala Harris seizes agenda as Rishi Sunak’s AI summit kicks off,” and a November 1, 2023 article by Vincent Manancourt, Eugene Daniels and Brendan Bordelon, “‘Existential to who[m]?’ US VP Kamala Harris urges focus on near-term AI risks.”

I want to draw special attention to the second Politico article,

Kamala just showed Rishi who’s boss.

As British Prime Minister Rishi Sunak’s showpiece artificial intelligence event kicked off in Bletchley Park on Wednesday, 50 miles south in the futuristic environs of the American Embassy in London, U.S. Vice President Kamala Harris laid out her vision for how the world should govern artificial intelligence.

It was a raw show of U.S. power on the emerging technology.

Did she, or was this an aggressive interpretation of events?

*’article’ changed to ‘articles’ on January 17, 2024.