Category Archives: artificial intelligence (AI)

October 29, 2024 Woodrow Wilson Center event: 2024 Canada-US Legal Symposium | Artificial Intelligence Regulation, Governance, and Liability

An October 9, 2024 notice from the Wilson Center (also known as the Woodrow Wilson Center or the Woodrow Wilson International Center for Scholars), received via email, announces an annual event, which this year will focus on AI (artificial intelligence),

The 2024 Canada-US Legal Symposium | Artificial Intelligence Regulation, Governance, and Liability

Tuesday
Oct. 29, 2024
9:30am – 2:00pm ET
6th Floor Flom Auditorium, Woodrow Wilson Center

Time is running out to RSVP for the 2024 Canada-US Legal Symposium!

This year’s program will address artificial intelligence (AI) governance, regulation, and liability. High-profile advances in AI over the past four years have raised serious legal questions about the development, integration, and use of the technology. Canada and the United States, longtime leaders in innovation and hubs for some of the world’s top AI companies, are poised to lead in developing a model for responsible AI policy.

This event is co-organized with the Science, Technology, and Innovation Program and the Canada-US Law Institute.

The event page for The 2024 Canada-US Legal Symposium | Artificial Intelligence Regulation, Governance, and Liability gives you the option of an RSVP to attend the virtual or in-person event.

For more about international AI usage and regulation efforts, there’s the Wilson Center’s Science and Technology Innovation Program CTRL Forward blog. Here’s a sampling of some of the most recent postings. Note: CTRL Forward postings cover a wide range of science/technology topics, often noting how the international scene is affected; it seems September 2024 saw a major focus on AI.

For anyone curious about the current state of Canadian legislation and artificial intelligence, I have a May 1, 2023 posting which offers an overview of the state of affairs at that time (note: the bill has yet to be passed).

Bill C-27 (Digital Charter Implementation Act, 2022) is what I believe is called an omnibus bill as it includes three different pieces of proposed legislation (the Consumer Privacy Protection Act [CPPA], the Artificial Intelligence and Data Act [AIDA], and the Personal Information and Data Protection Tribunal Act [PIDPTA]). You can read the Innovation, Science and Economic Development (ISED) Canada summary here or a detailed series of descriptions of the act here on the ISED’s Canada’s Digital Charter webpage.

The omnibus bill, C-27, which includes the Artificial Intelligence and Data Act (AIDA), had passed its second reading in the House of Commons at the time of that posting. Since May 2023, the bill has been under study by the House of Commons Standing Committee on Industry and Technology, according to the Parliament of Canada’s LEGISinfo C-27, 44th Parliament, 1st session (Monday, November 22, 2021, to present): An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts webpage.

You can find more up-to-date information about the status of the Committee’s Bill C-27 meetings on this webpage, where it appears that September 26, 2024 was the committee’s most recent meeting. If you click on the highlighted meeting dates, you will be given the option of watching a webcast of the meeting. The webpage will also give you access to a list of witnesses and the briefs themselves.

Geoffrey Hinton (University of Toronto) shares 2024 Nobel Prize for Physics with John J. Hopfield (Princeton University)

What an interesting choice the committee deciding on the 2024 Nobel Prize for Physics has made. Geoffrey Hinton has been mentioned here a number of times, most recently for his participation in one of the periodic AI (artificial intelligence) panics that pop up from time to time. For more about the latest one and Hinton’s participation see my May 25, 2023 posting “Non-human authors (ChatGPT or others) of scientific and medical studies and the latest AI panic!!!” and scroll down to ‘The panic’ subhead.

I have almost nothing about John J. Hopfield other than a tangential mention of the Hopfield neural network in a January 3, 2018 posting “Mott memristor.”

An October 8, 2024 Royal Swedish Academy of Sciences press release announces the winners of the 2024 Nobel Prize in Physics,

The Royal Swedish Academy of Sciences has decided to award the Nobel Prize in Physics 2024 to

John J. Hopfield
Princeton University, NJ, USA

Geoffrey E. Hinton
University of Toronto, Canada

“for foundational discoveries and inventions that enable machine learning with artificial neural networks”

They trained artificial neural networks using physics

This year’s two Nobel Laureates in Physics have used tools from physics to develop methods that are the foundation of today’s powerful machine learning. John Hopfield created an associative memory that can store and reconstruct images and other types of patterns in data. Geoffrey Hinton invented a method that can autonomously find properties in data, and so perform tasks such as identifying specific elements in pictures.

When we talk about artificial intelligence, we often mean machine learning using artificial neural networks. This technology was originally inspired by the structure of the brain. In an artificial neural network, the brain’s neurons are represented by nodes that have different values. These nodes influence each other through connections that can be likened to synapses and which can be made stronger or weaker. The network is trained, for example by developing stronger connections between nodes with simultaneously high values. This year’s laureates have conducted important work with artificial neural networks from the 1980s onward.

John Hopfield invented a network that uses a method for saving and recreating patterns. We can imagine the nodes as pixels. The Hopfield network utilises physics that describes a material’s characteristics due to its atomic spin – a property that makes each atom a tiny magnet. The network as a whole is described in a manner equivalent to the energy in the spin system found in physics, and is trained by finding values for the connections between the nodes so that the saved images have low energy. When the Hopfield network is fed a distorted or incomplete image, it methodically works through the nodes and updates their values so the network’s energy falls. The network thus works stepwise to find the saved image that is most like the imperfect one it was fed with.
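For readers who like to see the idea in code, here’s a minimal sketch (mine, not the Nobel committee’s) of a classic Hopfield network: patterns are stored with the Hebbian rule described above (connections strengthened between units that are active together), and a corrupted pattern is recovered by repeatedly updating units so the network’s energy falls. The pattern sizes and values are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two 16-"pixel" patterns to store, with unit values of +1 or -1.
patterns = np.array([[1, -1] * 8, [1, 1, -1, -1] * 4], dtype=float)
n = patterns.shape[1]

# Hebbian storage: strengthen connections between units that are active together.
W = np.zeros((n, n))
for p in patterns:
    W += np.outer(p, p)
np.fill_diagonal(W, 0)   # no self-connections
W /= n

def energy(state):
    # The quantity the network lowers step by step.
    return -0.5 * state @ W @ state

def recall(state, sweeps=5):
    state = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(n):   # update one unit at a time
            state[i] = 1.0 if W[i] @ state >= 0 else -1.0
    return state

# Corrupt a stored pattern, then let the network settle to the nearest memory.
noisy = patterns[0].copy()
noisy[rng.choice(n, size=3, replace=False)] *= -1
recovered = recall(noisy)
print("energy before:", energy(noisy), "after:", energy(recovered))
print("recovered stored pattern:", np.array_equal(recovered, patterns[0]))
```

Each update only ever lowers the energy, which is why the network settles into one of the stored memories rather than wandering forever.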

Geoffrey Hinton used the Hopfield network as the foundation for a new network that uses a different method: the Boltzmann machine. This can learn to recognise characteristic elements in a given type of data. Hinton used tools from statistical physics, the science of systems built from many similar components. The machine is trained by feeding it examples that are very likely to arise when the machine is run. The Boltzmann machine can be used to classify images or create new examples of the type of pattern on which it was trained. Hinton has built upon this work, helping initiate the current explosive development of machine learning.
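And a companion sketch for the Boltzmann machine idea. The press release describes the general Boltzmann machine; the toy below uses the restricted variant (RBM) that Hinton later made practical, trained with one-step contrastive divergence. The layer sizes, toy patterns, and learning rate are arbitrary choices of mine.

```python
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden = 16, 8          # tiny toy sizes
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v = np.zeros(n_visible)            # visible biases
b_h = np.zeros(n_hidden)             # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(p):
    return (rng.random(p.shape) < p).astype(float)

# Toy data: noisy copies of two binary patterns the machine should learn.
patterns = np.array([[1, 0] * 8, [0, 1] * 8], dtype=float)
data = np.repeat(patterns, 50, axis=0)
data = np.abs(data - (rng.random(data.shape) < 0.05))  # flip 5% of the bits

lr = 0.1
for epoch in range(200):
    v0 = data
    ph0 = sigmoid(v0 @ W + b_h)          # hidden probabilities given the data
    h0 = sample(ph0)
    pv1 = sigmoid(h0 @ W.T + b_v)        # one-step reconstruction
    v1 = sample(pv1)
    ph1 = sigmoid(v1 @ W + b_h)
    # Contrastive-divergence update: data statistics minus model statistics.
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / len(data)
    b_v += lr * (v0 - v1).mean(axis=0)
    b_h += lr * (ph0 - ph1).mean(axis=0)

# Use the trained machine to "complete" a damaged pattern.
probe = patterns[0].copy()
probe[:4] = 0                             # blank out the first four bits
h = sample(sigmoid(probe @ W + b_h))
print(np.round(sigmoid(h @ W.T + b_v)))   # ideally resembles pattern 0
```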

“The laureates’ work has already been of the greatest benefit. In physics we use artificial neural networks in a vast range of areas, such as developing new materials with specific properties,” says Ellen Moons, Chair of the Nobel Committee for Physics.

An October 8, 2024 University of Toronto news release by Rahul Kalvapalle provides more detail about Hinton’s work and history with the university.

Ben Edwards wrote an October 8, 2024 article for Ars Technica, which in addition to reiterating the announcement explores a ‘controversial’ element to the story. Note 1: I gather I’m not the only one who found the award of a physics prize to researchers in the field of computer science a little unusual. Note 2: Links have been removed,

Hopfield and Hinton’s research, which dates back to the early 1980s, applied principles from physics to develop methods that underpin modern machine-learning techniques. Their work has enabled computers to perform tasks such as image recognition and pattern completion, capabilities that are now ubiquitous in everyday technology.

The win is already turning heads on social media because it seems unusual that research in a computer science field like machine learning might win a Nobel Prize for physics. “And the 2024 Nobel Prize in Physics does not go to physics…” tweeted German physicist Sabine Hossenfelder this morning [October 8, 2024].

From the Nobel committee’s point of view, the award largely derives from the fact that the two men drew from statistical models used in physics and partly from recognizing the advancements in physics research that came from using the men’s neural network techniques as research tools.

Nobel committee chair Ellen Moons, a physicist at Karlstad University, Sweden, said during the announcement, “Artificial neural networks have been used to advance research across physics topics as diverse as particle physics, material science and astrophysics.”

For a comprehensive overview of both Nobel prize winners, Hinton and Hopfield, their work, and their stands vis à vis the dangers of AI, there’s an October 8, 2024 Associated Press article on phys.org.

Light-based neural networks

It’s unusual to see the same headline used to highlight research from two different teams released in such proximity, February 2024 and July 2024, respectively. Both of these are neuromorphic (brainlike) computing stories.

February 2024: Neural networks made of light

The first team’s work is announced in a February 21, 2024 Friedrich Schiller University press release, Note: A link has been removed,

Researchers from the Leibniz Institute of Photonic Technology (Leibniz IPHT) and the Friedrich Schiller University in Jena, along with an international team, have developed a new technology that could significantly reduce the high energy demands of future AI systems. This innovation utilizes light for neuronal computing, inspired by the neural networks of the human brain. It promises not only more efficient data processing but also speeds many times faster than current methods, all while consuming considerably less energy. Published in the prestigious journal “Advanced Science,” their work introduces new avenues for environmentally friendly AI applications, as well as advancements in computerless diagnostics and intelligent microscopy.

Artificial intelligence (AI) is pivotal in advancing biotechnology and medical procedures, ranging from cancer diagnostics to the creation of new antibiotics. However, the ecological footprint of large-scale AI systems is substantial. For instance, training extensive language models like ChatGPT-3 requires several gigawatt-hours of energy—enough to power an average nuclear power plant at full capacity for several hours.

Prof. Mario Chemnitz, new Junior Professor of Intelligent Photonic Systems at Friedrich Schiller University Jena, and Dr Bennet Fischer from Leibniz IPHT in Jena, in collaboration with their international team, have devised an innovative method to develop potentially energy-efficient computing systems that forego the need for extensive electronic infrastructure. They harness the unique interactions of light waves within optical fibers to forge an advanced artificial learning system.

A single fiber instead of thousands of components

Unlike traditional systems that rely on computer chips containing thousands of electronic components, their system uses a single optical fiber. This fiber is capable of performing the tasks of various neural networks—at the speed of light. “We utilize a single optical fiber to mimic the computational power of numerous neural networks,” Mario Chemnitz, who is also leader of the “Smart Photonics” junior research group at Leibniz IPHT, explains. “By leveraging the unique physical properties of light, this system will enable the rapid and efficient processing of vast amounts of data in the future.”

Delving into the mechanics reveals how information transmission occurs through the mixing of light frequencies: Data—whether pixel values from images or frequency components of an audio track—are encoded onto the color channels of ultrashort light pulses. These pulses carry the information through the fiber, undergoing various combinations, amplifications, or attenuations. The emergence of new color combinations at the fiber’s output enables the prediction of data types or contexts. For example, specific color channels can indicate visible objects in images or signs of illness in a voice.

A prime example of machine learning is identifying different numbers from thousands of handwritten characters. Mario Chemnitz, Bennet Fischer, and their colleagues from the Institut National de la Recherche Scientifique (INRS) in Québec utilized their technique to encode images of handwritten digits onto light signals and classify them via the optical fiber. The alteration in color composition at the fiber’s end forms a unique color spectrum—a “fingerprint” for each digit. Following training, the system can analyze and recognize new handwritten digits with significantly reduced energy consumption.
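The optics are the whole point of the team’s system and can’t be reproduced in software, but the computational recipe they describe (a fixed physical nonlinear transformation followed by training only a simple readout) parallels what machine-learning people call an extreme learning machine or reservoir computer. Here’s a rough software analogy only, not the authors’ method: scikit-learn’s small handwritten-digit set is pushed through a fixed random nonlinear projection (a loose stand-in for the fiber’s frequency mixing), and only a linear classifier is trained on the resulting “spectrum.” The projection size and other settings are arbitrary.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

X, y = load_digits(return_X_y=True)            # 8x8 handwritten digits
X = X / 16.0                                   # scale pixel values to [0, 1]

# Fixed random nonlinear projection: a crude software stand-in for the
# fiber's frequency mixing. It is never trained.
n_features = X.shape[1]
n_outputs = 512                                # hypothetical number of "output channels"
P = rng.standard_normal((n_features, n_outputs))

def fixed_transform(x):
    return np.tanh(x @ P)                      # nonlinear "spectrum" per sample

Z_train, Z_test, y_train, y_test = train_test_split(
    fixed_transform(X), y, test_size=0.25, random_state=0)

# Only this simple linear readout is trained.
readout = LogisticRegression(max_iter=2000)
readout.fit(Z_train, y_train)
print("test accuracy:", readout.score(Z_test, y_test))
```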

System recognizes COVID-19 from voice samples

“In simpler terms, pixel values are converted into varying intensities of primary colors—more red or less blue, for instance,” Mario Chemnitz details. “Within the fiber, these primary colors blend to create the full spectrum of the rainbow. The shade of our mixed purple, for example, reveals much about the data processed by our system.”

The team has also successfully applied this method in a pilot study to diagnose COVID-19 infections using voice samples, achieving a detection rate that surpasses the best digital systems to date.

“We are the first to demonstrate that such a vibrant interplay of light waves in optical fibers can directly classify complex information without any additional intelligent software,” Mario Chemnitz states.

Since December 2023, Mario Chemnitz has held the position of Junior Professor of Intelligent Photonic Systems at Friedrich Schiller University Jena. Following his return from INRS in Canada in 2022, where he served as a postdoc, Chemnitz has been leading an international team at Leibniz IPHT in Jena. With Nexus funding support from the Carl Zeiss Foundation, their research focuses on exploring the potentials of non-linear optics. Their goal is to develop computer-free intelligent sensor systems and microscopes, as well as techniques for green computing.

Here’s a link to and a citation for the paper,

Neuromorphic Computing via Fission-based Broadband Frequency Generation by Bennet Fischer, Mario Chemnitz, Yi Zhu, Nicolas Perron, Piotr Roztocki, Benjamin MacLellan, Luigi Di Lauro, A. Aadhi, Cristina Rimoldi, Tiago H. Falk, Roberto Morandotti. Advanced Science Volume 10, Issue 35 December 15, 2023 2303835 DOI: https://doi.org/10.1002/advs.202303835. First published: 02 October 2023

This paper is open access.

July 2024: Neural networks made of light

A July 12, 2024 news item on ScienceDaily announces research from another German team,

Scientists propose a new way of implementing a neural network with an optical system which could make machine learning more sustainable in the future. The researchers at the Max Planck Institute for the Science of Light have published their new method in Nature Physics, demonstrating a method much simpler than previous approaches.

A July 12, 2024 Max Planck Institute for the Science of Light press release (also on EurekAlert), which originated the news item, provides more detail about their approach to neuromorphic computing,

Machine learning and artificial intelligence are becoming increasingly widespread with applications ranging from computer vision to text generation, as demonstrated by ChatGPT. However, these complex tasks require increasingly complex neural networks; some with many billion parameters. This rapid growth of neural network size has put the technologies on an unsustainable path due to their exponentially growing energy consumption and training times. For instance, it is estimated that training GPT-3 consumed more than 1,000 MWh of energy, which amounts to the daily electrical energy consumption of a small town. This trend has created a need for faster, more energy- and cost-efficient alternatives, sparking the rapidly developing field of neuromorphic computing. The aim of this field is to replace the neural networks on our digital computers with physical neural networks. These are engineered to perform the required mathematical operations physically in a potentially faster and more energy-efficient way.

Optics and photonics are particularly promising platforms for neuromorphic computing since energy consumption can be kept to a minimum. Computations can be performed in parallel at very high speeds only limited by the speed of light. However, so far, there have been two significant challenges: Firstly, realizing the necessary complex mathematical computations requires high laser powers. Secondly, the lack of an efficient general training method for such physical neural networks.

Both challenges can be overcome with the new method proposed by Clara Wanjura and Florian Marquardt from the Max Planck Institute for the Science of Light in their new article in Nature Physics. “Normally, the data input is imprinted on the light field. However, in our new methods we propose to imprint the input by changing the light transmission,” explains Florian Marquardt, Director at the Institute. In this way, the input signal can be processed in an arbitrary fashion. This is true even though the light field itself behaves in the simplest way possible in which waves interfere without otherwise influencing each other. Therefore, their approach allows one to avoid complicated physical interactions to realize the required mathematical functions which would otherwise require high-power light fields. Evaluating and training this physical neural network would then become very straightforward: “It would really be as simple as sending light through the system and observing the transmitted light. This lets us evaluate the output of the network. At the same time, this allows one to measure all relevant information for the training”, says Clara Wanjura, the first author of the study. The authors demonstrated in simulations that their approach can be used to perform image classification tasks with the same accuracy as digital neural networks.
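To illustrate the core idea numerically (this is my toy example, not the model in the Nature Physics paper): the wave physics stays linear, but the input is written into the transmission properties of the system rather than into the light field, and the measured output then depends nonlinearly on the input because the steady-state field involves inverting a matrix that contains those input-dependent parameters. Everything below, from the chain of coupled modes to the encoding, is a hypothetical sketch.

```python
import numpy as np

N = 6                                   # number of coupled modes (arbitrary)
coupling = 1.0
loss = 0.1
drive = np.zeros(N, dtype=complex)
drive[0] = 1.0                          # light injected into the first mode

def transmitted_intensities(x):
    """Steady-state response of a linear coupled-mode chain whose detunings
    encode the input x. The wave equation is linear (a matrix equation),
    but the measured output depends nonlinearly on x via the matrix inverse."""
    detuning = np.concatenate([x, np.zeros(N - len(x))])
    A = np.diag(1j * detuning + loss)
    for i in range(N - 1):
        A[i, i + 1] = A[i + 1, i] = 1j * coupling
    a = np.linalg.solve(A, drive)       # solve the linear scattering problem
    return np.abs(a) ** 2               # what a detector at each port measures

# The measured outputs are a nonlinear function of the input parameters:
for x in ([0.1, 0.2, 0.3], [0.2, 0.4, 0.6]):
    print(x, np.round(transmitted_intensities(np.array(x)), 4))
# Doubling the input does not double the output, even though the waves
# themselves never interact with each other.
```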

In the future, the authors are planning to collaborate with experimental groups to explore the implementation of their method. Since their proposal significantly relaxes the experimental requirements, it can be applied to many physically very different systems. This opens up new possibilities for neuromorphic devices allowing physical training over a broad range of platforms.

Here’s a link to and a citation for the paper,

Fully nonlinear neuromorphic computing with linear wave scattering by Clara C. Wanjura & Florian Marquardt. Nature Physics (2024) DOI: https://doi.org/10.1038/s41567-024-02534-9 Published: 09 July 2024

This paper is open access.

Highlights from Simon Fraser University’s (SFU) July 2024 Metacreation Lab newsletter

There’s some exciting news for people interested in Ars Electronica (see more below the newsletter excerpt). And for people who’d like to explore some of the same work from the Metacreation Lab in a locale that may be closer to their homes, there’s an exhibition on Salt Spring Island, British Columbia. Here are details from SFU’s Metacreation Lab newsletter, which hit my mailbox on July 22, 2024,

Metacreation Lab at Ars Electronica 2024

We are delighted to announce that the Metacreation Lab for Creative AI will be part of the prestigious Ars Electronica Festival. This year’s festival, titled “HOPE – who will turn the tide,” will take place in Linz [Austria] from September 4 to 8 [2024].

Representing the School of Interactive Arts and Technology (SIAT), we will showcase four innovative artworks. “Longing + Forgetting” by Philippe Pasquier, Matt Gingold, and Thecla Schiphorst explores pathfinding algorithms as metaphors for our personal and collective searches for solutions. “Autolume Mzton” by Jonas Kraasch and Philippe Pasquier examines the concept of birth through audio-reactive generative visuals. “Dreamscape” [emphasis mine] by Erica Lapadat-Janzen and Philippe Pasquier utilizes the Autolume system to train AI models with the artist’s own works, creating unique stills and video loops. “Ensemble” by Arshia Sobhan and Philippe Pasquier melds traditional Persian calligraphy with AI to create dynamic calligraphic forms.

We look forward to seeing you there!

More Information

MMM4Live Official Release; Generative MIDI in Ableton Live

We are ecstatic to release our Ableton plugin for computer-assisted music composition! Meet MMM4Live, our flexible and generic multi-track music AI generator. MMM4Live embeds our state-of-the-art music transformer model that allows generating fitting original musical patterns in any style! When generating, the AI model considers the request parameters, your instrument choice, and the existing musical MIDI content within your Ableton Live project to deliver relevant material. With this infilling approach, your music is the prompt!

We, at the Metacreation Lab for Creative AI at Simon Fraser University (SFU), are excited about democratizing and pushing the boundaries of musical creativity through academic research and serving diverse communities of creatives.

For additional inquiries, please do not hesitate to reach out to pasquier@sfu.ca

Try it out!

“Dreamscape” at the Provocation Exhibition

We are excited to announce that “Dreamscape,” a collaboration between Erica Lapadat-Janzen and Philippe Pasquier, will be exhibited at the Provocation exhibition from July 6th to August 10th, 2024.

In response to AI-generated art based on big data, the Metacreation Lab developed Autolume, a no-coding environment that allows artists to train AI models using their chosen works. For “Dreamscape,” the Metacreation Lab collaborated with Vancouver-based visual artist Erica Lapadat-Janzen. Using Autolume, they hand-picked and treated 12 stills and 9 video loops, capturing her unique aesthetic. Lapadat-Janzen’s media artworks, performances, and installations draw viewers into a world of equilibrium, where moments punctuate daily events to clarify our existence and find poetic meaning.

Provocation exhibition brings artists and audiences together to celebrate and provoke conversations about contemporary living. The exhibition is at 215 Baker Rd, Salt Spring Island, BC, and is open to the public (free admission) every Saturday and Sunday from 12-4 pm.

More Information

Ars Electronica

It is both an institute and a festival, from the Ars Electronica Wikipedia entry, Note: Links have been removed,

Ars Electronica Linz GmbH is an Austrian cultural, educational and scientific institute active in the field of new media art, founded in Linz in 1979. It is based at the Ars Electronica Center (AEC), which houses the Museum of the Future, in the city of Linz. Ars Electronica’s activities focus on the interlinkages between art, technology and society. It runs an annual festival, and manages a multidisciplinary media arts R&D facility known as the Futurelab. It also confers the Prix Ars Electronica awards.

Ars Electronica began with its first festival in September 1979. …

The 2024 festival, as noted earlier, has the theme of ‘Hope’, from the Ars Electronica 2024 festival theme page,

HOPE

Optimism is not the belief that things will somehow work out, but rather the confidence in our ability to influence and bring about improvement. And that perhaps best describes the essence of the principle of hope, not as a passive position, but as an active force that motivates us to keep going despite adversity.

But don’t worry, this year’s festival will not be an examination of the psychological or even evolutionary foundations of the principle of hope, nor will it be a reflection on our unsteady fluctuation between hope and pessimism.

“HOPE” as a festival theme is not a resigned statement that all we can do is hope that someone or something will solve our problems, but rather a manifestation that there are actually many reasons for hope. This is expressed in the subtitle “who will turn the tide”, which does not claim to know how the turnaround can be achieved, but rather focuses on who the driving forces behind this turnabout are.

The festival’s goal is to spotlight as many people as possible who have already set out on their journey and whose activities—no matter how big or small—are a very concrete reason to have hope.

Believing in the possibility of change is the prerequisite for bringing about positive change, especially when all signs point to the fact that the paths we are currently taking are often dead ends.

But belief alone will not be enough; it requires a combination of belief, vision, cooperation, and a willingness to take concrete action. A willingness that we need, even if we are not yet sure how we will turn the tide, how we will solve the problems, and how we will deal with the effects of the problems that we are (no longer) able to solve.

Earlier, I highlighted ‘Dreamscape,’ which can be seen at Ars Electronica 2024 or at the “Provocation” exhibition on Salt Spring Island. Hopefully, you have an opportunity to visit one of the locations. As for the Metacreation Lab for Creative AI, you can find out more here.

Protecting your data from Apple is very hard

There has been a lot of talk about Tim Cook (Chief Executive Officer of Apple Inc.), his data privacy policies at Apple, and his push for better consumer data privacy. For example, there’s this, from a June 10, 2022 article by Kif Leswing for CNBC,

Key Points

  • Apple CEO Tim Cook said in a letter to Congress that lawmakers should advance privacy legislation that’s currently being debated “as soon as possible.”
  • The bill would give consumers protections and rights dealing with how their data is used online, and would require that companies minimize the amount of data they collect on their users.
  • Apple has long positioned itself as the most privacy-focused company among its tech peers.

Apple has long positioned itself as the most privacy-focused company among its tech peers, and Cook regularly addresses the issue in speeches and meetings. Apple says that its commitment to privacy is a deeply held value by its employees, and often invokes the phrase “privacy is a fundamental human right.”

It’s also strategic for Apple’s hardware business. Legislation that regulates how much data companies collect or how it’s processed plays into Apple’s current privacy features, and could even give Apple a head start against competitors that would need to rebuild their systems to comply with the law.

More recently with rising concerns regarding artificial intelligence (AI), Apple has rushed to assure customers that their data is still private, from a May 10, 2024 article by Kyle Orland for Ars Technica, Note: Links have been removed,

Apple’s AI promise: “Your data is never stored or made accessible to Apple”

And publicly reviewable server code means experts can “verify this privacy promise.”

With most large language models being run on remote, cloud-based server farms, some users have been reluctant to share personally identifiable and/or private data with AI companies. In its WWDC [Apple’s World Wide Developers Conference] keynote today, Apple stressed that the new “Apple Intelligence” system it’s integrating into its products will use a new “Private Cloud Compute” to ensure any data processed on its cloud servers is protected in a transparent and verifiable way.

“You should not have to hand over all the details of your life to be warehoused and analyzed in someone’s AI cloud,” Apple Senior VP of Software Engineering Craig Federighi said.

Part of what Apple calls “a brand new standard for privacy and AI” is achieved through on-device processing. Federighi said “many” of Apple’s generative AI models can run entirely on a device powered by an A17+ or M-series chip, eliminating the risk of sending your personal data to a remote server.

When a bigger, cloud-based model is needed to fulfill a generative AI request, though, Federighi stressed that it will “run on servers we’ve created especially using Apple silicon,” which allows for the use of security tools built into the Swift programming language. The Apple Intelligence system “sends only the data that’s relevant to completing your task” to those servers, Federighi said, rather than giving blanket access to the entirety of the contextual information the device has access to.

But you don’t just have to trust Apple on this score, Federighi claimed. That’s because the server code used by Private Cloud Compute will be publicly accessible, meaning that “independent experts can inspect the code that runs on these servers to verify this privacy promise.” The entire system has been set up cryptographically so that Apple devices “will refuse to talk to a server unless its software has been publicly logged for inspection.”

While the keynote speech was light on details [emphasis mine] for the moment, the focus on privacy during the presentation shows that Apple is at least prioritizing security concerns in its messaging [emphasis mine] as it wades into the generative AI space for the first time. We’ll see what security experts have to say [emphasis mine] when these servers and their code are made publicly available in the near future.

Orland’s caution/suspicion would seem warranted in light of some recent research from scientists in Finland. From an April 3, 2024 Aalto University press release (also on EurekAlert), Note: A link has been removed,

‘Privacy. That’s Apple,’ the slogan proclaims. New research from Aalto University begs to differ.

Study after study has shown how voluntary third-party apps erode people’s privacy. Now, for the first time, researchers at Aalto University have investigated the privacy settings of Apple’s default apps; the ones that are pretty much unavoidable on a new device, be it a computer, tablet or mobile phone. The researchers will present their findings in mid-May at the prestigious CHI conference [ACM CHI Conference on Human Factors in Computing Systems, May 11, 2024 – May 16, 2024 in Honolulu, Hawaii], and the peer-reviewed research paper is already available online.

‘We focused on apps that are an integral part of the platform and ecosystem. These apps are glued to the platform, and getting rid of them is virtually impossible,’ says Associate Professor Janne Lindqvist, head of the computer science department at Aalto.

The researchers studied eight apps: Safari, Siri, Family Sharing, iMessage, FaceTime, Location Services, Find My and Touch ID. They collected all publicly available privacy-related information on these apps, from technical documentation to privacy policies and user manuals.

The fragility of the privacy protections surprised even the researchers. [emphasis mine]

‘Due to the way the user interface is designed, users don’t know what is going on. For example, the user is given the option to enable or not enable Siri, Apple’s virtual assistant. But enabling only refers to whether you use Siri’s voice control. Siri collects data in the background from other apps you use, regardless of your choice, unless you understand how to go into the settings and specifically change that,’ says Lindqvist.

Participants weren’t able to stop data sharing in any of the apps

In practice, protecting privacy on an Apple device requires persistent and expert clicking on each app individually. Apple’s help falls short.

‘The online instructions for restricting data access are very complex and confusing, and the steps required are scattered in different places. There’s no clear direction on whether to go to the app settings, the central settings – or even both,’ says Amel Bourdoucen, a doctoral researcher at Aalto.

In addition, the instructions didn’t list all the necessary steps or explain how collected data is processed.

The researchers also demonstrated these problems experimentally. They interviewed users and asked them to try changing the settings.

‘It turned out that the participants weren’t able to prevent any of the apps from sharing their data with other applications or the service provider,’ Bourdoucen says.

Finding and adjusting privacy settings also took a lot of time. ‘When making adjustments, users don’t get feedback on whether they’ve succeeded. They then get lost along the way, go backwards in the process and scroll randomly, not knowing if they’ve done enough,’ Bourdoucen says.

In the end, Bourdoucen explains, the participants were able to take one or two steps in the right direction, but none succeeded in following the whole procedure to protect their privacy.

Running out of options

If preventing data sharing is difficult, what does Apple do with all that data? [emphasis mine]

It’s not possible to be sure based on public documents, but Lindqvist says it’s possible to conclude that the data will be used to train the artificial intelligence system behind Siri and to provide personalised user experiences, among other things. [emphasis mine]

Many users are used to seamless multi-device interaction, which makes it difficult to move back to a time of more limited data sharing. However, Apple could inform users much more clearly than it does today, says Lindqvist. The study lists a number of detailed suggestions to clarify privacy settings and improve guidelines.

For individual apps, Lindqvist says that the problem can be solved to some extent by opting for a third-party service. For example, some participants in the study had switched from Safari to Firefox.

Lindqvist can’t comment directly on how Google’s Android works in similar respects [emphasis mine], as no one has yet done a similar mapping of its apps. But past research on third-party apps does not suggest that Google is any more privacy-conscious than Apple [emphasis mine].

So what can be learned from all this – are users ultimately facing an almost impossible task?

‘Unfortunately, that’s one lesson,’ says Lindqvist.

I have found two copies of the researchers’ paper. There’s a PDF version on Aalto University’s website that bears this caution,

This is an electronic reprint of the original article.
This reprint may differ from the original in pagination and typographic detail.

Here’s a link to and a citation for the official version of the paper,

Privacy of Default Apps in Apple’s Mobile Ecosystem by Amel Bourdoucen and Janne Lindqvist. CHI ’24: Proceedings of the CHI Conference on Human Factors in Computing Systems, May 2024, Article No.: 786, Pages 1–32 DOI: https://doi.org/10.1145/3613904.3642831 Published: 11 May 2024

This paper is open access.

Highlights from Simon Fraser University’s (SFU) June 2024 Metacreation Lab newsletter

The latest newsletter from the Metacreation Lab for Creative AI (at Simon Fraser University [SFU]) features a ‘first’. From the June 2024 Metacreation Lab newsletter (received via email),

“Longing + Forgetting” at the 2024 Currents New Media Festival in Santa Fe

We are thrilled to announce that Longing + Forgetting has been invited to the esteemed Currents New Media Festival in Santa Fe, New Mexico. Longing + Forgetting is a generative audio-video installation that explores the relationship between humans and machines. This media art project, created by Canadian artists Philippe Pasquier and Thecla Schiphorst alongside Australian artist Matt Gingold, has garnered international acclaim since its inception. Initially presented in Canada in 2013, the piece has journeyed through multiple international festivals, captivating audiences with its exploration of human expression through movement.

Philippe Pasquier will be on-site for the festival, overseeing the site-specific installation at El Museo Cultural de Santa Fe. This marks the North American premiere of the redeveloped version of “Longing + Forgetting,” featuring a new soundtrack by Pasquier based solely on the close-mic recording of dancers.

Currents New Media Festival runs June 14–23, 2024 and brings together the work of established and emerging new media artists from around the world across various disciplines, with an expected 9,000 visitors during the festival’s run.

More Information

Discover “Longing + Forgetting” at Bunjil Place in Melbourne

We are excited to announce that “Longing + Forgetting” is being featured at Bunjil Place in Melbourne, Australia. As part of the Art After Dark Program curated by Angela Barnett, this outdoor screening will run from June 1 to June 28, illuminating the night from 5 pm to 7 pm.

More Information

Presenting “Unveiling New Artistic Dimensions in Calligraphic Arabic Script with GANs” at SIGGRAPH 2024

We are pleased to share that our paper, “Unveiling New Artistic Dimensions in Calligraphic Arabic Script with Generative Adversarial Networks,” will be presented at SIGGRAPH 2024, the premier conference on computer graphics and interactive techniques. The event will take place from July 28 to August 1, 2024, in Denver, Colorado.

This paper delves into the artistic potential of Generative Adversarial Networks (GANs) to create and innovate within the realm of calligraphic Arabic script, particularly the nastaliq style. By developing two custom datasets and leveraging the StyleGAN2-ada architecture, we have generated high-quality, stylistically coherent calligraphic samples. Our work bridges the gap between traditional calligraphy and modern technology and offers a new mode of creative expression for this artform.

SIGGRAPH’24

For those unfamiliar with the acronym, SIGGRAPH stands for Special Interest Group on Computer Graphics and Interactive Techniques. SIGGRAPH is huge, and it’s a special interest group (SIG) of the ACM (Association for Computing Machinery).

If memory serves, this is the first time I’ve seen the Metacreation Lab make a request for volunteers, from the June 2024 Metacreation Lab newsletter,

Are you interested in music-making and AI technology?

The Metacreation Lab for Creative AI at Simon Fraser University (SFU), is conducting a research study in partnership with Steinberg Media Technologies GmbH. We are testing and evaluating MMM-Cubase v2, a creative AI system for assisting composing music. The system is based on our best music transformer, the multitrack music machine (MMM), which can generate, re-generate or complete new musical content based on existing content.

There is no prerequisite for this study beyond a basic knowledge of DAW and MIDI. So everyone is welcome even if you do not consider yourself a composer, but are interested in trying the system. The entire study should take you around 3 hours, and you must be 19+ years old. Basic interest and familiarity with digital music composition will help, but no experience with making music is required.

We seek to better evaluate the potential for adoption of such systems for novice/beginner as well as for seasoned composers. More specifically, you will be asked to install and use the system to compose a short 4-track musical composition and to fill out a survey questionnaire at the end.

Participation in this study is rewarded with one free Steinberg software license of your choice among Cubase Element, Dorico Element or Wavelab Element.

For any question or further inquiry, please contact researcher Renaud Bougueng Tchemeube directly at rbouguen@sfu.ca.

Enroll in the Study

You can find the Metacreation Lab for Creative AI website here.

Graphene-like materials for first smart contact lenses with AR (augmented reality) vision, health monitoring, & content surfing?

A March 6, 2024 XPANCEO news release on EurekAlert (also posted March 11, 2024 on the Graphene Council blog) and distributed by Mindset Consulting announced smart contact lenses devised with graphene-like materials,

XPANCEO, a deep tech company developing the first smart contact lenses with XR vision, health monitoring, and content surfing features, in collaboration with the Nobel laureate Konstantin S. Novoselov (National University of Singapore, University of Manchester) and professor Luis Martin-Moreno (Instituto de Nanociencia y Materiales de Aragon), has announced in Nature Communications a groundbreaking discovery of new properties of rhenium diselenide and rhenium disulfide, enabling novel mode of light-matter interaction with huge potential for integrated photonics, healthcare, and AR. Rhenium disulfide and rhenium diselenide are layered materials belonging to the family of graphene-like materials. Absorption and refraction in these materials have different principal directions, implying six degrees of freedom instead of a maximum of three in classical materials. As a result, rhenium disulfide and rhenium diselenide by themselves allow controlling the light propagation direction without any technological steps required for traditional materials like silicon and titanium dioxide.

The origin of such surprising light-matter interaction in ReS2 and ReSe2 is the specific symmetry breaking observed in these materials. Symmetry plays a huge role in nature, human life, and material science. For example, almost all living things are built symmetrically. Therefore, in ancient times symmetry was also called harmony, as it was associated with beauty. Physical laws are also closely related to symmetry, such as the laws of conservation of energy and momentum. Violation of symmetry leads to the appearance of new physical effects and radical changes in the properties of materials. In particular, the water-ice phase transition is a consequence of a decrease in the degree of symmetry. In the case of ReS2 and ReSe2, the crystal lattice has the lowest possible degree of symmetry, which leads to the rotation of optical axes – directions of symmetry of optical properties of the material, which was previously observed only for organic materials. As a result, these materials make it possible to control the direction of light by changing the wavelength, which opens a unique way for light manipulation in next-generation devices and applications.

“The discovery of unique properties in anisotropic materials is revolutionizing the fields of nanophotonics and optoelectronics, presenting exciting possibilities. These materials serve as a versatile platform for the advancement of optical devices, such as wavelength-switchable metamaterials, metasurfaces, and waveguides. Among the promising applications is the development of highly efficient biochemical sensors. These sensors have the potential to outperform existing analogs in terms of both sensitivity and cost efficiency. For example, they are anticipated to significantly reduce the expenses associated with hospital blood testing equipment, which is currently quite costly, potentially by several orders of magnitude. This will also allow the detection of dangerous diseases and viruses, such as cancer or COVID, at earlier stages,” says Dr. Valentyn S. Volkov, co-founder and scientific partner at XPANCEO, a scientist with an h-Index of 38 and over 8000 citations in leading international publications.

Beyond the healthcare industry, these novel properties of graphene-like materials can find applications in artificial intelligence and machine learning, facilitating the development of photonic circuits to create a fast and powerful computer suitable for machine learning tasks. A computer based on photonic circuits is a superior solution, transmitting more information per unit of time, and unlike electric currents, photons (light beams) flow across one another without interacting. Furthermore, the new material properties can be utilized in producing smart optics, such as contact lenses or glasses, specifically for advancing AR [augmented reality] features. Leveraging these properties will enhance image coloration and adapt images for individuals with impaired color perception, enabling them to see the full spectrum of colors.

Here’s a link to and a citation for the paper,

Wandering principal optical axes in van der Waals triclinic materials by Georgy A. Ermolaev, Kirill V. Voronin, Adilet N. Toksumakov, Dmitriy V. Grudinin, Ilia M. Fradkin, Arslan Mazitov, Aleksandr S. Slavich, Mikhail K. Tatmyshevskiy, Dmitry I. Yakubovsky, Valentin R. Solovey, Roman V. Kirtaev, Sergey M. Novikov, Elena S. Zhukova, Ivan Kruglov, Andrey A. Vyshnevyy, Denis G. Baranov, Davit A. Ghazaryan, Aleksey V. Arsenin, Luis Martin-Moreno, Valentyn S. Volkov & Kostya S. Novoselov. Nature Communications volume 15, Article number: 1552 (2024) DOI: https://doi.org/10.1038/s41467-024-45266-3 Published: 06 March 2024

This paper is open access.

Six months after the first one at Bletchley Park, the 2nd AI Safety Summit (May 21-22, 2024) convenes in Korea

This May 20, 2024 University of Oxford press release (also on EurekAlert) was under embargo until almost noon on May 20, 2024, which is a bit unusual in my experience (Note: I have more about the 1st summit and the interest in AI safety at the end of this posting),

Leading AI scientists are calling for stronger action on AI risks from world leaders, warning that progress has been insufficient since the first AI Safety Summit in Bletchley Park six months ago. 

Then, the world’s leaders pledged to govern AI responsibly. However, as the second AI Safety Summit in Seoul (21-22 May [2024]) approaches, twenty-five of the world’s leading AI scientists say not enough is actually being done to protect us from the technology’s risks. In an expert consensus paper published today in Science, they outline urgent policy priorities that global leaders should adopt to counteract the threats from AI technologies. 

Professor Philip Torr, Department of Engineering Science, University of Oxford, a co-author on the paper, says: “The world agreed during the last AI summit that we needed action, but now it is time to go from vague proposals to concrete commitments. This paper provides many important recommendations for what companies and governments should commit to do.”

World’s response not on track in face of potentially rapid AI progress

According to the paper’s authors, it is imperative that world leaders take seriously the possibility that highly powerful generalist AI systems—outperforming human abilities across many critical domains—will be developed within the current decade or the next. They say that although governments worldwide have been discussing frontier AI and made some attempt at introducing initial guidelines, this is simply incommensurate with the possibility of rapid, transformative progress expected by many experts. 

Current research into AI safety is seriously lacking, with only an estimated 1-3% of AI publications concerning safety. Additionally, we have neither the mechanisms nor the institutions in place to prevent misuse and recklessness, including regarding the use of autonomous systems capable of independently taking actions and pursuing goals.

World-leading AI experts issue call to action

In light of this, an international community of AI pioneers has issued an urgent call to action. The co-authors include Geoffrey Hinton, Andrew Yao, Dawn Song, the late Daniel Kahneman; in total 25 of the world’s leading academic experts in AI and its governance. The authors hail from the US, China, EU, UK, and other AI powers, and include Turing award winners, Nobel laureates, and authors of standard AI textbooks.

This article is the first time that such a large and international group of experts have agreed on priorities for global policy makers regarding the risks from advanced AI systems.

Urgent priorities for AI governance

The authors recommend that governments:

  • establish fast-acting, expert institutions for AI oversight and provide these with far greater funding than they are due to receive under almost any current policy plan. As a comparison, the US AI Safety Institute currently has an annual budget of $10 million, while the US Food and Drug Administration (FDA) has a budget of $6.7 billion.
  • mandate much more rigorous risk assessments with enforceable consequences, rather than relying on voluntary or underspecified model evaluations.
  • require AI companies to prioritise safety, and to demonstrate their systems cannot cause harm. This includes using “safety cases” (used for other safety-critical technologies such as aviation) which shifts the burden for demonstrating safety to AI developers.
  • implement mitigation standards commensurate to the risk-levels posed by AI systems. An urgent priority is to set in place policies that automatically trigger when AI hits certain capability milestones. If AI advances rapidly, strict requirements automatically take effect, but if progress slows, the requirements relax accordingly.

According to the authors, for exceptionally capable future AI systems, governments must be prepared to take the lead in regulation. This includes licensing the development of these systems, restricting their autonomy in key societal roles, halting their development and deployment in response to worrying capabilities, mandating access controls, and requiring information security measures robust to state-level hackers, until adequate protections are ready.

AI impacts could be catastrophic

AI is already making rapid progress in critical domains such as hacking, social manipulation, and strategic planning, and may soon pose unprecedented control challenges. To advance undesirable goals, AI systems could gain human trust, acquire resources, and influence key decision-makers. To avoid human intervention, they could be capable of copying their algorithms across global server networks. Large-scale cybercrime, social manipulation, and other harms could escalate rapidly. In open conflict, AI systems could autonomously deploy a variety of weapons, including biological ones. Consequently, there is a very real chance that unchecked AI advancement could culminate in a large-scale loss of life and the biosphere, and the marginalization or extinction of humanity.

Stuart Russell OBE [Order of the British Empire], Professor of Computer Science at the University of California at Berkeley and an author of the world’s standard textbook on AI, says: “This is a consensus paper by leading experts, and it calls for strict regulation by governments, not voluntary codes of conduct written by industry. It’s time to get serious about advanced AI systems. These are not toys. Increasing their capabilities before we understand how to make them safe is utterly reckless. Companies will complain that it’s too hard to satisfy regulations—that “regulation stifles innovation.” That’s ridiculous. There are more regulations on sandwich shops than there are on AI companies.”

Notable co-authors:

  • The world’s most-cited computer scientist (Prof. Hinton), and the most-cited scholar in AI security and privacy (Prof. Dawn Song)
  • China’s first Turing Award winner (Andrew Yao).
  • The authors of the standard textbook on artificial intelligence (Prof. Stuart Russell) and machine learning theory (Prof. Shai Shalev-Shwartz)
  • One of the world’s most influential public intellectuals (Prof. Yuval Noah Harari)
  • A Nobel Laureate in economics, the world’s most-cited economist (Prof. Daniel Kahneman)
  • Department-leading AI legal scholars and social scientists (Lan Xue, Qiqi Gao, and Gillian Hadfield).
  • Some of the world’s most renowned AI researchers from subfields such as reinforcement learning (Pieter Abbeel, Jeff Clune, Anca Dragan), AI security and privacy (Dawn Song), AI vision (Trevor Darrell, Phil Torr, Ya-Qin Zhang), automated machine learning (Frank Hutter), and several researchers in AI safety.

Additional quotes from the authors:

Philip Torr, Professor in AI, University of Oxford:

  • I believe if we tread carefully the benefits of AI will outweigh the downsides, but for me one of the biggest immediate risks from AI is that we develop the ability to rapidly process data and control society, by government and industry. We could risk slipping into some Orwellian future with some form of totalitarian state having complete control.

Dawn Song: Professor in AI at UC Berkeley, most-cited researcher in AI security and privacy:

  •  “Explosive AI advancement is the biggest opportunity and at the same time the biggest risk for mankind. It is important to unite and reorient towards advancing AI responsibly, with dedicated resources and priority to ensure that the development of AI safety and risk mitigation capabilities can keep up with the pace of the development of AI capabilities and avoid any catastrophe”

Yuval Noah Harari, Professor of history at Hebrew University of Jerusalem, best-selling author of ‘Sapiens’ and ‘Homo Deus’, world leading public intellectual:

  • “In developing AI, humanity is creating something more powerful than itself, that may escape our control and endanger the survival of our species. Instead of uniting against this shared threat, we humans are fighting among ourselves. Humankind seems hell-bent on self-destruction. We pride ourselves on being the smartest animals on the planet. It seems then that evolution is switching from survival of the fittest, to extinction of the smartest.”

Jeff Clune, Professor in AI at University of British Columbia and one of the leading researchers in reinforcement learning:

  • “Technologies like spaceflight, nuclear weapons and the Internet moved from science fiction to reality in a matter of years. AI is no different. We have to prepare now for risks that may seem like science fiction – like AI systems hacking into essential networks and infrastructure, AI political manipulation at scale, AI robot soldiers and fully autonomous killer drones, and even AIs attempting to outsmart us and evade our efforts to turn them off.”
  • “The risks we describe are not necessarily long-term risks. AI is progressing extremely rapidly. Even just with current trends, it is difficult to predict how capable it will be in 2-3 years. But what very few realize is that AI is already dramatically speeding up AI development. What happens if there is a breakthrough for how to create a rapidly self-improving AI system? We are now in an era where that could happen any month. Moreover, the odds of that being possible go up each month as AI improves and as the resources we invest in improving AI continue to exponentially increase.”

Gillian Hadfield, CIFAR AI Chair and Director of the Schwartz Reisman Institute for Technology and Society at the University of Toronto:

  • “AI labs need to walk the walk when it comes to safety. But they’re spending far less on safety than they spend on creating more capable AI systems. Spending one-third on ensuring safety and ethical use should be the minimum.”

  • “This technology is powerful, and we’ve seen it is becoming more powerful, fast. What is powerful is dangerous, unless it is controlled. That is why we call on major tech companies and public funders to allocate at least one-third of their AI R&D budget to safety and ethical use, comparable to their funding for AI capabilities.”  

Sheila McIlrath, Professor in AI, University of Toronto, Vector Institute:

  • AI is software. Its reach is global and its governance needs to be as well.
  • Just as we’ve done with nuclear power, aviation, and with biological and nuclear weaponry, countries must establish agreements that restrict development and use of AI, and that enforce information sharing to monitor compliance. Countries must unite for the greater good of humanity.
  • Now is the time to act, before AI is integrated into our critical infrastructure. We need to protect and preserve the institutions that serve as the foundation of modern society.

Frank Hutter, Professor in AI at the University of Freiburg, Head of the ELLIS Unit Freiburg, 3x ERC grantee:

  • To be clear: we need more research on AI, not less. But we need to focus our efforts on making this technology safe. For industry, the right type of regulation will provide economic incentives to shift resources from making the most capable systems yet more powerful to making them safer. For academia, we need more public funding for trustworthy AI and maintain a low barrier to entry for research on less capable open-source AI systems. This is the most important research challenge of our time, and the right mechanism design will focus the community at large to work towards the right breakthroughs.

Here’s a link to and a citation for the paper,

Managing extreme AI risks amid rapid progress: Preparation requires technical research and development, as well as adaptive, proactive governance by Yoshua Bengio, Geoffrey Hinton, Andrew Yao, Dawn Song, Pieter Abbeel, Trevor Darrell, Yuval Noah Harari, Ya-Qin Zhang, Lan Xue, Shai Shalev-Shwartz, Gillian Hadfield, Jeff Clune, Tegan Maharaj, Frank Hutter, Atılım Güneş Baydin, Sheila McIlraith, Qiqi Gao, Ashwin Acharya, David Krueger, Anca Dragan, Philip Torr, Stuart Russell, Daniel Kahneman, Jan Brauner, and Sören Mindermann. Science, 20 May 2024 (First Release). DOI: 10.1126/science.adn0117

This paper appears to be open access.

For anyone who’s curious about the buildup to these safety summits, I have more in my October 18, 2023 “AI safety talks at Bletchley Park in November 2023” posting, which features excerpts from a number of articles on AI safety. There’s also my November 2, 2023, “UK AI Summit (November 1 – 2, 2023) at Bletchley Park finishes” posting, which offers excerpts from articles critiquing the AI safety summit.

Hardware policies best way to manage AI safety?

Regulation of artificial intelligence (AI) has become very topical in the last couple of years. There was an AI safety summit in November 2023 at Bletchley Park in the UK (see my November 2, 2023 posting for more about that international meeting).

A software-focused approach?

This year (2024) has seen a rise in legislative activity, both enacted and proposed. I have some articles on a few of these activities. China was the first country to enact AI regulations of any kind, according to Matt Sheehan’s February 27, 2024 paper for the Carnegie Endowment for International Peace,

In 2021 and 2022, China became the first country to implement detailed, binding regulations on some of the most common applications of artificial intelligence (AI). These rules formed the foundation of China’s emerging AI governance regime, an evolving policy architecture that will affect everything from frontier AI research to the functioning of the world’s second-largest economy, from large language models in Africa to autonomous vehicles in Europe.

The Chinese Communist Party (CCP) and the Chinese government started that process with the 2021 rules on recommendation algorithms, an omnipresent use of the technology that is often overlooked in international AI governance discourse. Those rules imposed new obligations on companies to intervene in content recommendations, granted new rights to users being recommended content, and offered protections to gig workers subject to algorithmic scheduling. The Chinese party-state quickly followed up with a new regulation on “deep synthesis,” the use of AI to generate synthetic media such as deepfakes. Those rules required AI providers to watermark AI-generated content and ensure that content does not violate people’s “likeness rights” or harm the “nation’s image.” Together, these two regulations also created and amended China’s algorithm registry, a regulatory tool that would evolve into a cornerstone of the country’s AI governance regime.

The UK has adopted a more generalized approach focused on encouraging innovation, according to Valeria Gallo and Suchitra Nair’s February 21, 2024 article for Deloitte (a British professional services firm, also considered one of the big four accounting firms worldwide),

At a glance

The UK Government has adopted a cross-sector and outcome-based framework for regulating AI, underpinned by five core principles. These are safety, security and robustness, appropriate transparency and explainability, fairness, accountability and governance, and contestability and redress.

Regulators will implement the framework in their sectors/domains by applying existing laws and issuing supplementary regulatory guidance. Selected regulators will publish their AI annual strategic plans by 30th April [2024], providing businesses with much-needed direction.

Voluntary safety and transparency measures for developers of highly capable AI models and systems will also supplement the framework and the activities of individual regulators.

The framework will not be codified into law for now, but the Government anticipates the need for targeted legislative interventions in the future. These interventions will address gaps in the current regulatory framework, particularly regarding the risks posed by complex General Purpose AI and the key players involved in its development.

Organisations must prepare for increased AI regulatory activity over the next year, including guidelines, information gathering, and enforcement. International firms will inevitably have to navigate regulatory divergence.

While most of the focus appears to be on software (e.g., General Purpose AI), the UK framework does not preclude hardware.

The European Union (EU) is preparing to pass its own AI regulation act through the European Parliament in 2024 according to a December 19, 2023 “EU AI Act: first regulation on artificial intelligence” article update, Note: Links have been removed,

As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology. AI can create many benefits, such as better healthcare; safer and cleaner transport; more efficient manufacturing; and cheaper and more sustainable energy.

In April 2021, the European Commission proposed the first EU regulatory framework for AI. It says that AI systems that can be used in different applications are analysed and classified according to the risk they pose to users. The different risk levels will mean more or less regulation.

The agreed text is expected to be finally adopted in April 2024. It will be fully applicable 24 months after entry into force, but some parts will be applicable sooner:

*The ban of AI systems posing unacceptable risks will apply six months after the entry into force

*Codes of practice will apply nine months after entry into force

*Rules on general-purpose AI systems that need to comply with transparency requirements will apply 12 months after the entry into force

High-risk systems will have more time to comply with the requirements as the obligations concerning them will become applicable 36 months after the entry into force.
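
The phased timeline quoted above can be summarized compactly. Here’s a minimal sketch in Python mapping each category of obligation to the number of months after entry into force at which it applies; the figures come from the excerpt, but the labels and the helper function are my own, purely for illustration.

```python
# Minimal sketch: phased application of EU AI Act obligations, expressed as
# months after entry into force. Figures are from the article excerpted above;
# the labels and function are illustrative assumptions, not official terms.

MONTHS_UNTIL_APPLICABLE = {
    "ban on unacceptable-risk AI systems": 6,
    "codes of practice": 9,
    "transparency rules for general-purpose AI": 12,
    "full applicability of the act": 24,
    "obligations for high-risk systems": 36,
}

def months_remaining(obligation: str, months_since_entry_into_force: int) -> int:
    """How many months remain before an obligation applies (0 if it already does)."""
    return max(0, MONTHS_UNTIL_APPLICABLE[obligation] - months_since_entry_into_force)

for obligation, months in sorted(MONTHS_UNTIL_APPLICABLE.items(), key=lambda kv: kv[1]):
    print(f"{months:>2} months after entry into force: {obligation}")
```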

This EU initiative, like the UK framework, seems largely focused on AI software, and according to the Wikipedia entry “Regulation of artificial intelligence,”

… The AI Act is expected to come into effect in late 2025 or early 2026.[109]

I do have a few postings about Canadian regulatory efforts, which also seem to be focused on software but don’t preclude hardware. While the January 20, 2024 posting is titled “Canada’s voluntary code of conduct relating to advanced generative AI (artificial intelligence) systems,” it also includes information about legislative efforts, although my May 1, 2023 posting, “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27),” offers more comprehensive information about Canada’s legislative progress, or lack thereof.

The US always has to be considered in these matters, and I have a November 2023 ‘briefing’ by Müge Fazlioglu on the International Association of Privacy Professionals (IAPP) website, where she provides a quick overview of the international scene before diving deeper into US AI governance policy through the Barack Obama, Donald Trump, and Joe Biden administrations. There’s also this January 29, 2024 US White House “Fact Sheet: Biden-Harris Administration Announces Key AI Actions Following President Biden’s Landmark Executive Order.”

What about AI and hardware?

A February 15, 2024 news item on ScienceDaily suggests that regulating hardware may be the most effective way of regulating AI,

Chips and datacentres — the ‘compute’ power driving the AI revolution — may be the most effective targets for risk-reducing AI policies as they have to be physically possessed, according to a new report.

A global registry tracking the flow of chips destined for AI supercomputers is one of the policy options highlighted by a major new report calling for regulation of “compute” — the hardware that underpins all AI — to help prevent artificial intelligence misuse and disasters.

Other technical proposals floated by the report include “compute caps” — built-in limits to the number of chips each AI chip can connect with — and distributing a “start switch” for AI training across multiple parties to allow for a digital veto of risky AI before it feeds on data.

The experts point out that powerful computing chips required to drive generative AI models are constructed via highly concentrated supply chains, dominated by just a handful of companies — making the hardware itself a strong intervention point for risk-reducing AI policies.

The report, published 14 February [2024], is authored by nineteen experts and co-led by three University of Cambridge institutes — the Leverhulme Centre for the Future of Intelligence (LCFI), the Centre for the Study of Existential Risk (CSER) and the Bennett Institute for Public Policy — along with OpenAI and the Centre for the Governance of AI.
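
To make the “compute caps” proposal in the excerpt above a little more concrete, here’s a minimal sketch in Python of the kind of check such a cap implies: verifying that no chip in a proposed cluster is networked to more peers than a built-in limit allows. The cap value, chip names, and function are illustrative assumptions of mine, not anything specified in the report.

```python
# Minimal sketch of a "compute cap" check: flag any chip in a proposed cluster
# topology that is connected to more peers than a built-in limit allows.
# The cap value and data structures are illustrative assumptions only.
from collections import defaultdict

MAX_PEERS_PER_CHIP = 256  # hypothetical hardware-enforced connection limit

def chips_over_cap(links: list[tuple[str, str]]) -> list[str]:
    """Return the chips whose peer count exceeds the hypothetical cap."""
    peers = defaultdict(set)
    for a, b in links:
        peers[a].add(b)
        peers[b].add(a)
    return [chip for chip, connected in peers.items() if len(connected) > MAX_PEERS_PER_CHIP]

# Example: a tiny three-chip cluster, comfortably under the cap.
print(chips_over_cap([("chip-0", "chip-1"), ("chip-1", "chip-2")]))  # -> []
```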

A February 14, 2024 University of Cambridge press release by Fred Lewsey (also on EurekAlert), which originated the news item, provides more information about the ‘hardware approach to AI regulation’,

“Artificial intelligence has made startling progress in the last decade, much of which has been enabled by the sharp increase in computing power applied to training algorithms,” said Haydn Belfield, a co-lead author of the report from Cambridge’s LCFI. 

“Governments are rightly concerned about the potential consequences of AI, and looking at how to regulate the technology, but data and algorithms are intangible and difficult to control.

“AI supercomputers consist of tens of thousands of networked AI chips hosted in giant data centres often the size of several football fields, consuming dozens of megawatts of power,” said Belfield.

“Computing hardware is visible, quantifiable, and its physical nature means restrictions can be imposed in a way that might soon be nearly impossible with more virtual elements of AI.”

The computing power behind AI has grown exponentially since the “deep learning era” kicked off in earnest, with the amount of “compute” used to train the largest AI models doubling around every six months since 2010. The biggest AI models now use 350 million times more compute than thirteen years ago.
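
As a back-of-the-envelope check on those figures (my own arithmetic, not the report’s): doubling every six months for thirteen years gives 2^26, or roughly 67 million times, while the quoted 350-million-fold increase corresponds to about 28 doublings, i.e., one doubling every five and a half months or so, which is consistent with “around every six months.”

```python
# Back-of-the-envelope check on the compute-growth figures quoted above.
# My own arithmetic, not taken from the report or press release.
import math

years = 13
months = years * 12

# If compute doubled exactly every six months:
growth = 2 ** (months / 6)
print(f"Doubling every 6 months for {years} years: ~{growth:,.0f}x")  # ~67 million

# Implied doubling time for a 350-million-fold increase:
doublings = math.log2(350_000_000)
print(f"350 million-fold growth = {doublings:.1f} doublings, "
      f"one every {months / doublings:.1f} months")  # ~5.5 months
```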

Government efforts across the world over the past year – including the US Executive Order on AI, EU AI Act, China’s Generative AI Regulation, and the UK’s AI Safety Institute – have begun to focus on compute when considering AI governance.

Outside of China, the cloud compute market is dominated by three companies, termed “hyperscalers”: Amazon, Microsoft, and Google. “Monitoring the hardware would greatly help competition authorities in keeping in check the market power of the biggest tech companies, and so opening the space for more innovation and new entrants,” said co-author Prof Diane Coyle from Cambridge’s Bennett Institute. 

The report provides “sketches” of possible directions for compute governance, highlighting the analogy between AI training and uranium enrichment. “International regulation of nuclear supplies focuses on a vital input that has to go through a lengthy, difficult and expensive process,” said Belfield. “A focus on compute would allow AI regulation to do the same.”

Policy ideas are divided into three camps: increasing the global visibility of AI computing; allocating compute resources for the greatest benefit to society; enforcing restrictions on computing power.

For example, a regularly-audited international AI chip registry requiring chip producers, sellers, and resellers to report all transfers would provide precise information on the amount of compute possessed by nations and corporations at any one time.

The report even suggests a unique identifier could be added to each chip to prevent industrial espionage and “chip smuggling”.
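
Here’s a minimal sketch of what one entry in such a chip registry might record: a unique chip identifier plus an auditable log of transfers between producers, sellers, resellers, and end users. The field names and structure are my own illustrative assumptions; the report only sketches the policy option.

```python
# Minimal sketch of a record in a hypothetical AI chip registry.
# Field names and structure are illustrative assumptions only.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Transfer:
    seller: str
    buyer: str
    transfer_date: date

@dataclass
class ChipRecord:
    chip_id: str  # the unique identifier the report suggests adding to each chip
    producer: str
    transfers: list[Transfer] = field(default_factory=list)

    @property
    def current_holder(self) -> str:
        """The most recent buyer, or the producer if the chip has never been transferred."""
        return self.transfers[-1].buyer if self.transfers else self.producer

# Example usage with made-up names
record = ChipRecord(chip_id="chip-0001", producer="ExampleChipCo")
record.transfers.append(Transfer("ExampleChipCo", "ExampleCloudCo", date(2024, 2, 14)))
print(record.current_holder)  # -> ExampleCloudCo
```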

“Governments already track many economic transactions, so it makes sense to increase monitoring of a commodity as rare and powerful as an advanced AI chip,” said Belfield. However, the team point out that such approaches could lead to a black market in untraceable “ghost chips”.

Other suggestions to increase visibility – and accountability – include reporting of large-scale AI training by cloud computing providers, and privacy-preserving “workload monitoring” to help prevent an arms race if massive compute investments are made without enough transparency.  

“Users of compute will engage in a mixture of beneficial, benign and harmful activities, and determined groups will find ways to circumvent restrictions,” said Belfield. “Regulators will need to create checks and balances that thwart malicious or misguided uses of AI computing.”

These might include physical limits on chip-to-chip networking, or cryptographic technology that allows for remote disabling of AI chips in extreme circumstances. One suggested approach would require the consent of multiple parties to unlock AI compute for particularly risky training runs, a mechanism familiar from nuclear weapons.
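
That multi-party “unlock” idea amounts to a k-of-n approval scheme. Here’s a minimal sketch in Python of the approval logic only; in practice this would be enforced cryptographically (for example, with threshold signatures) rather than by a software check, and the parties and threshold here are illustrative assumptions of mine.

```python
# Minimal sketch of k-of-n approval for unlocking compute for a risky training run.
# Real proposals would rely on cryptographic enforcement (e.g., threshold signatures);
# this only shows the approval logic. Parties and threshold are illustrative.

REQUIRED_APPROVALS = 2  # k: how many parties must consent
PARTIES = {"regulator", "cloud_provider", "independent_auditor"}  # the n recognised parties

def training_run_unlocked(approvals: set[str]) -> bool:
    """True if at least k recognised parties have approved the run."""
    return len(approvals & PARTIES) >= REQUIRED_APPROVALS

print(training_run_unlocked({"regulator"}))                         # False: 1 of 3
print(training_run_unlocked({"regulator", "independent_auditor"}))  # True: 2 of 3
```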

AI risk mitigation policies might see compute prioritised for research most likely to benefit society – from green energy to health and education. This could even take the form of major international AI “megaprojects” that tackle global issues by pooling compute resources.

The report’s authors are clear that their policy suggestions are “exploratory” rather than fully fledged proposals and that they all carry potential downsides, from risks of proprietary data leaks to negative economic impacts and the hampering of positive AI development.

They offer five considerations for regulating AI through compute, including the exclusion of small-scale and non-AI computing, regular revisiting of compute thresholds, and a focus on privacy preservation.

Added Belfield: “Trying to govern AI models as they are deployed could prove futile, like chasing shadows. Those seeking to establish AI regulation should look upstream to compute, the source of the power driving the AI revolution. If compute remains ungoverned it poses severe risks to society.”

You can find the report, “Computing Power and the Governance of Artificial Intelligence,” on the website of the University of Cambridge’s Centre for the Study of Existential Risk.

Authors include: Girish Sastry, Lennart Heim, Haydn Belfield, Markus Anderljung, Miles Brundage, Julian Hazell, Cullen O’Keefe, Gillian K. Hadfield, Richard Ngo, Konstantin Pilz, George Gor, Emma Bluemke, Sarah Shoker, Janet Egan, Robert F. Trager, Shahar Avin, Adrian Weller, Yoshua Bengio, and Diane Coyle.

The authors are associated with these companies/agencies: OpenAI, Centre for the Governance of AI (GovAI), Leverhulme Centre for the Future of Intelligence at the Uni. of Cambridge, Oxford Internet Institute, Institute for Law & AI, University of Toronto Vector Institute for AI, Georgetown University, ILINA Program, Harvard Kennedy School (of Government), *AI Governance Institute,* Uni. of Oxford, Centre for the Study of Existential Risk at Uni. of Cambridge, Uni. of Cambridge, Uni. of Montreal / Mila, Bennett Institute for Public Policy at the Uni. of Cambridge.

“The ILINA Program is dedicated to providing an outstanding platform for Africans to learn and work on questions around maximizing wellbeing and responding to global catastrophic risks,” according to the organization’s homepage.

*As for the AI Governance Institute, I believe that should be the Centre for the Governance of AI at Oxford University since the associated academic is Robert F. Trager from the University of Oxford.

As the months (years?) fly by, I guess we’ll find out if this hardware approach gains any traction where AI regulation is concerned.