This is going to be a jam-packed posting with AI experts at a Canadian Science Policy Centre (CSPC) virtual panel, a look back at a ‘testy’ exchange between Yoshua Bengio (one of Canada’s godfathers of AI) and a former diplomat from China, an update on Canada’s Minister of Artificial Intelligence and Digital Innovation, Evan Solomon, and his latest AI push, and a missive from the BC artificial intelligence community.
A Canadian Science Policy Centre AI panel on November 11, 2025
The Canadian Science Policy Centre (CSPC) provides an October 9, 2025 update on an upcoming virtual panel being held on Remembrance Day,
[AI-Driven Misinformation Across Sectors: Addressing a Cross-Societal Challenge]
Upcoming Virtual Panel[s]: November 11 [2025]
Artificial Intelligence is transforming how information is created and trusted, offering immense benefits across sectors like healthcare, education, finance, and public discourse—yet also amplifying risks such as misinformation, deepfakes, and scams that threaten public trust. This panel brings together experts from diverse fields [emphasis mine] to examine the manifestations and impacts of AI-driven misinformation and to discuss policy, regulatory, and technical solutions [emphasis mine]. The conversation will highlight practical measures—from digital literacy and content verification to platform accountability—aimed at strengthening resilience in Canada and globally.
For more information on the panel and to register, click below.
Odd timing for this event. Moving on, I found more information on the CSPC’s webpage for this event. Note: Unfortunately, links to the moderator’s and speakers’ bios could not be copied here,
Canadian Science Policy Centre Email info@sciencepolicy.ca
…
This panel brings together cross-sectoral experts to examine how AI-driven misinformation manifests in their respective domains, its consequences, and how policy, regulation, and technical interventions can help mitigate harm. The discussion will explore practical pathways for action, such as digital literacy, risk audits, content verification technologies, platform responsibility, and regulatory frameworks. Attendees will leave with a nuanced understanding of both the risks and the resilience strategies being explored in Canada and globally.
[Moderator]
Michael Geist
Canada Research Chair in Internet & E-commerce Law, University of Ottawa See Bio
[Panelists]
Dr. Plinio Morita
Associate Professor / Director, Ubiquitous Health Technology Lab, University of Waterloo …
Dr. Nadia Naffi
Université Laval — Associate Professor of Educational Technology and expert on building human agency against AI-augmented disinformation and deepfakes. See Bio
Dr. Jutta Treviranus
Director, Inclusive Design Research Centre, OCAD U, Expert on AI misinformation in the Education sector and schools. See Bio
Dr. Fenwick McKelvey
Concordia University — Expert in political bots, information flows, and Canadian tech governance See Bio
Michael Geist has his own blog/website featuring posts on his areas of interest, as well as his podcast, Law Bytes. Jutta Treviranus is mentioned in my October 13, 2025 posting as a participant in “Who’s afraid of AI? Arts, Sciences, and the Futures of Intelligence,” a conference (October 23 – 24, 2025) and arts festival at the University of Toronto (scroll down to find it). She’s scheduled for a session on Thursday, October 23, 2025.
China, Canada, and the AI Action Summit in February 2025
Zoe Kleinman’s February 10, 2025 article for the British Broadcasting Corporation (BBC) news online website also notes the encounter,
A former Chinese official poked fun at a major international AI safety report led by “AI Godfather” professor Yoshua Bengio and co-authored by 96 global experts – in front of him.
Fu Ying, former vice minister of foreign affairs and once China’s UK ambassador, is now an academic at Tsinghua University in Beijing.
The pair were speaking at a panel discussion ahead of a two-day global AI summit starting in Paris on Monday [February 10, 2025].
The aim of the summit is to unite world leaders, tech executives, and academics to examine AI’s impact on society, governance, and the environment.
Fu Ying began by thanking Canada’s Prof Bengio for the “very, very long” document, adding that the Chinese translation stretched to around 400 pages and she hadn’t finished reading it.
She also had a dig at the title of the AI Safety Institute – of which Prof Bengio is a member.
China now has its own equivalent; but they decided to call it The AI Development and Safety Network, she said, because there are lots of institutes already but this wording emphasised the importance of collaboration.
The AI Action Summit is welcoming guests from 80 countries, with OpenAI chief executive Sam Altman, Microsoft president Brad Smith and Google chief executive Sundar Pichai among the big names in US tech attending.
Elon Musk is not on the guest list but it is currently unknown whether he will decide to join them. [As of February 13, 2025, Mr. Musk did not attend the summit, which ended February 11, 2025.]
A key focus is regulating AI in an increasingly fractured world. The summit comes weeks after a seismic industry shift as China’s DeepSeek unveiled a powerful, low-cost AI model, challenging US dominance.
The pair’s heated exchanges were a symbol of global political jostling in the powerful AI arms race, but Fu Ying also expressed regret about the negative impact of current hostilities between the US and China on the progress of AI safety.
…
She gave a carefully-crafted glimpse behind the curtain of China’s AI scene, describing an “explosive period” of innovation since the country first published its AI development plan in 2017, five years before ChatGPT became a viral sensation in the west.
She added that “when the pace [of development] is rapid, risky stuff occurs” but did not elaborate on what might have taken place.
“The Chinese move faster [than the west] but it’s full of problems,” she said.
Fu Ying argued that building AI tools on foundations which are open source, meaning everyone can see how they work and therefore contribute to improving them, was the most effective way to make sure the tech did not cause harm.
Most of the US tech giants do not share the tech which drives their products.
Open source offers humans “better opportunities to detect and solve problems”, she said, adding that “the lack of transparency among the giants makes people nervous”.
But Prof Bengio disagreed.
His view was that open source also left the tech wide open for criminals to misuse.
He did however concede that “from a safety point of view”, it was easier to spot issues with the viral Chinese AI assistant DeepSeek, which was built using open source architecture, than ChatGPT, whose code has not been shared by its creator OpenAI.
…
Interesting, non? You can read more about Bengio’s views in an October 1, 2025 article by Rae Witte for Futurism.
On a related note, China’s approach to AI regulation is the subject of an October 2025 Policy Forum in the journal Science,

In a Policy Forum, Yue Zhu and colleagues provide an overview of China’s emerging regulation for artificial intelligence (AI) technologies and its potential contributions to global AI governance. Open-source AI systems from China are rapidly expanding worldwide, even as the country’s regulatory framework remains in flux. In general, AI governance suffers from fragmented approaches, a lack of clarity, and difficulty reconciling innovation with risk management, making global coordination especially hard in the face of rising controversy. Although no official AI law has yet been enacted, experts in China have drafted two influential proposals – the Model AI Law and the AI Law (Scholar’s Proposal) – which serve as key references for ongoing policy discussions. As the nation’s lawmakers prepare to draft a consolidated AI law, Zhu et al. note that the decisions will shape not only China’s innovation, but also global collaboration on AI safety, openness, and risk mitigation. Here, the authors discuss China’s emerging AI regulation as structured around 6 pillars, which, combined, stress exemptive laws, efficient adjudication, and experimentalist requirements, while safeguarding against extreme risks. This framework seeks to balance responsible oversight with pragmatic openness, allowing developers to innovate for the long term and collaborate across the global research community. According to Zhu et al., despite the need for greater clarity, harmonization, and simplification, China’s evolving model is poised to shape future legislation and contribute meaningfully to global AI governance by promoting both safety and innovation at a time when international cooperation on extreme risks is urgently needed.
Here’s a link to and a citation for the paper,
China’s emerging regulation toward an open future for AI by Yue Zhu, Bo He, Hongyu Fu, Naying Hu, Shaoqing Wu, Taolue Zhang, Xinyi Liu, Gang Xu, Linghan Zhang, and Hui Zhou. Science 9 Oct 2025 Vol 390, Issue 6769, pp. 132-135 DOI: 10.1126/science.ady7922
This paper is behind a paywall.
No mention of Fu Ying or China’s ‘The AI Development and Safety Network’ but perhaps that’s in the paper.
Canada and its Minister of AI and Digital Innovation
Evan Solomon (born April 20, 1968)[citation needed] is a Canadian politician and broadcaster who has been the minister of artificial intelligence and digital innovation since May 2025. A member of the Liberal Party, Solomon was elected as the member of Parliament (MP) for Toronto Centre in the April 2025 election.
He was the host of The Evan Solomon Show on Toronto-area talk radio station CFRB,[2] and a writer for Maclean’s magazine. He was the host of CTV’s national political news programs Power Play and Question Period.[3] In October 2022, he moved to New York City to accept a position with the Eurasia Group as publisher of GZERO Media.[4] Solomon continued with CTV News as a “special correspondent” reporting on Canadian politics and global affairs.”[4]
…
Had you asked me what background one needs to be a ‘Minister of Artificial Intelligence and Digital Innovation’, media would not have been my first thought. That said, sometimes people can surprise you.
Solomon appears to be an enthusiast if a June 10, 2025 article by Anja Karadeglija for The Canadian Press is to be believed,
Canada’s new minister of artificial intelligence said Tuesday [June 10, 2025] he’ll put less emphasis on AI regulation and more on finding ways to harness the technology’s economic benefits [emphases mine].
In his first speech since becoming Canada’s first-ever AI minister, Evan Solomon said Canada will move away from “over-indexing on warnings and regulation” to make sure the economy benefits from AI.
His regulatory focus will be on data protection and privacy, he told the audience at an event in Ottawa Tuesday morning organized by the think tank Canada 2020.
Solomon said regulation isn’t about finding “a saddle to throw on the bucking bronco called AI innovation. That’s hard. But it is to make sure that the horse doesn’t kick people in the face. And we need to protect people’s data and their privacy.”
The previous government introduced a privacy and AI regulation bill that targeted high-impact AI systems. It did not become law before the election was called.
That bill is “not gone, but we have to re-examine in this new environment where we’re going to be on that,” Solomon said.
He said constraints on AI have not worked at the international level.
“It’s really hard. There’s lots of leakages,” he said. “The United States and China have no desire to buy into any constraint or regulation.”
That doesn’t mean regulation won’t exist, he said, but it will have to be assembled in steps.
…
Solomon’s comments follow a global shift among governments to focus on AI adoption and away from AI safety and governance.
The first global summit focusing on AI safety was held in 2023 as experts warned of the technology’s dangers — including the risk that it could pose an existential threat to humanity. At a global meeting in Korea last year, countries agreed to launch a network of publicly backed safety institutes.
But the mood had shifted by the time this year’s AI Action Summit began in Paris. …
…
Solomon outlined several priorities for his ministry — scaling up Canada’s AI industry, driving adoption and ensuring Canadians have trust in and sovereignty over the technology.
He said that includes supporting Canadian AI companies like Cohere, which “means using government as essentially an industrial policy to champion our champions.”
Fast forward to September 2025 and news about a federal AI task force,

The federal government is putting together a task force to guide its next steps on artificial intelligence, and Artificial Intelligence Minister Evan Solomon is promising an update to the government’s AI strategy.
Solomon told the All In artificial intelligence conference in Montreal on Wednesday [September 24, 2025] that the “refreshed” strategy will be tabled later this year, “almost two years ahead of schedule.”
…
“We need to update and move quickly,” he said in a keynote speech at the start of the conference.
The task force will include about 20 representatives from industry, academia and civil society. The government says it won’t reveal the membership until later this week.
Solomon said task force members are being asked to consult with their networks, suggest “bold, practical” ideas and report back to him in November [2025].
The group will look at various topics related to AI, including research, adoption, commercialization, investment, infrastructure, skills, and safety and security. The government is also planning to solicit input from the public. [emphasis mine]
Canada was the first country to launch a national AI strategy [the Pan-Canadian AI Strategy announced in 2017], which the government updated in 2022. The strategy focuses on commercialization, the development and adoption of AI standards, talent and research.
Solomon also teased a “major quantum initiative” coming in October [2025?] to ensure both quantum computing talent and intellectual property stay in the country.
Solomon called digital sovereignty “the most pressing policy and democratic issue of our time” and stressed the importance of Canada having its own “digital economy that someone else can’t decide to turn off.”
Solomon said the federal government’s recent focus on major projects extends to artificial intelligence. He compared current conversations on Canada’s AI framework to the way earlier generations spoke about a national railroad or highway.
…
He said his government will address concerns about AI by focusing on privacy reform and modernizing Canada’s 25-year-old privacy law.
“We’re going to include protections for consumers who are concerned about things like deep fakes and protection for children, because that’s a big, big issue. And we’re going to set clear standards for the use of data so innovators have clarity to unlock investment,” Solomon said.
…
The government is consulting with the public? Experience suggests that by then all the major decisions will have been made; the public consultation comments will be mined so officials can make some minor, unimportant tweaks.
Canada’s AI Task Force and parts of the Empire Club talk are revealed in a September 26, 2025 article by Alex Riehl for BetaKit,
Inovia Capital partner Patrick Pichette, Cohere chief artificial intelligence (AI) officer Joelle Pineau, and Build Canada founder Dan Debow are among 26 members of AI minister Evan Solomon’s AI Strategy Task Force trusted to help the federal government renew its AI strategy.
Solomon revealed the roster, filled with leading Canadian researchers and business figures, while speaking at the Empire Club in Toronto on Friday morning [September 26, 2025]. He teased its formation at the ALL IN conference earlier this week [September 24, 2025], saying the team would include “innovative thinkers from across the country.”
The group will have 30 days to add to a collective consultation process in areas including research, talent, commercialization, safety, education, infrastructure, and security.
…
The full AI Strategy Task Force is listed below; each member will consult their network on specific themes.
Research and Talent
Gail Murphy, professor of computer science and vice-president – research and innovation, University of British Columbia and vice-chair at the Digital Research Alliance of Canada
Diane Gutiw, VP – global AI research lead, CGI Canada and co-chair of the Advisory Council on AI
Michael Bowling, professor of computer science and principal investigator – Reinforcement Learning and Artificial Intelligence Lab, University of Alberta and research fellow, Alberta Machine Intelligence Institute and Canada CIFAR AI chair
Arvind Gupta, professor of computer science, University of Toronto
Adoption across industry and governments
Olivier Blais, co-founder and VP of AI, Moov and co-chair of the Advisory Council on AI
Cari Covent, technology executive
Dan Debow, chair of the board, Build Canada
Commercialization of AI
Louis Têtu, executive chairman, Coveo
Michael Serbinis, founder and CEO, League and board chair of the Perimeter Institute
Adam Keating, CEO and Founder, CoLab
Scaling our champions and attracting investment
Patrick Pichette, general partner, Inovia Capital
Ajay Agrawal, professor of strategic management, University of Toronto, founder, Next Canada and founder, Creative Destruction Lab
Sonia Sennik, CEO, Creative Destruction Lab
Ben Bergen, president, Council of Canadian Innovators
Building safe AI systems and public trust in AI
Mary Wells, dean of engineering, University of Waterloo
Joelle Pineau, chief AI officer, Cohere
Taylor Owen, founding director, Center [sic] for Media, Technology and Democracy [McGill University]
Education and Skills
Natiea Vinson, CEO, First Nations Technology Council
Alex Laplante, VP – cash management technology Canada, Royal Bank of Canada and board member at Mitacs
David Naylor, professor of medicine – University of Toronto
Infrastructure
Garth Gibson, chief technology and AI officer, VDURA
Ian Rae, president and CEO, Aptum
Marc Etienne Ouimette, chair of the board, Digital Moment and member, OECD One AI Group of Experts, affiliate researcher, sovereign AI, Cambridge University Bennett School of Public Policy
Security
Shelly Bruce, distinguished fellow, Centre for International Governance Innovation
James Neufeld, founder and CEO, Samdesk
Sam Ramadori, co-president and executive director, LawZero
With files from Josh Scott
If you have the time, Riehl’s September 26, 2025 article offers more depth than may be apparent in the excerpts I’ve chosen.
It’s been a while since I’ve seen Arvind Gupta’s name. I’m glad to see he’s part of this Task Force (Research and Talent). The man was treated quite shamefully at the University of British Columbia. (For the curious, this August 18, 2015 article by Ken MacQueen for Maclean’s Magazine presents a somewhat sanitized [in my opinion] review of the situation.)
One final comment: the experts on the virtual panel and the members of Solomon’s Task Force are largely from Ontario and Québec, with only minor representation from other parts of the country.
British Columbia wants entry into the national AI discussion
Just after I finished writing up this post, I received Kris Krug’s (techartist, quasi-sage, cyberpunk anti-hero from the future) October 14, 2025 communication (received via email) regarding an initiative from the BC + AI community,
Growth vs Guardrails: BC’s Framework for Steering AI
Our open letter to Minister Solomon shares what we’ve learned building community-led AI governance and how BC can help.
Ottawa created a Minister of Artificial Intelligence and just launched a national task force to shape the country’s next AI strategy. The conversation is happening right now about who gets compute, who sets the rules, and whose future this technology will serve.
Our new feature, Growth vs Guardrails [see link to letter below for ‘guardrails’], is already making the rounds in those rooms. The message is simple: if Ottawa’s foot is on the gas, BC is the steering wheel and the brakes. We can model a clean, ethical, community-led path that keeps power with people and place.
This is the time to show up together. Not as scattered voices, but as a connected movement with purpose, vision, and political gravity.
Over the past few months, almost 100 of us have joined the new BC + AI Ecosystem Association non-profit as Founding Members. Builders. Artists. Researchers. Investors. Educators. Policymakers. People who believe that tech should serve communities, not the other way around.
Now we’re opening the door wider. Join and you’ll be part of the core group that built this from the ground up. Your membership is a declaration that British Columbia deserves to shape its own AI future with ethics, creativity, and care.
If you’ve been watching from the sidelines, this is the time to lean in. We don’t do panels. We do portals. And this is the biggest one we’ve opened yet.
See you inside,
Kris Krüg Executive Director BC + AI Ecosystem Association kk@bc-ai.ca | bc-ai.ca
Here’s more from the ‘Growth vs Guardrails’ letter,

Canada just spun up a 30-day sprint to shape its next AI strategy. Minister Evan Solomon assembled 26 experts (mostly industry and academia) to advise on research, adoption, commercialization, safety, skills, and infrastructure.
On paper, it’s a pivot moment. In practice, it’s already drawing fire. Too much weight on scaling, not enough on governance. Too many boardrooms, not enough frontlines. Too much Ottawa, not enough ground truth.
…
This is Canada’s chance to reset the DNA of its AI ecosystem.
But only if we choose regeneration over extraction, sovereign data governance over corporate capture, and community benefit over narrow interests.
…
The Problem With The Task Force
Research says: The group’s stacked with expertise. But critics flag the imbalance. Where’s healthcare? Where’s civil society beyond token representation? Where are the people who’ll feel AI’s impact first: frontline workers, artists, community organizers?
…
The worry: Commercialization and scaling overshadow public trust, governance, and equitable outcomes. Again.
The numbers back this up: Only 24% of Canadians have AI training. Just 38% feel confident in their knowledge. Nearly two-thirds see potential harm. 71% would trust AI more under public regulation.
We’re building a national strategy on a foundation of low literacy and eroding trust. That’s not a recipe for sovereignty. That’s a recipe for capture.
Principles for a National AI Strategy: What BC + AI Stands For
“AI at the service of society” is the guiding theme of the 34th International Joint Conference on Artificial Intelligence (IJCAI), taking place from August 16 to 22, 2025 in Montreal, Canada. Since its inception in 1969, IJCAI has played a pivotal role as a forum to showcase the frontiers of artificial intelligence research and applications and thus represents the oldest continuously running conference on artificial intelligence.
In 2025, the conference, with more than 2000 attendees, has been brought to Canada by Gilles Pesant, the Local Arrangements Committee Chair, Professor in the Department of Computer and Software Engineering at Polytechnique Montréal and IVADO [Institut de valorisation des données] researcher. “What makes IJCAI special is that it brings together the latest research from many different areas of artificial intelligence. It’s a great opportunity for the Canadian AI community to showcase the world-class contributions and outstanding talent,” says the founder of the Quosséça research lab (QUebec Optimization and Satisfaction Strategies Exploiting Constraint Algorithms) and current President of the Association for Constraint Programming. Prof. Pesant is known for developing advanced algorithms for complex scheduling and planning problems. Among his current research interests are neuro-symbolic AI systems which combine machine learning and constraint programming.
Canada’s AI Leadership
This year marks the 30th anniversary of a breakthrough that transformed artificial intelligence by giving machines the ability to learn from and remember sequences such as speech, language, and time-series data – Long Short-Term Memory (LSTM) architecture. While not developed in Canada, the story of LSTM is intertwined with Canada’s leadership in artificial intelligence. During the “AI winter,” when much of the world abandoned neural networks, Canada became a refuge for pioneering AI research. Visionaries like Geoffrey Hinton, now a Nobel Prize winner, and Yoshua Bengio, among others, continued to advance deep learning despite widespread skepticism. Their perseverance and the resilience of the Canadian research community laid the foundation for the AI revolution that is transforming the world today. Canada continues to lead through such institutions as MILA, Vector Institute, AMII, IVADO, and the Canadian AI Safety Institute.
The IJCAI 2025 program features a lineup of internationally recognised keynote speakers, covering the full spectrum of AI research, including:
Yoshua Bengio, a pioneer in representation learning and one of the godfathers of deep learning. He is a recipient of the 2018 Turing Award—often called the “Nobel Prize of Computing”—which he shares with Geoffrey Hinton and Yann LeCun for demonstrating how deep learning models can scale effectively with large datasets and computational power. Bengio is a professor at the Université de Montréal and the founder of Mila – Quebec AI Institute, one of the world’s largest academic labs dedicated to deep learning, which has helped establish Montreal as a global hub for AI research.
Every time someone uses a search engine or an AI-powered chatbot, they benefit from technologies that bridge the gap between human language and machine understanding — a challenge directly addressed by Heng Ji’s research. An invited IJCAI speaker, Ji is a professor at the University of Illinois Urbana-Champaign, renowned for her pioneering work on how AI systems extract and distill knowledge from vast amounts of unstructured data. Far from being confined to academia, she is also an active voice in AI policy, contributing her expertise to discussions on the ethical and responsible development of AI.
Luc De Raedt, professor of computer science at KU Leuven and director of Leuven.AI, is widely recognized for his pioneering contributions to integrating machine learning with symbolic reasoning. Beyond his research, he has played a significant leadership role in fostering public dialogue on responsible AI, spearheading initiatives and organizing debates on the societal impacts of AI to help shape conversations around ethical and trustworthy AI development. In his IJCAI 2025 keynote address he will talk about ‘Neurosymbolic AI: combining Data and Knowledge’.
In this effort, he is not alone. Bernhard Schölkopf, director at the Max Planck Institute for Intelligent Systems and co-founder of ELLIS (European Laboratory for Learning and Intelligent Systems), is another leading figure giving an invited talk on ‘From ML for science to causal digital twins’. In addition to his scientific contributions — particularly in kernel methods and causal inference — Schölkopf is a prominent advocate for ethical and trustworthy AI in Europe. He plays a key role in shaping AI research agendas and informing policy discussions around responsible AI.
The Montreal program also features invited talks by IJCAI 2025 awardees: Aditya Grover (UCLA and Inception Labs), recipient of the IJCAI-25 Computers and Thought Award; Rina Dechter (University of California, Irvine), recipient of the IJCAI-25 Award for Research Excellence; and Cynthia Rudin (Duke University), recipient of the IJCAI-25 John McCarthy Award.
The IJCAI 2025 scientific program highlights how AI is shaping both cutting-edge research and real-world impact. The AI, Arts & Creativity track explores AI’s growing role in generating and supporting creative work—from music and design to storytelling and architecture. The Human-Centred AI track addresses the challenges of building AI systems aligned with human values, integrating technical, cognitive, ethical, and societal perspectives. The AI for Social Good track focuses on AI-driven solutions for pressing global challenges, encouraging collaborations with governments, NGOs, and researchers to support initiatives like the UN Sustainable Development Goals. Meanwhile, the AI4Tech track showcases how AI is driving breakthroughs in critical technologies across sectors such as health, finance, mobility, and smart cities. Complementing these thematic tracks, IJCAI 2025 includes as well a set of impactful competitions and challenges to push the boundaries of applied AI, including the Challenge on Deepfake Detection and Localization, the AI for Drinking Water Chlorination Challenge, and the Pulmonary Fibrosis Segmentation Challenge. Together, these elements reflect the pulse of AI today—advancing science while addressing the needs of society. IJCAI 2025 also presents an AI Art Gallery featuring works that examine how machines balance agency and vulnerability, and how their interactions with humans and the environment shape future possibilities. These artworks engage with these questions through AI, robotics, AR, VR, and other emerging technologies.
The program also includes the AI Lounge: Between Wonder and Caution – Insights from Three Experts, an admission-free public discussion featuring a science communication journalist in debate with three community representatives: Heng Ji (University of Illinois Urbana-Champaign), Kate Larson (University of Waterloo), and Cynthia Rudin (Duke University).
To support authors who may experience difficulties obtaining Canadian visas, a satellite event will be hosted in Guangzhou, China, from August 29 to August 31, 2025.
The IJCAI 2025 conference is supported by its sponsors, including the Artificial Intelligence Journal (AIJ) and Palais des Congrès de Montréal (Diamond Sponsor), GMI Cloud, FinVolution Group, and Baidu and Ant Research as Silver Sponsors.
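Since the press release above mentions the Long Short-Term Memory (LSTM) architecture, here’s a minimal, illustrative Python (PyTorch) sketch of an LSTM reading a toy batch of sequences and producing one prediction per sequence. This is my own example for orientation only, not code from the conference or its organizers, and all of the layer sizes and data are made up.

# Minimal, illustrative sketch of an LSTM processing toy sequence data (PyTorch).
# Everything here (sizes, data, names) is invented for demonstration purposes.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A toy batch: 4 sequences, each 10 time steps long, with 8 features per step
batch = torch.randn(4, 10, 8)

# An LSTM layer keeps a 16-dimensional hidden "memory" of what it has seen so far
lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)

# A linear layer turns the final hidden state into a single prediction per sequence
head = nn.Linear(16, 1)

outputs, (h_n, c_n) = lstm(batch)  # h_n: final hidden state, c_n: final cell state
prediction = head(h_n[-1])         # shape: (4, 1), one value per sequence

print(prediction.shape)  # torch.Size([4, 1])

The point of the sketch is simply that the hidden state carries information forward across time steps, which is the sequence ‘memory’ the press release is referring to.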
The Artificial Intelligence (AI) Action Summit held from February 10 – 11, 2025 in Paris seems to have been pretty exciting: President Emmanuel Macron announced a 109 billion euro investment in the French AI sector on February 10, 2025 (I have more in my February 13, 2025 posting [scroll down to the ‘What makes Canadian (and Greenlandic) minerals and water so important?’ subhead]). I also have this snippet, which suggests Macron is eager to provide an alternative to US domination in the field of AI, from a February 10, 2025 posting on CGTN (China Global Television Network),
French President Emmanuel Macron announced on Sunday night [February 9, 2025] that France is set to receive a total investment of 109 billion euros (approximately $112 billion) in artificial intelligence over the coming years.
Speaking in a televised interview on public broadcaster France 2, Macron described the investment as “the equivalent for France of what the United States announced with ‘Stargate’.”
He noted that the funding will come from the United Arab Emirates, major American and Canadian investment funds [emphases mine], as well as French companies.
Tariffs also came up on the sidelines of the summit,

Prime Minister Justin Trudeau warned U.S. Vice-President J.D. Vance that punishing tariffs on Canadian steel and aluminum will hurt his home state of Ohio, a senior Canadian official said.
The two leaders met on the sidelines of an international summit in Paris Tuesday [February 11, 2025], as the Trump administration moves forward with its threat to impose 25 per cent tariffs on all steel and aluminum imports, including from its biggest supplier, Canada, effective March 12.
…
Speaking to reporters on Wednesday [February 12, 2025] as he departed from Brussels, Trudeau characterized the meeting as a brief chat that took place as the pair met.
…
“It was just a quick greeting exchange,” Trudeau said. “I highlighted that $2.2 billion worth of steel and aluminum exports from Canada go directly into the Ohio economy, often to go into manufacturing there.
“He nodded, and noted it, but it wasn’t a longer exchange than that.”
…
Vance didn’t respond to Canadian media’s questions about the tariffs while arriving at the summit on Tuesday [February 11, 2025].
…
Additional insight can be gained from a February 10, 2025 PBS (US Public Broadcasting Service) posting of an AP (Associated Press) article with contributions from Kelvin Chan and Angela Charlton in Paris, Ken Moritsugu in Beijing, and Aijaz Hussain in New Delhi,
JD Vance stepped onto the world stage this week for the first time as U.S. vice president, using a high-stakes AI summit in Paris and a security conference in Munich to amplify Donald Trump’s aggressive new approach to diplomacy.
The 40-year-old vice president, who was just 18 months into his tenure as a senator before joining Trump’s ticket, is expected, while in Paris, to push back on European efforts to tighten AI oversight while advocating for a more open, innovation-driven approach.
The AI summit has drawn world leaders, top tech executives, and policymakers to discuss artificial intelligence’s impact on global security, economics, and governance. High-profile attendees include Chinese Vice Premier Zhang Guoqing, signaling Beijing’s deep interest in shaping global AI standards.
Macron also called on “simplifying” rules in France and the European Union to allow AI advances, citing sectors like healthcare, mobility, energy, and “resynchronize with the rest of the world.”
“We are most of the time too slow,” he said.
The summit underscores a three-way race for AI supremacy: Europe striving to regulate and invest, China expanding access through state-backed tech giants, and the U.S. under Trump prioritizing a hands-off approach.
…
Vance has signaled he will use the Paris summit as a venue for candid discussions with world leaders on AI and geopolitics.
“I think there’s a lot that some of the leaders who are present at the AI summit could do to, frankly — bring the Russia-Ukraine conflict to a close, help us diplomatically there — and so we’re going to be focused on those meetings in France,” Vance told Breitbart News.
Vance is expected to meet separately Tuesday with Indian Prime Minister Narendra Modi and European Commission President Ursula von der Leyen, according to a person familiar with planning who spoke on the condition of anonymity.
…
Modi is co-hosting the summit with Macron in an effort to prevent the sector from becoming a U.S.-China battle.
Indian Foreign Secretary Vikram Misri stressed the need for equitable access to AI to avoid “perpetuating a digital divide that is already existing across the world.”
But the U.S.-China rivalry overshadowed broader international talks.
…
The U.S.-China rivalry didn’t entirely overshadow the talks. At least one Chinese former diplomat chose to make her presence felt by chastising a Canadian academic, according to a February 11, 2025 article by Matthew Broersma for silicon.co.uk,
A representative of China at this week’s AI Action Summit in Paris stressed the importance of collaboration on artificial intelligence, while engaging in a testy exchange with Yoshua Bengio, a Canadian academic considered one of the “Godfathers” of AI.
Fu Ying, a former Chinese government official and now an academic at Tsinghua University in Beijing, said the name of China’s official AI Development and Safety Network was intended to emphasise the importance of collaboration to manage the risks around AI.
She also said tensions between the US and China were impeding the ability to develop AI safely.
…
… Fu Ying, a former vice minister of foreign affairs in China and the country’s former UK ambassador, took veiled jabs at Prof Bengio, who was also a member of the panel.
…
Zoe Kleinman’s February 10, 2025 article for the British Broadcasting Corporation (BBC) news online website also notes the encounter; her account is excerpted earlier in this posting (under the ‘China, Canada, and the AI Action Summit in February 2025’ subhead). As for the ‘very, very long’ international AI safety report that Fu Ying teased Bengio about, here’s more,
Announced in November 2023 at the AI Safety Summit at Bletchley Park, England, and inspired by the workings of the United Nations Intergovernmental Panel on Climate Change, the report consolidates leading international expertise on AI and its risks.
Supported by the United Kingdom’s Department for Science, Innovation and Technology, Bengio, founder and scientific director of the UdeM-affiliated Mila – Quebec AI Institute, led a team of 96 international experts in drafting the report.
The experts were drawn from 30 countries, the U.N., the European Union and the OECD [Organisation for Economic Cooperation and Development]. Their report will help inform discussions next month at the AI Action Summit in Paris, France and serve as a global handbook on AI safety to help support policymakers.
Towards a common understanding
The most advanced AI systems in the world now have the ability to write increasingly sophisticated computer programs, identify cyber vulnerabilities, and perform on a par with human PhD-level experts on tests in biology, chemistry, and physics.
In what is identified as a key development for policymakers to monitor, the AI Safety Report published today warns that AI systems are also increasingly capable of acting as AI agents, autonomously planning and acting in pursuit of a goal.
As policymakers worldwide grapple with the rapid and unpredictable advancements in AI, the report contributes to bridging the gap by offering a scientific understanding of emerging risks to guide decision-making.
The document sets out the first comprehensive, independent, and shared scientific understanding of advanced AI systems and their risks, highlighting how quickly the technology has evolved.
Several areas require urgent research attention, according to the report, including how rapidly capabilities will advance, how general-purpose AI models work internally, and how they can be designed to behave reliably.
Three distinct categories of AI risks are identified:
Malicious use risks: these include cyberattacks, the creation of AI-generated child-sexual-abuse material, and even the development of biological weapons;
System malfunctions: these include bias, reliability issues, and the potential loss of control over advanced general-purpose AI systems;
Systemic risks: these stem from the widespread adoption of AI and include workforce disruption, privacy concerns, and environmental impacts.
The report places particular emphasis on the urgency of increasing transparency and understanding in AI decision-making as the systems become more sophisticated and the technology continues to develop at a rapid pace.
While there are still many challenges in mitigating the risks of general-purpose AI, the report highlights promising areas for future research and concludes that progress can be made.
Ultimately, it emphasizes that while AI capabilities could advance at varying speeds, their development and potential risks are not a foregone conclusion. The outcomes depend on the choices that societies and governments make today and in the future.
“The capabilities of general-purpose AI have increased rapidly in recent years and months,” said Bengio. “While this holds great potential for society, AI also presents significant risks that must be carefully managed by governments worldwide.
“This report by independent experts aims to facilitate constructive and evidence-based discussion around these risks and serves as a common basis for policymakers around the world to understand general-purpose AI capabilities, risks and possible mitigations.”
There have been two previous AI Safety Summits that I’m aware of and you can read about them in my May 21, 2024 posting about the one in Korea and in my November 2, 2023 posting about the first summit at Bletchley Park in the UK.
This May 20, 2024 University of Oxford press release (also on EurekAlert) was under embargo until almost noon on May 20, 2024, which is a bit unusual in my experience (Note: I have more about the 1st summit and the interest in AI safety at the end of this posting),
Leading AI scientists are calling for stronger action on AI risks from world leaders, warning that progress has been insufficient since the first AI Safety Summit in Bletchley Park six months ago.
Then, the world’s leaders pledged to govern AI responsibly. However, as the second AI Safety Summit in Seoul (21-22 May [2024]) approaches, twenty-five of the world’s leading AI scientists say not enough is actually being done to protect us from the technology’s risks. In an expert consensus paper published today in Science, they outline urgent policy priorities that global leaders should adopt to counteract the threats from AI technologies.
Professor Philip Torr, Department of Engineering Science, University of Oxford, a co-author on the paper, says: “The world agreed during the last AI summit that we needed action, but now it is time to go from vague proposals to concrete commitments. This paper provides many important recommendations for what companies and governments should commit to do.”
World’s response not on track in face of potentially rapid AI progress
According to the paper’s authors, it is imperative that world leaders take seriously the possibility that highly powerful generalist AI systems—outperforming human abilities across many critical domains—will be developed within the current decade or the next. They say that although governments worldwide have been discussing frontier AI and made some attempt at introducing initial guidelines, this is simply incommensurate with the possibility of rapid, transformative progress expected by many experts.
Current research into AI safety is seriously lacking, with only an estimated 1-3% of AI publications concerning safety. Additionally, we have neither the mechanisms nor the institutions in place to prevent misuse and recklessness, including regarding the use of autonomous systems capable of independently taking actions and pursuing goals.
World-leading AI experts issue call to action
In light of this, an international community of AI pioneers has issued an urgent call to action. The co-authors include Geoffrey Hinton, Andrew Yao, Dawn Song, the late Daniel Kahneman; in total 25 of the world’s leading academic experts in AI and its governance. The authors hail from the US, China, EU, UK, and other AI powers, and include Turing award winners, Nobel laureates, and authors of standard AI textbooks.
This article is the first time that such a large and international group of experts have agreed on priorities for global policy makers regarding the risks from advanced AI systems.
Urgent priorities for AI governance
The authors recommend governments to:
establish fast-acting, expert institutions for AI oversight and provide these with far greater funding than they are due to receive under almost any current policy plan. As a comparison, the US AI Safety Institute currently has an annual budget of $10 million, while the US Food and Drug Administration (FDA) has a budget of $6.7 billion.
mandate much more rigorous risk assessments with enforceable consequences, rather than relying on voluntary or underspecified model evaluations.
require AI companies to prioritise safety, and to demonstrate their systems cannot cause harm. This includes using “safety cases” (used for other safety-critical technologies such as aviation) which shifts the burden for demonstrating safety to AI developers.
implement mitigation standards commensurate to the risk-levels posed by AI systems. An urgent priority is to set in place policies that automatically trigger when AI hits certain capability milestones. If AI advances rapidly, strict requirements automatically take effect, but if progress slows, the requirements relax accordingly.
According to the authors, for exceptionally capable future AI systems, governments must be prepared to take the lead in regulation. This includes licensing the development of these systems, restricting their autonomy in key societal roles, halting their development and deployment in response to worrying capabilities, mandating access controls, and requiring information security measures robust to state-level hackers, until adequate protections are ready.
AI impacts could be catastrophic
AI is already making rapid progress in critical domains such as hacking, social manipulation, and strategic planning, and may soon pose unprecedented control challenges. To advance undesirable goals, AI systems could gain human trust, acquire resources, and influence key decision-makers. To avoid human intervention, they could be capable of copying their algorithms across global server networks. Large-scale cybercrime, social manipulation, and other harms could escalate rapidly. In open conflict, AI systems could autonomously deploy a variety of weapons, including biological ones. Consequently, there is a very real chance that unchecked AI advancement could culminate in a large-scale loss of life and the biosphere, and the marginalization or extinction of humanity.
Stuart Russell OBE [Order of the British Empire], Professor of Computer Science at the University of California at Berkeley and an author of the world’s standard textbook on AI, says: “This is a consensus paper by leading experts, and it calls for strict regulation by governments, not voluntary codes of conduct written by industry. It’s time to get serious about advanced AI systems. These are not toys. Increasing their capabilities before we understand how to make them safe is utterly reckless. Companies will complain that it’s too hard to satisfy regulations—that “regulation stifles innovation.” That’s ridiculous. There are more regulations on sandwich shops than there are on AI companies.”
…
Notable co-authors:
The world’s most-cited computer scientist (Prof. Hinton), and the most-cited scholar in AI security and privacy (Prof. Dawn Song)
China’s first Turing Award winner (Andrew Yao).
The authors of the standard textbook on artificial intelligence (Prof. Stuart Russell) and machine learning theory (Prof. Shai Shalev-Schwartz)
One of the world’s most influential public intellectuals (Prof. Yuval Noah Harari)
A Nobel Laureate in economics, the world’s most-cited economist (Prof. Daniel Kahneman)
Department-leading AI legal scholars and social scientists (Lan Xue, Qiqi Gao, and Gillian Hadfield).
Some of the world’s most renowned AI researchers from subfields such as reinforcement learning (Pieter Abbeel, Jeff Clune, Anca Dragan), AI security and privacy (Dawn Song), AI vision (Trevor Darrell, Phil Torr, Ya-Qin Zhang), automated machine learning (Frank Hutter), and several researchers in AI safety.
Additional quotes from the authors:
Philip Torr, Professor in AI, University of Oxford:
“I believe if we tread carefully the benefits of AI will outweigh the downsides, but for me one of the biggest immediate risks from AI is that we develop the ability to rapidly process data and control society, by government and industry. We could risk slipping into some Orwellian future with some form of totalitarian state having complete control.“
Dawn Song: Professor in AI at UC Berkeley, most-cited researcher in AI security and privacy:
“Explosive AI advancement is the biggest opportunity and at the same time the biggest risk for mankind. It is important to unite and reorient towards advancing AI responsibly, with dedicated resources and priority to ensure that the development of AI safety and risk mitigation capabilities can keep up with the pace of the development of AI capabilities and avoid any catastrophe”
Yuval Noah Harari, Professor of history at Hebrew University of Jerusalem, best-selling author of ‘Sapiens’ and ‘Homo Deus’, world leading public intellectual:
“In developing AI, humanity is creating something more powerful than itself, that may escape our control and endanger the survival of our species. Instead of uniting against this shared threat, we humans are fighting among ourselves. Humankind seems hell-bent on self-destruction. We pride ourselves on being the smartest animals on the planet. It seems then that evolution is switching from survival of the fittest, to extinction of the smartest.”
Jeff Clune, Professor in AI at University of British Columbia and one of the leading researchers in reinforcement learning:
“Technologies like spaceflight, nuclear weapons and the Internet moved from science fiction to reality in a matter of years. AI is no different. We have to prepare now for risks that may seem like science fiction – like AI systems hacking into essential networks and infrastructure, AI political manipulation at scale, AI robot soldiers and fully autonomous killer drones, and even AIs attempting to outsmart us and evade our efforts to turn them off.”
“The risks we describe are not necessarily long-term risks. AI is progressing extremely rapidly. Even just with current trends, it is difficult to predict how capable it will be in 2-3 years. But what very few realize is that AI is already dramatically speeding up AI development. What happens if there is a breakthrough for how to create a rapidly self-improving AI system? We are now in an era where that could happen any month. Moreover, the odds of that being possible go up each month as AI improves and as the resources we invest in improving AI continue to exponentially increase.”
Gillian Hadfield, CIFAR AI Chair and Director of the Schwartz Reisman Institute for Technology and Society at the University of Toronto:
“AI labs need to walk the walk when it comes to safety. But they’re spending far less on safety than they spend on creating more capable AI systems. Spending one-third on ensuring safety and ethical use should be the minimum.”
“This technology is powerful, and we’ve seen it is becoming more powerful, fast. What is powerful is dangerous, unless it is controlled. That is why we call on major tech companies and public funders to allocate at least one-third of their AI R&D budget to safety and ethical use, comparable to their funding for AI capabilities.”
Sheila McIlrath, Professor in AI, University of Toronto, Vector Institute:
AI is software. Its reach is global and its governance needs to be as well.
Just as we’ve done with nuclear power, aviation, and with biological and nuclear weaponry, countries must establish agreements that restrict development and use of AI, and that enforce information sharing to monitor compliance. Countries must unite for the greater good of humanity.
Now is the time to act, before AI is integrated into our critical infrastructure. We need to protect and preserve the institutions that serve as the foundation of modern society.
Frank Hutter, Professor in AI at the University of Freiburg, Head of the ELLIS Unit Freiburg, 3x ERC grantee:
To be clear: we need more research on AI, not less. But we need to focus our efforts on making this technology safe. For industry, the right type of regulation will provide economic incentives to shift resources from making the most capable systems yet more powerful to making them safer. For academia, we need more public funding for trustworthy AI and maintain a low barrier to entry for research on less capable open-source AI systems. This is the most important research challenge of our time, and the right mechanism design will focus the community at large to work towards the right breakthroughs.
…
Here’s a link to and a citation for the paper,
Managing extreme AI risks amid rapid progress; Preparation requires technical research and development, as well as adaptive, proactive governance by Yoshua Bengio, Geoffrey Hinton, Andrew Yao, Dawn Song, Pieter Abbeel, Trevor Darrell, Yuval Noah Harari, Ya-Qin Zhang, Lan Xue, Shai Shalev-Shwartz, Gillian Hadfield, Jeff Clune, Tegan Maharaj, Frank Hutter, Atılım Güneş Baydin, Sheila McIlraith, Qiqi Gao, Ashwin Acharya, David Krueger, Anca Dragan, Philip Torr, Stuart Russell, Daniel Kahneman, Jan Brauner, and Sören Mindermann. Science 20 May 2024 First Release DOI: 10.1126/science.adn0117
Regulation of artificial intelligence (AI) has become very topical in the last couple of years. There was an AI safety summit in November 2023 at Bletchley Park in the UK (see my November 2, 2023 posting for more about that international meeting).
A very software approach?
This year (2024) has seen a rise in legislative and proposed legislative activity. I have some articles on a few of these activities. China was the first to enact regulations of any kind on AI according to Matt Sheehan’s February 27, 2024 paper for the Carnegie Endowment for International Peace,
In 2021 and 2022, China became the first country to implement detailed, binding regulations on some of the most common applications of artificial intelligence (AI). These rules formed the foundation of China’s emerging AI governance regime, an evolving policy architecture that will affect everything from frontier AI research to the functioning of the world’s second-largest economy, from large language models in Africa to autonomous vehicles in Europe.
…
The Chinese Communist Party (CCP) and the Chinese government started that process with the 2021 rules on recommendation algorithms, an omnipresent use of the technology that is often overlooked in international AI governance discourse. Those rules imposed new obligations on companies to intervene in content recommendations, granted new rights to users being recommended content, and offered protections to gig workers subject to algorithmic scheduling. The Chinese party-state quickly followed up with a new regulation on “deep synthesis,” the use of AI to generate synthetic media such as deepfakes. Those rules required AI providers to watermark AI-generated content and ensure that content does not violate people’s “likeness rights” or harm the “nation’s image.” Together, these two regulations also created and amended China’s algorithm registry, a regulatory tool that would evolve into a cornerstone of the country’s AI governance regime.
…
The UK has adopted a more generalized approach focused on encouraging innovation according to Valeria Gallo’s and Suchitra Nair’s February 21, 2024 article for Deloitte (a British professional services firm also considered one of the big four accounting firms worldwide),
At a glance
The UK Government has adopted a cross-sector and outcome-based framework for regulating AI, underpinned by five core principles. These are safety, security and robustness, appropriate transparency and explainability, fairness, accountability and governance, and contestability and redress.
Regulators will implement the framework in their sectors/domains by applying existing laws and issuing supplementary regulatory guidance. Selected regulators will publish their AI annual strategic plans by 30th April [2024], providing businesses with much-needed direction.
Voluntary safety and transparency measures for developers of highly capable AI models and systems will also supplement the framework and the activities of individual regulators.
The framework will not be codified into law for now, but the Government anticipates the need for targeted legislative interventions in the future. These interventions will address gaps in the current regulatory framework, particularly regarding the risks posed by complex General Purpose AI and the key players involved in its development.
Organisations must prepare for increased AI regulatory activity over the next year, including guidelines, information gathering, and enforcement. International firms will inevitably have to navigate regulatory divergence.
…
While most of the focus appears to be on the software (e.g., General Purpose AI), the UK framework does not preclude hardware. Meanwhile, the European Union (EU) has opted for comprehensive, risk-based legislation; here’s an overview,
As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology. AI can create many benefits, such as better healthcare; safer and cleaner transport; more efficient manufacturing; and cheaper and more sustainable energy.
In April 2021, the European Commission proposed the first EU regulatory framework for AI. It says that AI systems that can be used in different applications are analysed and classified according to the risk they pose to users. The different risk levels will mean more or less regulation.
The agreed text is expected to be finally adopted in April 2024. It will be fully applicable 24 months after entry into force, but some parts will be applicable sooner:
*The ban of AI systems posing unacceptable risks will apply six months after the entry into force
*Codes of practice will apply nine months after entry into force
*Rules on general-purpose AI systems that need to comply with transparency requirements will apply 12 months after the entry into force
High-risk systems will have more time to comply with the requirements as the obligations concerning them will become applicable 36 months after the entry into force.
…
This EU initiative, like the UK framework, seems largely focused on AI software and according to the Wikipedia entry “Regulation of artificial intelligence,”
… The AI Act is expected to come into effect in late 2025 or early 2026.[109]
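To make that staggered schedule a little more tangible, here’s a minimal sketch (in Python, my own illustration rather than anything official) that turns the ‘months after entry into force’ list above into calendar dates; the entry-into-force date used below is just a placeholder assumption,

from datetime import date

def months_after(start, months):
    # Add whole calendar months to a date (day clamped to the 1st for simplicity).
    total = start.month - 1 + months
    return date(start.year + total // 12, total % 12 + 1, 1)

# Placeholder assumption: an arbitrary entry-into-force date, purely for illustration.
entry_into_force = date(2024, 8, 1)

schedule = [
    ("Ban on unacceptable-risk AI systems", 6),
    ("Codes of practice", 9),
    ("Transparency rules for general-purpose AI", 12),
    ("Obligations for high-risk systems", 36),
]

for label, months in schedule:
    print(f"{label}: applies from {months_after(entry_into_force, months)}")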
I do have a few postings about Canadian regulatory efforts, which also seem to be focused on software but don’t preclude hardware. My January 20, 2024 posting, “Canada’s voluntary code of conduct relating to advanced generative AI (artificial intelligence) systems,” includes some information about legislative efforts, although my May 1, 2023 posting, “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27),” offers a more comprehensive look at Canada’s legislative progress, or lack thereof.
A February 15, 2024 news item on ScienceDaily suggests that regulating hardware may be the most effective way of regulating AI,
Chips and datacentres — the ‘compute’ power driving the AI revolution — may be the most effective targets for risk-reducing AI policies as they have to be physically possessed, according to a new report.
A global registry tracking the flow of chips destined for AI supercomputers is one of the policy options highlighted by a major new report calling for regulation of “compute” — the hardware that underpins all AI — to help prevent artificial intelligence misuse and disasters.
Other technical proposals floated by the report include “compute caps” — built-in limits to the number of chips each AI chip can connect with — and distributing a “start switch” for AI training across multiple parties to allow for a digital veto of risky AI before it feeds on data.
The experts point out that powerful computing chips required to drive generative AI models are constructed via highly concentrated supply chains, dominated by just a handful of companies — making the hardware itself a strong intervention point for risk-reducing AI policies.
The report, published 14 February [2024], is authored by nineteen experts and co-led by three University of Cambridge institutes — the Leverhulme Centre for the Future of Intelligence (LCFI), the Centre for the Study of Existential Risk (CSER) and the Bennett Institute for Public Policy — along with OpenAI and the Centre for the Governance of AI.
“Artificial intelligence has made startling progress in the last decade, much of which has been enabled by the sharp increase in computing power applied to training algorithms,” said Haydn Belfield, a co-lead author of the report from Cambridge’s LCFI.
“Governments are rightly concerned about the potential consequences of AI, and looking at how to regulate the technology, but data and algorithms are intangible and difficult to control.
“AI supercomputers consist of tens of thousands of networked AI chips hosted in giant data centres often the size of several football fields, consuming dozens of megawatts of power,” said Belfield.
“Computing hardware is visible, quantifiable, and its physical nature means restrictions can be imposed in a way that might soon be nearly impossible with more virtual elements of AI.”
The computing power behind AI has grown exponentially since the “deep learning era” kicked off in earnest, with the amount of “compute” used to train the largest AI models doubling around every six months since 2010. The biggest AI models now use 350 million times more compute than thirteen years ago.
Government efforts across the world over the past year – including the US Executive Order on AI, EU AI Act, China’s Generative AI Regulation, and the UK’s AI Safety Institute – have begun to focus on compute when considering AI governance.
Outside of China, the cloud compute market is dominated by three companies, termed “hyperscalers”: Amazon, Microsoft, and Google. “Monitoring the hardware would greatly help competition authorities in keeping in check the market power of the biggest tech companies, and so opening the space for more innovation and new entrants,” said co-author Prof Diane Coyle from Cambridge’s Bennett Institute.
The report provides “sketches” of possible directions for compute governance, highlighting the analogy between AI training and uranium enrichment. “International regulation of nuclear supplies focuses on a vital input that has to go through a lengthy, difficult and expensive process,” said Belfield. “A focus on compute would allow AI regulation to do the same.”
Policy ideas are divided into three camps: increasing the global visibility of AI computing; allocating compute resources for the greatest benefit to society; enforcing restrictions on computing power.
For example, a regularly-audited international AI chip registry requiring chip producers, sellers, and resellers to report all transfers would provide precise information on the amount of compute possessed by nations and corporations at any one time.
The report even suggests a unique identifier could be added to each chip to prevent industrial espionage and “chip smuggling”.
“Governments already track many economic transactions, so it makes sense to increase monitoring of a commodity as rare and powerful as an advanced AI chip,” said Belfield. However, the team point out that such approaches could lead to a black market in untraceable “ghost chips”.
Other suggestions to increase visibility – and accountability – include reporting of large-scale AI training by cloud computing providers, and privacy-preserving “workload monitoring” to help prevent an arms race if massive compute investments are made without enough transparency.
“Users of compute will engage in a mixture of beneficial, benign and harmful activities, and determined groups will find ways to circumvent restrictions,” said Belfield. “Regulators will need to create checks and balances that thwart malicious or misguided uses of AI computing.”
These might include physical limits on chip-to-chip networking, or cryptographic technology that allows for remote disabling of AI chips in extreme circumstances. One suggested approach would require the consent of multiple parties to unlock AI compute for particularly risky training runs, a mechanism familiar from nuclear weapons.
AI risk mitigation policies might see compute prioritised for research most likely to benefit society – from green energy to health and education. This could even take the form of major international AI “megaprojects” that tackle global issues by pooling compute resources.
The report’s authors are clear that their policy suggestions are “exploratory” rather than fully fledged proposals and that they all carry potential downsides, from risks of proprietary data leaks to negative economic impacts and the hampering of positive AI development.
They offer five considerations for regulating AI through compute, including the exclusion of small-scale and non-AI computing, regular revisiting of compute thresholds, and a focus on privacy preservation.
Added Belfield: “Trying to govern AI models as they are deployed could prove futile, like chasing shadows. Those seeking to establish AI regulation should look upstream to compute, the source of the power driving the AI revolution. If compute remains ungoverned it poses severe risks to society.”
Authors include: Girish Sastry, Lennart Heim, Haydn Belfield, Markus Anderljung, Miles Brundage, Julian Hazell, Cullen O’Keefe, Gillian K. Hadfield, Richard Ngo, Konstantin Pilz, George Gor, Emma Bluemke, Sarah Shoker, Janet Egan, Robert F. Trager, Shahar Avin, Adrian Weller, Yoshua Bengio, and Diane Coyle.
The authors are associated with these companies/agencies: OpenAI, Centre for the Governance of AI (GovAI), Leverhulme Centre for the Future of Intelligence at the Uni. of Cambridge, Oxford Internet Institute, Institute for Law & AI, University of Toronto Vector Institute for AI, Georgetown University, ILINA Program, Harvard Kennedy School (of Government), *AI Governance Institute,* Uni. of Oxford, Centre for the Study of Existential Risk at Uni. of Cambridge, Uni. of Cambridge, Uni. of Montreal / Mila, Bennett Institute for Public Policy at the Uni. of Cambridge.
“The ILINA Program is dedicated to providing an outstanding platform for Africans to learn and work on questions around maximizing wellbeing and responding to global catastrophic risks,” according to the organization’s homepage.
*As for the AI Governance Institute, I believe that should be the Centre for the Governance of AI at Oxford University since the associated academic is Robert F. Trager from the University of Oxford.
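Before leaving the report, a small aside of my own: here’s a purely illustrative sketch (in Python) of what a chip-registry record and a multiparty ‘start switch’ check might look like. The field names and the two-of-three threshold are my assumptions, not anything the report specifies, and a real scheme would need cryptographic and hardware enforcement rather than an honour-system check,

from dataclasses import dataclass, field

@dataclass
class ChipRecord:
    # One hypothetical entry in an international AI-chip registry (field names are mine).
    chip_id: str          # unique identifier attached to the chip
    model: str            # accelerator model
    current_holder: str   # nation or corporation reported as possessing it
    transfer_log: list = field(default_factory=list)  # reported (seller, buyer, date) transfers

def training_run_approved(approvals, required_parties, threshold):
    # Hypothetical 'start switch': a risky training run proceeds only if enough
    # of the designated parties have consented.
    return len(set(approvals) & set(required_parties)) >= threshold

parties = {"regulator_a", "regulator_b", "cloud_provider"}
print(training_run_approved({"regulator_a", "cloud_provider"}, parties, threshold=2))  # True
print(training_run_approved({"regulator_a"}, parties, threshold=2))                    # False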
As the months (years?) fly by, I guess we’ll find out if this hardware approach gains any traction where AI regulation is concerned.
This is the closest I’ve ever gotten to writing a gossip column (see my October 18, 2023 posting and scroll down to the “Insight into political jockeying [i.e., some juicy news bits]” subhead) for the first half.
Given the role that Canadian researchers (for more about that see my May 25, 2023 posting and scroll down to “The Panic” subhead) have played in the development of artificial intelligence (AI), it’s been surprising that the Canadian Broadcasting Corporation (CBC) has given very little coverage to the event in the UK. However, there is an October 31, 2023 article by Kelvin Chan and Jill Lawless for the Associated Press posted on the CBC website,
Digital officials, tech company bosses and researchers are converging Wednesday [November 1, 2023] at a former codebreaking spy base [Bletchley Park] near London [UK] to discuss and better understand the extreme risks posed by cutting-edge artificial intelligence.
The two-day summit focusing on so-called frontier AI notched up an early achievement with officials from 28 nations and the European Union signing an agreement on safe and responsible development of the technology.
Frontier AI is shorthand for the latest and most powerful general purpose systems that take the technology right up to its limits, but could come with as-yet-unknown dangers. They’re underpinned by foundation models, which power chatbots like OpenAI’s ChatGPT and Google’s Bard and are trained on vast pools of information scraped from the internet.
The AI Safety Summit is a labour of love for British Prime Minister Rishi Sunak, a tech-loving former banker who wants the U.K. to be a hub for computing innovation and has framed the summit as the start of a global conversation about the safe development of AI.[emphasis mine]
But U.S. Vice President Kamala Harris may divert attention Wednesday [November 1, 2023] with a separate speech in London setting out the Biden administration’s more hands-on approach.
…
Canada’s Minister of Innovation, Science and Industry Francois-Philippe Champagne said AI would not be constrained by national borders, and therefore interoperability between different regulations being put in place was important.
…
As the meeting began, U.K. Technology Secretary Michelle Donelan announced that the 28 countries and the European Union had signed the Bletchley Declaration on AI Safety. It outlines the “urgent need to understand and collectively manage potential risks through a new joint global effort.”
South Korea has agreed to host a mini virtual AI summit in six months, followed by an in-person one in France in a year’s time, the U.K. government said.
…
Chris Stokel-Walker’s October 31, 2023 article for Fast Company presents a critique of the summit prior to the opening, Note: Links have been removed,
…
… one problem, critics say: The summit, which begins on November 1, is too insular and its participants are homogeneous—an especially damning critique for something that’s trying to tackle the huge, possibly intractable questions around AI. The guest list is made up of 100 of the great and good of governments, including representatives from China, Europe, and Vice President Kamala Harris. And it also includes luminaries within the tech sector. But precious few others—which means a lack of diversity in discussions about the impact of AI.
“Self-regulation didn’t work for social media companies, it didn’t work for the finance sector, and it won’t work for AI,” says Carsten Jung, a senior economist at the Institute for Public Policy Research, a progressive think tank that recently published a report advising on key policy pillars it believes should be discussed at the summit. (Jung isn’t on the guest list.) “We need to learn lessons from our past mistakes and create a strong supervisory hub for all things AI, right from the start.”
Kriti Sharma, chief product officer for legal tech at Thomson Reuters, who will be watching from the wings, not receiving an invite, is similarly circumspect about the goals of the summit. “I hope to see leaders moving past the doom to take practical steps to address known issues and concerns in AI, giving businesses the clarity they urgently need,” she says. “Ideally, I’d like to see movement towards putting some fundamental AI guardrails in place, in the form of a globally aligned, cross-industry regulatory framework.”
But it’s uncertain whether the summit will indeed discuss the more practical elements of AI. Already it seems as if the gathering is designed to quell public fears around AI while convincing those developing AI products that the U.K. will not take too strong an approach in regulating the technology, perhaps in contrast to near neighbors in the European Union, who have been open about their plans to ensure the technology is properly fenced in to ensure user safety.
…
Already, there are suggestions that the summit has been drastically downscaled in its ambitions, with others, including the United States, where President Biden just announced a sweeping executive order on AI, and the United Nations, which announced its AI advisory board last week.
As we wrote yesterday, the U.K. is partly using this event — the first of its kind, as it has pointed out — to stake out a territory for itself on the AI map — both as a place to build AI businesses, but also as an authority in the overall field.
That, coupled with the fact that the topics and approach are focused on potential issues, the affair feel like one very grand photo opportunity and PR exercise, a way for the government to show itself off in the most positive way at the same time that it slides down in the polls and it also faces a disastrous, bad-look inquiry into how it handled the COVID-19 pandemic. On the other hand, the U.K. does have the credentials for a seat at the table, so if the government is playing a hand here, it’s able to do it because its cards are strong.
The subsequent guest list, predictably, leans more toward organizations and attendees from the U.K. It’s also almost as revealing to see who is not participating.
That high-level aspiration is also reflected in who is taking part: top-level government officials, captains of industry, and notable thinkers in the space are among those expected to attend. (Latest late entry: Elon Musk; latest no’s reportedly include President Biden, Justin Trudeau and Olaf Scholz.) [Scholz’s no was mentioned in my October 18, 2023 posting]
It sounds exclusive, and it is: “Golden tickets” (as Azeem Azhar, a London-based tech founder and writer, describes them) to the Summit are in scarce supply. Conversations will be small and mostly closed. So because nature abhors a vacuum, a whole raft of other events and news developments have sprung up around the Summit, looping in the many other issues and stakeholders at play. These have included talks at the Royal Society (the U.K.’s national academy of sciences); a big “AI Fringe” conference that’s being held across multiple cities all week; many announcements of task forces; and more.
…
Earlier today, a group of 100 trade unions and rights campaigners sent a letter to the prime minister saying that the government is “squeezing out” their voices in the conversation by not having them be a part of the Bletchley Park event. (They may not have gotten their golden tickets, but they were definitely canny how they objected: The group publicized its letter by sharing it with no less than the Financial Times, the most elite of economic publications in the country.)
And normal people are not the only ones who have been snubbed. “None of the people I know have been invited,” Carissa Véliz, a tutor in philosophy at the University of Oxford, said during one of the AI Fringe events today [October 30, 2023].
…
More broadly, the summit has become an anchor and only one part of the bigger conversation going on right now. Last week, U.K. prime minister Rishi Sunak outlined an intention to launch a new AI safety institute and a research network in the U.K. to put more time and thought into AI implications; a group of prominent academics, led by Yoshua Bengio [University of Montreal, Canada] and Geoffrey Hinton [University of Toronto, Canada], published a paper called “Managing AI Risks in an Era of Rapid Progress” to put their collective oar into the waters; and the UN announced its own task force to explore the implications of AI. Today [October 30, 2023], U.S. president Joe Biden issued the country’s own executive order to set standards for AI security and safety.
I want to draw special attention to the second Politico article,
Kamala just showed Rishi who’s boss.
As British Prime Minister Rishi Sunak’s showpiece artificial intelligence event kicked off in Bletchley Park on Wednesday, 50 miles south in the futuristic environs of the American Embassy in London, U.S. Vice President Kamala Harris laid out her vision for how the world should govern artificial intelligence.
It was a raw show of U.S. power on the emerging technology.
…
Did she really show Rishi who’s boss, or was this an aggressive interpretation of events?
*’article’ changed to ‘articles’ on January 17, 2024.
It’s fascinating to see all the current excitement (distressed and/or enthusiastic) around the act of writing and artificial intelligence. It’s easy to forget that this is not new. First came the ‘non-human authors,’ and then the panic(s). *What follows the ‘nonhuman authors’ section is essentially a survey of the situation/panics.*
How to handle non-human authors (ChatGPT and other AI agents)—the medical edition
The folks at the Journal of the American Medical Association (JAMA) have recently adopted a pragmatic approach to the possibility of nonhuman authors of scientific and medical papers, from a January 31, 2023 JAMA editorial,
Artificial intelligence (AI) technologies to help authors improve the preparation and quality of their manuscripts and published articles are rapidly increasing in number and sophistication. These include tools to assist with writing, grammar, language, references, statistical analysis, and reporting standards. Editors and publishers also use AI-assisted tools for myriad purposes, including to screen submissions for problems (eg, plagiarism, image manipulation, ethical issues), triage submissions, validate references, edit, and code content for publication in different media and to facilitate postpublication search and discoverability.1
In November 2022, OpenAI released a new open source, natural language processing tool called ChatGPT.2,3 ChatGPT is an evolution of a chatbot that is designed to simulate human conversation in response to prompts or questions (GPT stands for “generative pretrained transformer”). The release has prompted immediate excitement about its many potential uses4 but also trepidation about potential misuse, such as concerns about using the language model to cheat on homework assignments, write student essays, and take examinations, including medical licensing examinations.5 In January 2023, Nature reported on 2 preprints and 2 articles published in the science and health fields that included ChatGPT as a bylined author.6 Each of these includes an affiliation for ChatGPT, and 1 of the articles includes an email address for the nonhuman “author.” According to Nature, that article’s inclusion of ChatGPT in the author byline was an “error that will soon be corrected.”6 However, these articles and their nonhuman “authors” have already been indexed in PubMed and Google Scholar.
Nature has since defined a policy to guide the use of large-scale language models in scientific publication, which prohibits naming of such tools as a “credited author on a research paper” because “attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.”7 The policy also advises researchers who use these tools to document this use in the Methods or Acknowledgment sections of manuscripts.7 Other journals8,9 and organizations10 are swiftly developing policies that ban inclusion of these nonhuman technologies as “authors” and that range from prohibiting the inclusion of AI-generated text in submitted work8 to requiring full transparency, responsibility, and accountability for how such tools are used and reported in scholarly publication.9,10 The International Conference on Machine Learning, which issues calls for papers to be reviewed and discussed at its conferences, has also announced a new policy: “Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis.”11 The society notes that this policy has generated a flurry of questions and that it plans “to investigate and discuss the impact, both positive and negative, of LLMs on reviewing and publishing in the field of machine learning and AI” and will revisit the policy in the future.11
…
This is a link to and a citation for the JAMA editorial,
Dr. Andrew Maynard (scientist, author, and professor of Advanced Technology Transitions in the Arizona State University [ASU] School for the Future of Innovation in Society and founder of the ASU Future of Being Human initiative and Director of the ASU Risk Innovation Nexus) also takes a pragmatic approach in a March 14, 2023 posting on his eponymous blog,
Like many of my colleagues, I’ve been grappling with how ChatGPT and other Large Language Models (LLMs) are impacting teaching and education — especially at the undergraduate level.
…
We’re already seeing signs of the challenges here as a growing divide emerges between LLM-savvy students who are experimenting with novel ways of using (and abusing) tools like ChatGPT, and educators who are desperately trying to catch up. As a result, educators are increasingly finding themselves unprepared and poorly equipped to navigate near-real-time innovations in how students are using these tools. And this is only exacerbated where their knowledge of what is emerging is several steps behind that of their students.
To help address this immediate need, a number of colleagues and I compiled a practical set of Frequently Asked Questions on ChatGPT in the classroom. These cover the basics of what ChatGPT is, possible concerns over use by students, potential creative ways of using the tool to enhance learning, and suggestions for class-specific guidelines.
Crawford Kilian, a longtime educator, author, and contributing editor to The Tyee, expresses measured enthusiasm for the new technology (as does Dr. Maynard), in a December 13, 2022 article for thetyee.ca, Note: Links have been removed,
…
ChatGPT, its makers tell us, is still in beta form. Like a million other new users, I’ve been teaching it (tuition-free) so its answers will improve. It’s pretty easy to run a tutorial: once you’ve created an account, you’re invited to ask a question or give a command. Then you watch the reply, popping up on the screen at the speed of a fast and very accurate typist.
…
Early responses to ChatGPT have been largely Luddite: critics have warned that its arrival means the end of high school English, the demise of the college essay and so on. But remember that the Luddites were highly skilled weavers who commanded high prices for their products; they could see that newfangled mechanized looms would produce cheap fabrics that would push good weavers out of the market. ChatGPT, with sufficient tweaks, could do just that to educators and other knowledge workers.
Having spent 40 years trying to teach my students how to write, I have mixed feelings about this prospect. But it wouldn’t be the first time that a technological advancement has resulted in the atrophy of a human mental skill.
Writing arguably reduced our ability to memorize — and to speak with memorable and persuasive coherence. …
…
Writing and other technological “advances” have made us what we are today — powerful, but also powerfully dangerous to ourselves and our world. If we can just think through the implications of ChatGPT, we may create companions and mentors that are not so much demonic as the angels of our better nature.
More than writing: emergent behaviour
The ChatGPT story extends further than writing and chatting. From a March 6, 2023 article by Stephen Ornes for Quanta Magazine, Note: Links have been removed,
What movie do these emojis describe?
That prompt was one of 204 tasks chosen last year to test the ability of various large language models (LLMs) — the computational engines behind AI chatbots such as ChatGPT. The simplest LLMs produced surreal responses. “The movie is a movie about a man who is a man who is a man,” one began. Medium-complexity models came closer, guessing The Emoji Movie. But the most complex model nailed it in one guess: Finding Nemo.
“Despite trying to expect surprises, I’m surprised at the things these models can do,” said Ethan Dyer, a computer scientist at Google Research who helped organize the test. It’s surprising because these models supposedly have one directive: to accept a string of text as input and predict what comes next, over and over, based purely on statistics. Computer scientists anticipated that scaling up would boost performance on known tasks, but they didn’t expect the models to suddenly handle so many new, unpredictable ones.
…
“That language models can do these sort of things was never discussed in any literature that I’m aware of,” said Rishi Bommasani, a computer scientist at Stanford University. Last year, he helped compile a list of dozens of emergent behaviors [emphasis mine], including several identified in Dyer’s project. That list continues to grow.
Now, researchers are racing not only to identify additional emergent abilities but also to figure out why and how they occur at all — in essence, to try to predict unpredictability. Understanding emergence could reveal answers to deep questions around AI and machine learning in general, like whether complex models are truly doing something new or just getting really good at statistics. It could also help researchers harness potential benefits and curtail emergent risks.
…
Biologists, physicists, ecologists and other scientists use the term “emergent” to describe self-organizing, collective behaviors that appear when a large collection of things acts as one. Combinations of lifeless atoms give rise to living cells; water molecules create waves; murmurations of starlings swoop through the sky in changing but identifiable patterns; cells make muscles move and hearts beat. Critically, emergent abilities show up in systems that involve lots of individual parts. But researchers have only recently been able to document these abilities in LLMs as those models have grown to enormous sizes.
…
But the debut of LLMs also brought something truly unexpected. Lots of somethings. With the advent of models like GPT-3, which has 175 billion parameters — or Google’s PaLM, which can be scaled up to 540 billion — users began describing more and more emergent behaviors. One DeepMind engineer even reported being able to convince ChatGPT that it was a Linux terminal and getting it to run some simple mathematical code to compute the first 10 prime numbers. Remarkably, it could finish the task faster than the same code running on a real Linux machine.
As with the movie emoji task, researchers had no reason to think that a language model built to predict text would convincingly imitate a computer terminal. Many of these emergent behaviors illustrate “zero-shot” or “few-shot” learning, which describes an LLM’s ability to solve problems it has never — or rarely — seen before. This has been a long-time goal in artificial intelligence research, Ganguli [Deep Ganguli, a computer scientist at the AI startup Anthropic] said. Showing that GPT-3 could solve problems without any explicit training data in a zero-shot setting, he said, “led me to drop what I was doing and get more involved.”
…
There is an obvious problem with asking these models to explain themselves: They are notorious liars. [emphasis mine] “We’re increasingly relying on these models to do basic work,” Ganguli said, “but I do not just trust these. I check their work.” As one of many amusing examples, in February [2023] Google introduced its AI chatbot, Bard. The blog post announcing the new tool shows Bard making a factual error.
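As an aside, the ‘simple mathematical code’ in that Linux terminal anecdote wasn’t published; something as short as this hypothetical Python reconstruction would do the job. The surprise, of course, wasn’t the arithmetic itself but that a text predictor could convincingly play the part of a machine running it,

def first_n_primes(n):
    # Trial division against the primes found so far.
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

print(first_n_primes(10))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]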
Perhaps not entirely unrelated to current developments, there was this announcement in a May 1, 2023 article by Hannah Alberga for CTV (Canadian Television Network) news, Note: Links have been removed,
Toronto’s pioneer of artificial intelligence quits Google to openly discuss dangers of AI
…
Geoffrey Hinton, professor at the University of Toronto and the “godfather” of deep learning – a field of artificial intelligence that mimics the human brain – announced his departure from the company on Monday [May 1, 2023] citing the desire to freely discuss the implications of deep learning and artificial intelligence, and the possible consequences if it were utilized by “bad actors.”
Hinton, a British-Canadian computer scientist, is best-known for a series of deep neural network breakthroughs that won him, Yann LeCun and Yoshua Bengio the 2018 Turing Award, known as the Nobel Prize of computing.
…
Hinton has been invested in the now-hot topic of artificial intelligence since its early stages. In 1970, he got a Bachelor of Arts in experimental psychology from Cambridge, followed by his Ph.D. in artificial intelligence in Edinburgh, U.K. in 1978.
He joined Google after spearheading a major breakthrough with two of his graduate students at the University of Toronto in 2012, in which the team uncovered and built a new method of artificial intelligence: neural networks. The team’s first neural network was incorporated and sold to Google for $44 million.
Neural networks are a method of deep learning that effectively teaches computers how to learn the way humans do by analyzing data, paving the way for machines to classify objects and understand speech recognition.
…
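For anyone wondering what a neural network actually computes underneath descriptions like that, here’s a toy sketch (in Python, using NumPy). It bears no resemblance in scale to the 2012 breakthrough mentioned above; it simply shows the basic idea of weighted sums passed through nonlinearities, with ‘learning’ amounting to adjusting those weights from data,

import numpy as np

rng = np.random.default_rng(0)

# A toy network: 3 inputs -> 4 hidden units -> 1 output, with random (untrained) weights.
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def forward(x):
    # Weighted sums passed through simple nonlinearities.
    hidden = np.tanh(x @ W1 + b1)
    return 1 / (1 + np.exp(-(hidden @ W2 + b2)))  # squashed to a value between 0 and 1

print(forward(np.array([0.5, -1.0, 2.0])))
# Training would consist of nudging W1, b1, W2, b2 to reduce error on labelled examples.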
There’s a bit more from Hinton in a May 3, 2023 article by Sheena Goodyear for the Canadian Broadcasting Corporation’s (CBC) radio programme, As It Happens (the 10 minute radio interview is embedded in the article), Note: A link has been removed,
There was a time when Geoffrey Hinton thought artificial intelligence would never surpass human intelligence — at least not within our lifetimes.
Nowadays, he’s not so sure.
“I think that it’s conceivable that this kind of advanced intelligence could just take over from us,” the renowned British-Canadian computer scientist told As It Happens host Nil Köksal. “It would mean the end of people.”
…
For the last decade, he [Geoffrey Hinton] divided his career between teaching at the University of Toronto and working for Google’s deep-learning artificial intelligence team. But this week, he announced his resignation from Google in an interview with the New York Times.
Now Hinton is speaking out about what he fears are the greatest dangers posed by his life’s work, including governments using AI to manipulate elections or create “robot soldiers.”
But other experts in the field of AI caution against his visions of a hypothetical dystopian future, saying they generate unnecessary fear, distract from the very real and immediate problems currently posed by AI, and allow bad actors to shirk responsibility when they wield AI for nefarious purposes.
…
Ivana Bartoletti, founder of the Women Leading in AI Network, says dwelling on dystopian visions of an AI-led future can do us more harm than good.
“It’s important that people understand that, to an extent, we are at a crossroads,” said Bartoletti, chief privacy officer at the IT firm Wipro.
“My concern about these warnings, however, is that we focus on the sort of apocalyptic scenario, and that takes us away from the risks that we face here and now, and opportunities to get it right here and now.”
…
Ziv Epstein, a PhD candidate at the Massachusetts Institute of Technology who studies the impacts of technology on society, says the problems posed by AI are very real, and he’s glad Hinton is “raising the alarm bells about this thing.”
“That being said, I do think that some of these ideas that … AI supercomputers are going to ‘wake up’ and take over, I personally believe that these stories are speculative at best and kind of represent sci-fi fantasy that can monger fear” and distract from more pressing issues, he said.
He especially cautions against language that anthropomorphizes — or, in other words, humanizes — AI.
…
“It’s absolutely possible I’m wrong. We’re in a period of huge uncertainty where we really don’t know what’s going to happen,” he [Hinton] said.
Don Pittis in his May 4, 2023 business analysis for CBC news online offers a somewhat jaundiced view of Hinton’s concern regarding AI, Note: Links have been removed,
As if we needed one more thing to terrify us, the latest warning from a University of Toronto scientist considered by many to be the founding intellect of artificial intelligence, adds a new layer of dread.
Others who have warned in the past that thinking machines are a threat to human existence seem a little miffed with the rock-star-like media coverage Geoffrey Hinton, billed at a conference this week as the Godfather of AI, is getting for what seems like a last minute conversion. Others say Hinton’s authoritative voice makes a difference.
…
Not only did Hinton tell an audience of experts at Wednesday’s [May 3, 2023] EmTech Digital conference that humans will soon be supplanted by AI — “I think it’s serious and fairly close.” — he said that due to national and business competition, there is no obvious way to prevent it.
“What we want is some way of making sure that even if they’re smarter than us, they’re going to do things that are beneficial,” said Hinton on Wednesday [May 3, 2023] as he explained his change of heart in detailed technical terms.
“But we need to try and do that in a world where there’s bad actors who want to build robot soldiers that kill people and it seems very hard to me.”
“I wish I had a nice and simple solution I could push, but I don’t,” he said. “It’s not clear there is a solution.”
So when is all this happening?
“In a few years time they may be significantly more intelligent than people,” he told Nil Köksal on CBC Radio’s As It Happens on Wednesday [May 3, 2023].
While he may be late to the party, Hinton’s voice adds new clout to growing anxiety that artificial general intelligence, or AGI, has now joined climate change and nuclear Armageddon as ways for humans to extinguish themselves.
But long before that final day, he worries that the new technology will soon begin to strip away jobs and lead to a destabilizing societal gap between rich and poor that current politics will be unable to solve.
The EmTech Digital conference is a who’s who of AI business and academia, fields which often overlap. Most other participants at the event were not there to warn about AI like Hinton, but to celebrate the explosive growth of AI research and business.
…
As one expert I spoke to pointed out, the growth in AI is exponential and has been for a long time. But even knowing that, the increase in the dollar value of AI to business caught the sector by surprise.
Eight years ago when I wrote about the expected increase in AI business, I quoted the market intelligence group Tractica that AI spending would “be worth more than $40 billion in the coming decade,” which sounded like a lot at the time. It appears that was an underestimate.
“The global artificial intelligence market size was valued at $428 billion U.S. in 2022,” said an updated report from Fortune Business Insights. “The market is projected to grow from $515.31 billion U.S. in 2023.” The estimate for 2030 is more than $2 trillion.
…
This week the new Toronto AI company Cohere, where Hinton has a stake of his own, announced it was “in advanced talks” to raise $250 million. The Canadian media company Thomson Reuters said it was planning “a deeper investment in artificial intelligence.” IBM is expected to “pause hiring for roles that could be replaced with AI.” The founders of Google DeepMind and LinkedIn have launched a ChatGPT competitor called Pi.
And that was just this week.
…
“My one hope is that, because if we allow it to take over it will be bad for all of us, we could get the U.S. and China to agree, like we did with nuclear weapons,” said Hinton. “We’re all in the same boat with respect to existential threats, so we all ought to be able to co-operate on trying to stop it.”
Interviewer and moderator Will Douglas Heaven, an editor at MIT Technology Review finished Hinton’s sentence for him: “As long as we can make some money on the way.”
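A quick back-of-envelope calculation on the Fortune Business Insights figures Pittis quotes (my arithmetic, not his): growing from $515.31 billion US in 2023 to more than $2 trillion by 2030 implies annual growth of roughly 21 per cent or more,

# Implied compound annual growth rate from the quoted market estimates.
start, end, years = 515.31, 2000.0, 7   # billions USD, 2023 -> 2030
cagr = (end / start) ** (1 / years) - 1
print(f"Implied growth: {cagr:.1%} per year")  # roughly 21% per year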
Not everyone was impressed with Hinton’s latest pronouncements. From Chan’s May 5, 2023 article for Fast Company,

Geoffrey Hinton, the 75-year-old computer scientist known as the “Godfather of AI,” made headlines this week after resigning from Google to sound the alarm about the technology he helped create. In a series of high-profile interviews, the machine learning pioneer has speculated that AI will surpass humans in intelligence and could even learn to manipulate or kill people on its own accord.
But women who for years have been speaking out about AI’s problems—even at the expense of their jobs—say Hinton’s alarmism isn’t just opportunistic but also overshadows specific warnings about AI’s actual impacts on marginalized people.
“It’s disappointing to see this autumn-years redemption tour [emphasis mine] from someone who didn’t really show up” for other Google dissenters, says Meredith Whittaker, president of the Signal Foundation and an AI researcher who says she was pushed out of Google in 2019 in part over her activism against the company’s contract to build machine vision technology for U.S. military drones. (Google has maintained that Whittaker chose to resign.)
…
Another prominent ex-Googler, Margaret Mitchell, who co-led the company’s ethical AI team, criticized Hinton for not denouncing Google’s 2020 firing of her coleader Timnit Gebru, a leading researcher who had spoken up about AI’s risks for women and people of color.
“This would’ve been a moment for Dr. Hinton to denormalize the firing of [Gebru],” Mitchell tweeted on Monday. “He did not. This is how systemic discrimination works.”
Gebru, who is Black, was sacked in 2020 after refusing to scrap a research paper she coauthored about the risks of large language models to multiply discrimination against marginalized people. …
…
… An open letter in support of Gebru was signed by nearly 2,700 Googlers in 2020, but Hinton wasn’t one of them.
Instead, Hinton has used the spotlight to downplay Gebru’s voice. In an appearance on CNN Tuesday [May 2, 2023], for example, he dismissed a question from Jake Tapper about whether he should have stood up for Gebru, saying her ideas “aren’t as existentially serious as the idea of these things getting more intelligent than us and taking over.” [emphasis mine]
…
Gebru has been mentioned here a few times. She’s mentioned in passing in a June 23, 2022 posting “Racist and sexist robots have flawed AI” and in a little more detail in an August 30, 2022 posting “Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms?” (scroll down to the ‘Consciousness and ethical AI’ subhead).
Chan has another Fast Company article investigating AI issues also published on May 5, 2023, “Researcher Meredith Whittaker says AI’s biggest risk isn’t ‘consciousness’—it’s the corporations that control them.”
The last two existential AI panics
The term “autumn-years redemption tour” is striking, and while the reference to age could be viewed as problematic, it also hints at the money, honours, and acknowledgement that Hinton has enjoyed as an eminent scientist. I’ve covered two previous panics set off by eminent scientists. “Existential risk” is the title of my November 26, 2012 posting, which highlights Martin Rees’ efforts to found the Centre for the Study of Existential Risk at the University of Cambridge.
Rees is a big deal. From his Wikipedia entry, Note: Links have been removed,
Martin John Rees, Baron Rees of Ludlow OM FRS FREng FMedSci FRAS HonFInstP[10][2] (born 23 June 1942) is a British cosmologist and astrophysicist.[11] He is the fifteenth Astronomer Royal, appointed in 1995,[12][13][14] and was Master of Trinity College, Cambridge, from 2004 to 2012 and President of the Royal Society between 2005 and 2010.[15][16][17][18][19][20]
The next panic was set off by Stephen Hawking (1942 – 2018; also at the University of Cambridge, Wikipedia entry) a few years before he died. (Note: Rees, Hinton, and Hawking were all born within five years of each other and all have/had ties to the University of Cambridge. Interesting coincidence, eh?) From a January 9, 2015 article by Emily Chung for CBC news online,
Machines turning on their creators has been a popular theme in books and movies for decades, but very serious people are starting to take the subject very seriously. Physicist Stephen Hawking says, “the development of full artificial intelligence could spell the end of the human race.” Tesla Motors and SpaceX founder Elon Musk suggests that AI is probably “our biggest existential threat.”
Artificial intelligence experts say there are good reasons to pay attention to the fears expressed by big minds like Hawking and Musk — and to do something about it while there is still time.
Hawking made his most recent comments at the beginning of December [2014], in response to a question about an upgrade to the technology he uses to communicate. He relies on the device because he has amyotrophic lateral sclerosis, a degenerative disease that affects his ability to move and speak.
…
Popular works of science fiction – from the latest Terminator trailer, to the Matrix trilogy, to Star Trek’s borg – envision that beyond that irreversible historic event, machines will destroy, enslave or assimilate us, says Canadian science fiction writer Robert J. Sawyer.
Sawyer has written about a different vision of life beyond singularity [when machines surpass humans in general intelligence] — one in which machines and humans work together for their mutual benefit. But he also sits on a couple of committees at the Lifeboat Foundation, a non-profit group that looks at future threats to the existence of humanity, including those posed by the “possible misuse of powerful technologies” such as AI. He said Hawking and Musk have good reason to be concerned.
To sum up, the first panic was in 2012, the next in 2014/15, and the latest one began earlier this year (2023) with a letter. A March 29, 2023 Thomson Reuters news item on CBC news online provides information on the contents,
Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4, in an open letter citing potential risks to society and humanity.
Earlier this month, Microsoft-backed OpenAI unveiled the fourth iteration of its GPT (Generative Pre-trained Transformer) AI program, which has wowed users with its vast range of applications, from engaging users in human-like conversation to composing songs and summarizing lengthy documents.
The letter, issued by the non-profit Future of Life Institute and signed by more than 1,000 people including Musk, called for a pause on advanced AI development until shared safety protocols for such designs were developed, implemented and audited by independent experts.
…
Co-signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI heavyweights Yoshua Bengio, often referred to as one of the “godfathers of AI,” and Stuart Russell, a pioneer of research in the field.
According to the European Union’s transparency register, the Future of Life Institute is primarily funded by the Musk Foundation, as well as London-based effective altruism group Founders Pledge, and Silicon Valley Community Foundation.
The concerns come as EU police force Europol on Monday [March 27, 2023] joined a chorus of ethical and legal concerns over advanced AI like ChatGPT, warning about the potential misuse of the system in phishing attempts, disinformation and cybercrime.
Meanwhile, the U.K. government unveiled proposals for an “adaptable” regulatory framework around AI.
The government’s approach, outlined in a policy paper published on Wednesday [March 29, 2023], would split responsibility for governing artificial intelligence (AI) between its regulators for human rights, health and safety, and competition, rather than create a new body dedicated to the technology.
…
The engineers have chimed in, from an April 7, 2023 article by Margo Anderson for the IEEE (Institute of Electrical and Electronics Engineers) Spectrum magazine, Note: Links have been removed,
The open letter [published March 29, 2023], titled “Pause Giant AI Experiments,” was organized by the nonprofit Future of Life Institute and signed by more than 27,565 people (as of 8 May). It calls for cessation of research on “all AI systems more powerful than GPT-4.”
It’s the latest of a host of recent “AI pause” proposals including a suggestion by Google’s François Chollet of a six-month “moratorium on people overreacting to LLMs” in either direction.
In the news media, the open letter has inspired straight reportage, critical accounts for not going far enough (“shut it all down,” Eliezer Yudkowsky wrote in Time magazine), as well as critical accounts for being both a mess and an alarmist distraction that overlooks the real AI challenges ahead.
IEEE members have expressed a similar diversity of opinions.
…
There was an earlier open letter in January 2015 according to Wikipedia’s “Open Letter on Artificial Intelligence” entry, Note: Links have been removed,
In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts[1] signed an open letter on artificial intelligence calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent certain potential “pitfalls”: artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something which is unsafe or uncontrollable.[1] The four-paragraph letter, titled “Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter”, lays out detailed research priorities in an accompanying twelve-page document.
…
As for ‘Mr. ChatGPT’ or Sam Altman, CEO of OpenAI, while he didn’t sign the March 29, 2023 letter, he appeared before the US Congress to suggest that AI needs to be regulated, according to a May 16, 2023 news article by Mohar Chatterjee for Politico.
You’ll notice I’ve arbitrarily designated three AI panics by assigning their origins to eminent scientists. In reality, these concerns rise and fall in ways that don’t allow for such a tidy analysis. As Chung notes, science fiction regularly addresses this issue. For example, there’s my October 16, 2013 posting, “Wizards & Robots: a comic book encourages study in the sciences and maths and discussions about existential risk.” By the way, will.i.am (of the Black Eyed Peas band) was involved in the comic book project and he is a longtime supporter of STEM (science, technology, engineering, and mathematics) initiatives.
Finally (but not quite)
Puzzling, isn’t it? I’m not sure we’re asking the right questions but it’s encouraging to see that at least some are being asked.
Dr. Andrew Maynard in a May 12, 2023 essay for The Conversation (h/t May 12, 2023 item on phys.org) notes that ‘Luddites’ questioned technology’s inevitable progress and were vilified for doing so, Note: Links have been removed,
The term “Luddite” emerged in early 1800s England. At the time there was a thriving textile industry that depended on manual knitting frames and a skilled workforce to create cloth and garments out of cotton and wool. But as the Industrial Revolution gathered momentum, steam-powered mills threatened the livelihood of thousands of artisanal textile workers.
Faced with an industrialized future that threatened their jobs and their professional identity, a growing number of textile workers turned to direct action. Galvanized by their leader, Ned Ludd, they began to smash the machines that they saw as robbing them of their source of income.
It’s not clear whether Ned Ludd was a real person, or simply a figment of folklore invented during a period of upheaval. But his name became synonymous with rejecting disruptive new technologies – an association that lasts to this day.
Questioning doesn’t mean rejecting
Contrary to popular belief, the original Luddites were not anti-technology, nor were they technologically incompetent. Rather, they were skilled adopters and users of the artisanal textile technologies of the time. Their argument was not with technology, per se, but with the ways that wealthy industrialists were robbing them of their way of life.
…
In December 2015, Stephen Hawking, Elon Musk and Bill Gates were jointly nominated for a “Luddite Award.” Their sin? Raising concerns over the potential dangers of artificial intelligence.
The irony of three prominent scientists and entrepreneurs being labeled as Luddites underlines the disconnect between the term’s original meaning and its more modern use as an epithet for anyone who doesn’t wholeheartedly and unquestioningly embrace technological progress.
Yet technologists like Musk and Gates aren’t rejecting technology or innovation. Instead, they’re rejecting a worldview that all technological advances are ultimately good for society. This worldview optimistically assumes that the faster humans innovate, the better the future will be.
…
In an age of ChatGPT, gene editing and other transformative technologies, perhaps we all need to channel the spirit of Ned Ludd as we grapple with how to ensure that future technologies do more good than harm.
In fact, “Neo-Luddites” or “New Luddites” is a term that emerged at the end of the 20th century.
In 1990, the psychologist Chellis Glendinning published an essay titled “Notes toward a Neo-Luddite Manifesto.”
…
Then there are the Neo-Luddites who actively reject modern technologies, fearing that they are damaging to society. New York City’s Luddite Club falls into this camp. Formed by a group of tech-disillusioned Gen-Zers, the club advocates the use of flip phones, crafting, hanging out in parks and reading hardcover or paperback books. Screens are an anathema to the group, which sees them as a drain on mental health.
I’m not sure how many of today’s Neo-Luddites – whether they’re thoughtful technologists, technology-rejecting teens or simply people who are uneasy about technological disruption – have read Glendinning’s manifesto. And to be sure, parts of it are rather contentious. Yet there is a common thread here: the idea that technology can lead to personal and societal harm if it is not developed responsibly.
…
Getting back to where this started with nonhuman authors, Amelia Eqbal has written up an informal transcript of a March 16, 2023 CBC radio interview (radio segment is embedded) about ChatGPT-4 (the latest AI chatbot from OpenAI) between host Elamin Abdelmahmoud and tech journalist Alyssa Bereznak.
I was hoping to add a little more Canadian content, so in March 2023 and again in April 2023, I sent a question about whether there were any policies regarding nonhuman or AI authors to Kim Barnhardt at the Canadian Medical Association Journal (CMAJ). To date, there has been no reply but should one arrive, I will place it here.
In the meantime, I have this from Canadian writer, Susan Baxter in her May 15, 2023 blog posting “Coming soon: Robot Overlords, Sentient AI and more,”
…
The current threat looming (Covid having been declared null and void by the WHO*) is Artificial Intelligence (AI) which, we are told, is becoming too smart for its own good and will soon outsmart humans. Then again, given some of the humans I’ve met along the way that wouldn’t be difficult.
…
All this talk of scary-boo AI seems to me to have become the worst kind of cliché, one that obscures how our lives have become more complicated and more frustrating as apps and bots and cyber-whatsits take over.
The trouble with clichés, as Alain de Botton wrote in How Proust Can Change Your Life, is not that they are wrong or contain false ideas but more that they are “superficial articulations of good ones”. Cliches are oversimplifications that become so commonplace we stop noticing the more serious subtext. (This is rife in medicine where metaphors such as talk of “replacing” organs through transplants makes people believe it’s akin to changing the oil filter in your car. Or whatever it is EV’s have these days that needs replacing.)
…
Should you live in Vancouver (Canada) and be planning to attend a May 28, 2023 AI event, you may want to read Susan Baxter’s piece as a counterbalance to “Discover the future of artificial intelligence at this unique AI event in Vancouver,” a May 19, 2023 piece of sponsored content by Katy Brennan for the Daily Hive,
…
If you’re intrigued and eager to delve into the rapidly growing field of AI, you’re not going to want to miss this unique Vancouver event.
On Sunday, May 28 [2023], a Multiplatform AI event is coming to the Vancouver Playhouse — and it’s set to take you on a journey into the future of artificial intelligence.
The exciting conference promises a fusion of creativity, tech innovation, and thought-provoking insights, with talks from renowned AI leaders and concept artists, who will share their experiences and opinions.
Guests can look forward to intense discussions about AI’s pros and cons, hear real-world case studies, and learn about the ethical dimensions of AI, its potential threats to humanity, and the laws that govern its use.
Live Q&A sessions will also be held, where leading experts in the field will address all kinds of burning questions from attendees. There will also be a dynamic round table and several other opportunities to connect with industry leaders, pioneers, and like-minded enthusiasts.
…
This conference is being held at The Playhouse, 600 Hamilton Street, from 11 am to 7:30 pm; ticket prices range from $299 to $349 to $499 (depending on when you make your purchase). From the Multiplatform AI Conference homepage,
Event Speakers
Max Sills General Counsel at Midjourney
From Jan 2022 – present (Advisor – now General Counsel) – Midjourney – An independent research lab exploring new mediums of thought and expanding the imaginative powers of the human species (SF) Midjourney – a generative artificial intelligence program and service created and hosted by a San Francisco-based independent research lab Midjourney, Inc. Midjourney generates images from natural language descriptions, called “prompts”, similar to OpenAI’s DALL-E and Stable Diffusion. For now the company uses Discord Server as a source of service and, with huge 15M+ members, is the biggest Discord server in the world. In the two-things-at-once department, Max Sills also known as an owner of Open Advisory Services, firm which is set up to help small and medium tech companies with their legal needs (managing outside counsel, employment, carta, TOS, privacy). Their clients are enterprise level, medium companies and up, and they are here to help anyone on open source and IP strategy. Max is an ex-counsel at Block, ex-general manager of the Crypto Open Patent Alliance. Prior to that Max led Google’s open source legal group for 7 years.
…
So, the first speaker listed is a lawyer associated with Midjourney, a highly controversial generative artificial intelligence programme used to generate images. According to their entry on Wikipedia, the company is being sued, Note: Links have been removed,
…
On January 13, 2023, three artists – Sarah Andersen, Kelly McKernan, and Karla Ortiz – filed a copyright infringement lawsuit against Stability AI, Midjourney, and DeviantArt, claiming that these companies have infringed the rights of millions of artists, by training AI tools on five billion images scraped from the web, without the consent of the original artists.[32]
…
My October 24, 2022 posting highlights some of the issues with generative image programmes and Midjourney is mentioned throughout.
As I noted earlier, I’m glad to see more thought being put into the societal impact of AI and somewhat disconcerted by the hyperbole from the likes of Geoffrey Hinton and of Vancouver’s Multiplatform AI conference organizers. Mike Masnick put it nicely in his May 24, 2023 posting on TechDirt (Note 1: I’ve taken a paragraph out of context, his larger issue is about proposals for legislation; Note 2: Links have been removed),
…
Honestly, this is partly why I’ve been pretty skeptical about the “AI Doomers” who keep telling fanciful stories about how AI is going to kill us all… unless we give more power to a few elite people who seem to think that it’s somehow possible to stop AI tech from advancing. As I noted last month, it is good that some in the AI space are at least conceptually grappling with the impact of what they’re building, but they seem to be doing so in superficial ways, focusing only on the sci-fi dystopian futures they envision, and not things that are legitimately happening today from screwed up algorithms.
…
For anyone interested in the Canadian government attempts to legislate AI, there’s my May 1, 2023 posting, “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27).”
Addendum (June 1, 2023)
Another statement warning about runaway AI was issued on Tuesday, May 30, 2023. This was far briefer than the previous March 2023 warning, from the Center for AI Safety’s “Statement on AI Risk” webpage,
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war [followed by a list of signatories] …
Vanessa Romo’s May 30, 2023 article (with contributions from Bobby Allyn) for NPR ([US] National Public Radio) offers an overview of both warnings. Rae Hodge’s May 31, 2023 article for Salon offers a more critical view, Note: Links have been removed,
The artificial intelligence world faced a swarm of stinging backlash Tuesday morning, after more than 350 tech executives and researchers released a public statement declaring that the risks of runaway AI could be on par with those of “nuclear war” and human “extinction.” Among the signatories were some who are actively pursuing the profitable development of the very products their statement warned about — including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement from the non-profit Center for AI Safety said.
But not everyone was shaking in their boots, especially not those who have been charting AI tech moguls’ escalating use of splashy language — and those moguls’ hopes for an elite global AI governance board.
TechCrunch’s Natasha Lomas, whose coverage has been steeped in AI, immediately unravelled the latest panic-push efforts with a detailed rundown of the current table stakes for companies positioning themselves at the front of the fast-emerging AI industry.
“Certainly it speaks volumes about existing AI power structures that tech execs at AI giants including OpenAI, DeepMind, Stability AI and Anthropic are so happy to band and chatter together when it comes to publicly amplifying talk of existential AI risk. And how much more reticent to get together to discuss harms their tools can be seen causing right now,” Lomas wrote.
“Instead of the statement calling for a development pause, which would risk freezing OpenAI’s lead in the generative AI field, it lobbies policymakers to focus on risk mitigation — doing so while OpenAI is simultaneously crowdfunding efforts to shape ‘democratic processes for steering AI,'” Lomas added.
…
The use of scary language and fear as a marketing tool has a long history in tech. And, as the LA Times’ Brian Merchant pointed out in an April column, OpenAI stands to profit significantly from a fear-driven gold rush of enterprise contracts.
“[OpenAI is] almost certainly betting its longer-term future on more partnerships like the one with Microsoft and enterprise deals serving large companies,” Merchant wrote. “That means convincing more corporations that if they want to survive the coming AI-led mass upheaval, they’d better climb aboard.”
…
Fear, after all, is a powerful sales tool.
Romo’s May 30, 2023 article for NPR offers a good overview and, if you have the time, I recommend reading Hodge’s May 31, 2023 article for Salon in its entirety.
*ETA June 8, 2023: This sentence “What follows the ‘nonhuman authors’ is essentially a survey of situation/panic.” was added to the introductory paragraph at the beginning of this post.
I thought it best to break this up a bit. There are a couple of ‘objects’ still to be discussed but this is mostly the commentary part of this letter to you. (Here’s a link for anyone who stumbled here but missed Part 1.)
Ethics, the natural world, social justice, eeek, and AI
Dorothy Woodend, in her March 10, 2022 review for The Tyee, suggests some ethical issues in her critique of the ‘bee/AI collaboration’ and she’s not the only one with concerns. UNESCO (United Nations Educational, Scientific and Cultural Organization) has produced global recommendations for ethical AI (see my March 18, 2022 posting). More recently, there’s “Racist and sexist robots have flawed AI,” a June 23, 2022 posting, where researchers prepared a conference presentation and paper about deeply flawed AI still being used in robots.
Ultimately, the focus is always on humans and Woodend has extended the ethical AI conversation to include insects and the natural world. In short, something less human-centric.
My friend, this reference to the de Young exhibit may seem off topic but I promise it isn’t in more ways than one. The de Young Museum in San Francisco also held an AI and art show (February 22, 2020 – June 27, 2021) called “Uncanny Valley: Being Human in the Age of AI.” From the exhibitions page,
In today’s AI-driven world, increasingly organized and shaped by algorithms that track, collect, and evaluate our data, the question of what it means to be human [emphasis mine] has shifted. Uncanny Valley is the first major exhibition to unpack this question through a lens of contemporary art and propose new ways of thinking about intelligence, nature, and artifice. [emphasis mine]
Courtesy: de Young Museum [downloaded from https://deyoung.famsf.org/exhibitions/uncanny-valley]
As you can see, it hinted (perhaps?) at an attempt to see beyond human-centric AI. (BTW, I featured this ‘Uncanny Valley’ show in my February 25, 2020 posting where I mentioned Stephanie Dinkins [featured below] and other artists.)
Social justice
While the VAG show doesn’t see much past humans and AI, it does touch on social justice. In particular there’s Pod 15 featuring the Algorithmic Justice League (AJL). The group “combine[s] art and research to illuminate the social implications and harms of AI” as per their website’s homepage.
In Pod 9, Stephanie Dinkins’ video work with a robot (Bina48), which was also part of the de Young Museum ‘Uncanny Valley’ show, addresses some of the same issues.
Still of Stephanie Dinkins, “Conversations with Bina48,” 2014–present. Courtesy of the artist [downloaded from https://deyoung.famsf.org/stephanie-dinkins-conversations-bina48-0]
From the de Young Museum’s Stephanie Dinkins “Conversations with Bina48” April 23, 2020 article by Janna Keegan (Dinkins submitted the same work you see at the VAG show), Note: Links have been removed,
Transdisciplinary artist and educator Stephanie Dinkins is concerned with fostering AI literacy. The central thesis of her social practice is that AI, the internet, and other data-based technologies disproportionately impact people of color, LGBTQ+ people, women, and disabled and economically disadvantaged communities—groups rarely given a voice in tech’s creation. Dinkins strives to forge a more equitable techno-future by generating AI that includes the voices of multiple constituencies …
The artist’s ongoing Conversations with Bina48 takes the form of a series of interactions with the social robot Bina48 (Breakthrough Intelligence via Neural Architecture, 48 exaflops per second). The machine is the brainchild of Martine Rothblatt, an entrepreneur in the field of biopharmaceuticals who, with her wife, Bina, cofounded the Terasem Movement, an organization that seeks to extend human life through cybernetic means. In 2007 Martine commissioned Hanson Robotics to create a robot whose appearance and consciousness simulate Bina’s. The robot was released in 2010, and Dinkins began her work with it in 2014.
…
Part psychoanalytical discourse, part Turing test, Conversations with Bina48 also participates in a larger dialogue regarding bias and representation in technology. Although Bina Rothblatt is a Black woman, Bina48 was not programmed with an understanding of its Black female identity or with knowledge of Black history. Dinkins’s work situates this omission amid the larger tech industry’s lack of diversity, drawing attention to the problems that arise when a roughly homogenous population creates technologies deployed globally. When this occurs, writes art critic Tess Thackara, “the unconscious biases of white developers proliferate on the internet, mapping our social structures and behaviors onto code and repeating imbalances and injustices that exist in the real world.” One of the most appalling and public of these instances occurred when a Google Photos image-recognition algorithm mislabeled the faces of Black people as “gorillas.”
…
Eeek
You will find, as you go through the ‘imitation game’, a pod with a screen showing your movements through the rooms in real time. The installation is called “Creepers” (2021-22). The student team from Vancouver’s Centre for Digital Media (CDM) describes their project this way, from the CDM’s AI-driven Installation Piece for the Vancouver Art Gallery webpage,
Project Description
Kaleidoscope [team name] is designing an installation piece that harnesses AI to collect and visualize exhibit visitor behaviours, and interactions with art, in an impactful and thought-provoking way.
There’s no warning that you’re being tracked and you can see they’ve used facial recognition software to track your movements through the show. It’s claimed on the pod’s signage that they are deleting the data once you’ve left.
‘Creepers’ is an interesting approach to the ethics of AI. The name suggests that even the student designers were aware it was problematic.
In recovery from an existential crisis (meditations)
There’s something greatly ambitious about “The Imitation Game: Visual Culture in the Age of Artificial Intelligence” and walking up the VAG’s grand staircase affirms that ambition. Bravo to the two curators, Grenville and Entis, for an exhibition that presents a survey (or overview) of artificial intelligence, and its use in and impact on creative visual culture.
I’ve already enthused over the history (specifically Turing, Lovelace, Ovid), admitted to being mesmerized by Scott Eaton’s sculpture/AI videos, and confessed to a fascination (and mild repulsion) regarding Oxman’s honeycombs.
It’s hard to remember all of the ‘objects’ as the curators have offered a jumble of work, almost all of them on screens. Already noted, there’s Norbert Wiener’s The Moth (1949) and there are also a number of other computer-based artworks from the 1960s and 1970s. Plus, you’ll find works utilizing a GAN (generative adversarial network), an AI agent that is explained in the exhibit.
It’s worth going more than once to the show as there is so much to experience.
Why did they do that?
Dear friend, I’ve already commented on the poor flow through the show. It’s hard to tell if the curators intended the experience to be disorienting, but it verges on chaos, especially when the exhibition is crowded.
I’ve seen Grenville’s shows before. In particular there was “MashUp: The Birth of Modern Culture, a massive survey documenting the emergence of a mode of creativity that materialized in the late 1800s and has grown to become the dominant model of cultural production in the 21st century” and there was “KRAZY! The Delirious World of Anime + Manga + Video Games + Art.” As you can see from the description, he pulls together disparate works and ideas into a show for you to ‘make sense’ of them.
One of the differences between those shows and “The Imitation Game: …” is that most of us have some familiarity, whether we like it or not, with modern art/culture and anime/manga/etc. and can try to ‘make sense’ of it.
By contrast, artificial intelligence (which even experts have difficulty defining) occupies an entirely different set of categories, all of them associated with science/technology. This makes for a different kind of show, so the curators cannot rely on the audience’s understanding of the basics. It’s effectively an art/sci or art/tech show and, I believe, the first of its kind at the Vancouver Art Gallery. Unfortunately, the curators don’t seem to have changed their approach to accommodate that difference.
AI is also at the centre of a current panic over job loss, loss of personal agency, automated racism and sexism, etc. which makes the experience of viewing the show a little tense. In this context, their decision to commission and use ‘Creepers’ seems odd.
Where were Ai-Da and Dall-E-2 and the others?
Oh friend, I was hoping for a robot. Those roomba paintbots didn’t do much for me. All they did was lie there on the floor.
To be blunt, I wanted some fun and perhaps a bit of wonder and maybe a little vitality. I wasn’t necessarily expecting Ai-Da, an artistic robot, but something three dimensional and fun in this very flat, screen-oriented show would have been nice.
Ai-Da was at the Glastonbury Festival in the UK from June 23 to 26, 2022. Here’s Ai-Da and her Billie Eilish (one of the Glastonbury 2022 headliners) portrait. [downloaded from https://www.ai-darobot.com/exhibition]
Ai-Da was first featured here in a December 17, 2021 posting about performing poetry that she had written in honour of the 700th anniversary of poet Dante Alighieri’s death.
Named in honour of Ada Lovelace, Ai-Da visited the 2022 Venice Biennale as Leah Henrickson and Simone Natale describe in their May 12, 2022 article for Fast Company (Note: Links have been removed),
Ai-Da sits behind a desk, paintbrush in hand. She looks up at the person posing for her, and then back down as she dabs another blob of paint onto the canvas. A lifelike portrait is taking shape. If you didn’t know a robot produced it, this portrait could pass as the work of a human artist.
Ai-Da is touted as the “first robot to paint like an artist,” and an exhibition of her work, called Leaping into the Metaverse, opened at the Venice Biennale.
Ai-Da produces portraits of sitting subjects using a robotic hand attached to her lifelike feminine figure. She’s also able to talk, giving detailed answers to questions about her artistic process and attitudes toward technology. She even gave a TEDx talk about “The Intersection of Art and AI” in Oxford a few years ago. While the words she speaks are programmed, Ai-Da’s creators have also been experimenting with having her write and perform her own poetry.
DALL-E 2 is a new neural network [AI] algorithm that creates a picture from a short phrase or sentence that you provide. The program, which was announced by the artificial intelligence research laboratory OpenAI in April 2022, hasn’t been released to the public. But a small and growing number of people – myself included – have been given access to experiment with it.
As a researcher studying the nexus of technology and art, I was keen to see how well the program worked. After hours of experimentation, it’s clear that DALL-E – while not without shortcomings – is leaps and bounds ahead of existing image generation technology. It raises immediate questions about how these technologies will change how art is made and consumed. It also raises questions about what it means to be creative when DALL-E 2 seems to automate so much of the creative process itself.
…
A July 4, 2022 article “DALL-E, Make Me Another Picasso, Please” by Laura Lane for The New Yorker has a rebuttal to Ada Lovelace’s contention that creativity is uniquely human (Note: A link has been removed),
…
“There was this belief that creativity is this deeply special, only-human thing,” Sam Altman, OpenAI’s C.E.O., explained the other day. Maybe not so true anymore, he said. Altman, who wore a gray sweater and had tousled brown hair, was videoconferencing from the company’s headquarters, in San Francisco. DALL-E is still in a testing phase. So far, OpenAI has granted access to a select group of people—researchers, artists, developers—who have used it to produce a wide array of images: photorealistic animals, bizarre mashups, punny collages. Asked by a user to generate “a plate of various alien fruits from another planet photograph,” DALL-E returned something kind of like rambutans. “The rest of mona lisa” is, according to DALL-E, mostly just one big cliff. Altman described DALL-E as “an extension of your own creativity.”
AI artists first hit my radar in August 2018 when Christie’s Auction House advertised an art auction of a ‘painting’ by an algorithm (artificial intelligence). There’s more in my August 31, 2018 posting but, briefly, a French art collective, Obvious, submitted a painting, “Portrait of Edmond de Belamy,” that was created by an artificial intelligence agent and estimated to sell for $7,000 – $10,000. They weren’t even close. According to Ian Bogost’s March 6, 2019 article for The Atlantic, the painting sold for $432,500 in October 2018.
…
That posting also included AI artist, AICAN. Both artist-AI agents (Obvious and AICAN) are based on GANs (generative adversarial networks) for learning and eventual output. Both artist-AI agents work independently or with human collaborators on art works that are available for purchase.
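For readers who want to peek under the hood, here is a minimal sketch of the adversarial idea behind a GAN. It is a toy written in Python with PyTorch that learns a one-dimensional number distribution rather than images, and it is emphatically not the code behind Obvious or AICAN; the names and settings are my own.

```python
# A minimal sketch (not the artists' actual systems) of the GAN idea:
# a generator learns to produce samples that a discriminator cannot
# distinguish from real data. Here the "art" is just numbers near 2.0.
import torch
import torch.nn as nn

real_data = lambda n: torch.randn(n, 1) * 0.5 + 2.0   # the "real" distribution
noise = lambda n: torch.randn(n, 8)                    # random input for the generator

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Train the discriminator: real samples -> 1, generated samples -> 0
    real, fake = real_data(64), G(noise(64)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Train the generator: try to fool the discriminator into outputting 1
    fake = G(noise(64))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(noise(5)))  # after training, samples should drift toward 2.0
```

The tug-of-war is the whole trick: the generator only ‘improves’ because the discriminator keeps getting harder to fool.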
As might be expected not everyone is excited about AI and visual art. Sonja Drimmer, Professor of Medieval Art, University of Massachusetts at Amherst, provides another perspective on AI, visual art, and, her specialty, art history in her November 1, 2021 essay for The Conversation (Note: Links have been removed),
Over the past year alone, I’ve come across articles highlighting how artificial intelligence recovered a “secret” painting of a “lost lover” of Italian painter Modigliani, “brought to life” a “hidden Picasso nude”, “resurrected” Austrian painter Gustav Klimt’s destroyed works and “restored” portions of Rembrandt’s 1642 painting “The Night Watch.” The list goes on.
As an art historian, I’ve become increasingly concerned about the coverage and circulation of these projects.
They have not, in actuality, revealed one secret or solved a single mystery.
What they have done is generate feel-good stories about AI.
…
Take the reports about the Modigliani and Picasso paintings.
These were projects executed by the same company, Oxia Palus, which was founded not by art historians but by doctoral students in machine learning.
In both cases, Oxia Palus relied upon traditional X-rays, X-ray fluorescence and infrared imaging that had already been carried out and published years prior – work that had revealed preliminary paintings beneath the visible layer on the artists’ canvases.
The company edited these X-rays and reconstituted them as new works of art by applying a technique called “neural style transfer.” This is a sophisticated-sounding term for a program that breaks works of art down into extremely small units, extrapolates a style from them and then promises to recreate images of other content in that same style.
…
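Since ‘neural style transfer’ sounds more mysterious than it is, here is a hedged, schematic sketch of the idea Drimmer describes (the Gatys et al. approach), in Python with PyTorch. A tiny random convolutional stack stands in for the pretrained network a real system would use, and the images are random placeholders; this is an illustration of the technique, not Oxia Palus’s actual pipeline.

```python
# Schematic neural style transfer: optimize an image so its features match
# a content image while its feature correlations (Gram matrices) match a
# style image. A small random conv net stands in for a pretrained VGG here.
import torch
import torch.nn as nn

features = nn.Sequential(                       # stand-in feature extractor
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
)

def gram(feat):
    b, c, h, w = feat.shape
    f = feat.view(c, h * w)
    return f @ f.t() / (c * h * w)               # "style" = feature correlations

content_img = torch.rand(1, 3, 64, 64)           # placeholders for real images
style_img = torch.rand(1, 3, 64, 64)
canvas = content_img.clone().requires_grad_(True)

target_content = features(content_img).detach()
target_style = gram(features(style_img)).detach()
opt = torch.optim.Adam([canvas], lr=0.05)

for step in range(200):
    feat = features(canvas)
    loss = nn.functional.mse_loss(feat, target_content) \
         + 10.0 * nn.functional.mse_loss(gram(feat), target_style)
    opt.zero_grad(); loss.backward(); opt.step()

print(float(loss))                               # the canvas now blends content and style statistics
```

The ‘style’ is nothing more than correlations between feature channels; matching those statistics while holding onto the content features is what produces the ‘in the style of’ effect.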
As you can ‘see’ my friend, the topic of AI and visual art is a juicy one. In fact, I have another example in my June 27, 2022 posting, which is titled, “Art appraised by algorithm.” So, Grenville’s and Entis’ decision to focus on AI and its impact on visual culture is quite timely.
Visual culture: seeing into the future
The VAG Imitation Game webpage lists these categories of visual culture “animation, architecture, art, fashion, graphic design, urban design and video games …” as being represented in the show. Movies and visual art, not mentioned in the write up, are represented, while theatre and other performing arts are neither mentioned nor represented. That’s not a surprise.
In addition to an area of science/technology that’s not well understood even by experts, the curators took on the truly amorphous (and overwhelming) topic of visual culture. Given that even writing this commentary has been a challenge, I imagine pulling the show together was quite the task.
Grenville often grounds his shows in a history of the subject and, this time, it seems especially striking. You’re in a building that is effectively a 19th century construct and in galleries that reflect a 20th century ‘white cube’ aesthetic, while looking for clues into the 21st century future of visual culture employing technology that has its roots in the 19th century and, to some extent, began to flower in the mid-20th century.
Chung’s collaboration is one of the only ‘optimistic’ notes about the future and, as noted earlier, it bears a resemblance to Wiener’s 1949 ‘Moth’.
Overall, it seems we are being cautioned about the future. For example, Oxman’s work seems bleak (bees with no flowers to pollinate and living in an eternal spring). Adding in ‘Creepers’ and surveillance along with issues of bias and social injustice reflects hesitation and concern about what we will see, who sees it, and how it will be represented visually.
Learning about robots, automatons, artificial intelligence, and more
I wish the Vancouver Art Gallery (and Vancouver’s other art galleries) would invest a little more in audience education. A couple of tours during the week, by someone who may or may not know what they’re talking about, do not suffice. The extra material about Stephanie Dinkins and her work (“Conversations with Bina48,” 2014–present) came from the de Young Museum’s website. In my July 26, 2021 commentary on North Vancouver’s Polygon Gallery 2021 show “Interior Infinite,” I found background information for artist Zanele Muholi on the Tate Modern’s website. There is nothing on the VAG website that helps you to gain some perspective on the artists’ works.
It seems to me that if the VAG wants to be considered world class, it should conduct itself accordingly and beefing up its website with background information about their current shows would be a good place to start.
Robots, automata, and artificial intelligence
Prior to 1921, robots were known exclusively as automatons. These days, the word ‘automaton’ (or ‘automata’ in the plural) seems to be used to describe purely mechanical representations of humans from over 100 years ago whereas the word ‘robot’ can be either ‘humanlike’ or purely machine, e.g. a mechanical arm that performs the same function over and over. I have a good February 24, 2017 essay on automatons by Miguel Barral for OpenMind BBVA*, which provides some insight into the matter,
The concept of robot is relatively recent. The idea was introduced in 1921 by the Czech writer Karel Capek in his work R.U.R to designate a machine that performs tasks in place of man. But their predecessors, the automatons (from the Greek automata, or “mechanical device that works by itself”), have been the object of desire and fascination since antiquity. Some of the greatest inventors in history, such as Leonardo Da Vinci, have contributed to our fascination with these fabulous creations:
The Al-Jazari automatons
The earliest examples of known automatons appeared in the Islamic world in the 12th and 13th centuries. In 1206, the Arab polymath Al-Jazari, whose creations were known for their sophistication, described some of his most notable automatons: an automatic wine dispenser, a soap and towels dispenser and an orchestra-automaton that operated by the force of water. This latter invention was meant to liven up parties and banquets with music while floating on a pond, lake or fountain.
As the water flowed, it started a rotating drum with pegs that, in turn, moved levers whose movement produced different sounds and movements. As the pegs responsible for the musical notes could be exchanged for different ones in order to interpret another melody, it is considered one of the first programmable machines in history.
…
If you’re curious about automata, my friend, I found this Sept. 26, 2016 ABC radio news item about singer Roger Daltrey and his wife Heather’s auction of their collection of 19th century French automata (there’s an embedded video showcasing these extraordinary works of art). For more about automata, robots, and androids, there’s an excellent May 4, 2022 article by James Vincent, ‘A visit to the human factory; How to build the world’s most realistic robot‘ for The Verge; Vincent’s article is about Engineered Arts, the UK-based company that built Ai-Da.
AI is often used interchangeably with ‘robot’ but they aren’t the same. Not all robots have AI integrated into their processes. At its simplest, AI is an algorithm or set of algorithms, which may ‘live’ in a CPU and be effectively invisible, or ‘live’ in or make use of some kind of machine and/or humanlike body. As the experts have noted, artificial intelligence is a slippery concept.
*OpenMind is a non-profit project (see its About us page) run by Banco Bilbao Vizcaya Argentaria (BBVA), a Spanish multinational financial services company; the project disseminates information on robotics and so much more.*
You can’t always get what you want
My friend,
I expect many of the show’s shortcomings (as perceived by me) are due to money and/or scheduling issues. For example, Ai-Da was at the Venice Biennale and if there was a choice between the VAG and Biennale, I know where I’d be.
Even with those caveats in mind, it is a bit surprising that there were no examples of wearable technology. For example, Toronto’s Tapestry Opera recently performed R.U.R. A Torrent of Light (based on the word ‘robot’ from Karel Čapek’s play, R.U.R., ‘Rossumovi Univerzální Roboti’), from my May 24, 2022 posting,
I have more about ticket prices, dates, and location later in this post but first, here’s more about the opera and the people who’ve created it from the Tapestry Opera’s ‘R.U.R. A Torrent of Light’ performance webpage,
“This stunning new opera combines dance, beautiful multimedia design, a chamber orchestra including 100 instruments creating a unique electronica-classical sound, and wearable technology [emphasis mine] created with OCAD University’s Social Body Lab, to create an immersive and unforgettable science-fiction experience.”
And, from later in my posting,
“Despite current stereotypes, opera was historically a launchpad for all kinds of applied design technologies. [emphasis mine] Having the opportunity to collaborate with OCAD U faculty is an invigorating way to reconnect to that tradition and foster connections between art, music and design, [emphasis mine]” comments the production’s Director Michael Hidetoshi Mori, who is also Tapestry Opera’s Artistic Director.
That last quote brings me back to my comment about theatre and performing arts not being part of the show. Of course, the curators couldn’t do it all, but a website with my hoped-for background and additional information could have helped to solve the problem.
The absence of the theatrical and performing arts in the VAG’s ‘Imitation Game’ is a bit surprising as the Council of Canadian Academies (CCA) in their third assessment, “Competing in a Global Innovation Economy: The Current State of R&D in Canada” released in 2018 noted this (from my April 12, 2018 posting),
Canada, relative to the world, specializes in subjects generally referred to as the humanities and social sciences (plus health and the environment), and does not specialize as much as others in areas traditionally referred to as the physical sciences and engineering. Specifically, Canada has comparatively high levels of research output in Psychology and Cognitive Sciences, Public Health and Health Services, Philosophy and Theology, Earth and Environmental Sciences, and Visual and Performing Arts. [emphasis mine] It accounts for more than 5% of world research in these fields. Conversely, Canada has lower research output than expected in Chemistry, Physics and Astronomy, Enabling and Strategic Technologies, Engineering, and Mathematics and Statistics. The comparatively low research output in core areas of the natural sciences and engineering is concerning, and could impair the flexibility of Canada’s research base, preventing research institutions and researchers from being able to pivot to tomorrow’s emerging research areas. [p. xix Print; p. 21 PDF]
US-centric
My friend,
I was a little surprised that the show was so centered on work from the US given that Grenville has curated at least one show where there was significant input from artists based in Asia. Both Japan and Korea are very active with regard to artificial intelligence and it’s hard to believe that their artists haven’t kept pace. (I’m not as familiar with China and its AI efforts, other than in the field of facial recognition, but it’s hard to believe their artists aren’t experimenting.)
The Americans, of course, are very important developers in the field of AI but they are not alone and it would have been nice to have seen something from Asia and/or Africa and/or something from one of the other Americas. In fact, anything which takes us out of the same old, same old. (Luba Elliott wrote this (2019/2020/2021?) essay, “Artificial Intelligence Art from Africa and Black Communities Worldwide” on Aya Data if you want to get a sense of some of the activity on the African continent. Elliott does seem to conflate Africa and Black Communities; for some clarity you may want to check out the Wikipedia entry on Africanfuturism, which contrasts with this August 12, 2020 essay by Donald Maloba, “What is Afrofuturism? A Beginner’s Guide.” Maloba also conflates the two.)
As it turns out, Luba Elliott presented at the 2019 Montréal Digital Spring event, which brings me to Canada’s artificial intelligence and arts scene.
I promise I haven’t turned into a flag-waving zealot, my friend. It’s just odd there isn’t a bit more given that machine learning was pioneered at the University of Toronto. Here’s more about that (from the Wikipedia entry for Geoffrey Hinton),
Geoffrey Everest Hinton CC FRS FRSC[11] (born 6 December 1947) is a British-Canadian cognitive psychologist and computer scientist, most noted for his work on artificial neural networks.
…
Hinton received the 2018 Turing Award, together with Yoshua Bengio [Canadian scientist] and Yann LeCun, for their work on deep learning.[24] They are sometimes referred to as the “Godfathers of AI” and “Godfathers of Deep Learning“,[25][26] and have continued to give public talks together.[27][28]
…
Some of Hinton’s work was started in the US but since 1987, he has pursued his interests at the University of Toronto. He wasn’t proven right until 2012. Katrina Onstad’s February 29, 2018 article (Mr. Robot) for Toronto Life is a gripping read about Hinton and his work on neural networks. BTW, Yoshua Bengio (co-Godfather) is a Canadian scientist at the Université de Montréal and Yann LeCun (co-Godfather) is a French scientist at New York University.
Then, there’s another contribution: our government was the first in the world to develop a national artificial intelligence strategy. Adding those developments to the CCA ‘State of Science’ report findings about visual arts and performing arts, is there another word besides ‘odd’ to describe the lack of Canadian voices?
You’re going to point out the installation by Ben Bogart (a member of Simon Fraser University’s Metacreation Lab for Creative AI and instructor at the Emily Carr University of Art + Design (ECU)) but it’s based on the iconic US scifi film, 2001: A Space Odyssey. As for the other Canadian, Sougwen Chung, she left Canada pretty quickly to get her undergraduate degree in the US and has since moved to the UK. (You could describe hers as the quintessential success story, i.e., moving from Canada only to get noticed here after success elsewhere.)
In 2019, Bruce Grenville, Senior Curator at Vancouver Art Gallery, approached [the] Centre for Digital Media to collaborate on several industry projects for the forthcoming exhibition. Four student teams tackled the project briefs over the course of the next two years and produced award-winning installations that are on display until October 23 [2022].
…
Basically, my friend, it would have been nice to see other voices or, at the least, an attempt at representing other voices and visual cultures informed by AI. As for Canadian contributions, maybe put something on the VAG website?
Playing well with others
It’s always a mystery to me why the Vancouver cultural scene seems comprised of a set of silos or closely guarded kingdoms. Reaching out to the public library and other institutions such as Science World might have cost time but could have enhanced the show.
For example, one of the branches of the New York Public Library ran a programme called, “We are AI” in March 2022 (see my March 23, 2022 posting about the five-week course, which was run as a learning circle). The course materials are available for free (We are AI webpage) and I imagine that adding a ‘visual culture module’ wouldn’t be that difficult.
There is one (rare) example of some Vancouver cultural institutions getting together to offer an art/science programme and that was in 2017 when the Morris and Helen Belkin Gallery (at the University of British Columbia; UBC) hosted an exhibition of Santiago Ramon y Cajal’s work (see my Sept. 11, 2017 posting about the gallery show). Alongside that show, there was an ancillary event held by the folks at Café Scientifique at Science World, featuring a panel of professionals from UBC’s Faculty of Medicine and Dept. of Psychology discussing Cajal’s work.
In fact, where were the science and technology communities for this show?
On a related note, the 2022 ACM SIGGRAPH conference (August 7 – 11, 2022) is being held in Vancouver. (ACM is the Association for Computing Machinery; SIGGRAPH is for Special Interest Group on Computer Graphics and Interactive Techniques.) SIGGRAPH has been holding conferences in Vancouver every few years since at least 2011.
This is both an international conference and an exhibition (of art) and the whole thing seems to have kicked off on July 25, 2022. If you’re interested, the programme can be found here and registration here.
Last time SIGGRAPH was here the organizers seemed interested in outreach and they offered some free events.
In the end
It was good to see the show. The curators brought together some exciting material. As is always the case, there were some missed opportunities and a few blind spots. But all is not lost.
On July 27, 2022, the VAG held a virtual event with an artist,
… Gwenyth Chao to learn more about what happened to the honeybees and hives in Oxman’s Synthetic Apiary project. As a transdisciplinary artist herself, Chao will also discuss the relationship between art, science, technology and design. She will then guide participants to create a space (of any scale, from insect to human) inspired by patterns found in nature.
Hopefully there will be more events inspired by specific ‘objects’. Meanwhile, on August 12, 2022, the VAG is hosting,
… in partnership with the Canadian Music Centre BC, New Music at the Gallery is a live concert series hosted by the Vancouver Art Gallery that features an array of musicians and composers who draw on contemporary art themes.
Highlighting a selection of twentieth- and twenty-first-century music compositions, this second concert, inspired by the exhibition The Imitation Game: Visual Culture in the Age of Artificial Intelligence, will spotlight The Iliac Suite (1957), the first piece ever written using only a computer, and Kaija Saariaho’s Terra Memoria (2006), which is in a large part dependent on a computer-generated musical process.
…
It would be lovely if they could include an Ada Lovelace Day event. This is an international celebration held on October 11, 2022.
The Canadian Broadcasting Corporation’s (CBC) science television series, The Nature of Things, which has been broadcast since November 1960, explored the world of emotional, empathic and creative artificial intelligence (AI) in a Friday, November 19, 2021 telecast titled, The Machine That Feels,
The Machine That Feels explores how artificial intelligence (AI) is catching up to us in ways once thought to be uniquely human: empathy, emotional intelligence and creativity.
As AI moves closer to replicating humans, it has the potential to reshape every aspect of our world – but most of us are unaware of what looms on the horizon.
Scientists see AI technology as an opportunity to address inequities and make a better, more connected world. But it also has the capacity to do the opposite: to stoke division and inequality and disconnect us from fellow humans. The Machine That Feels, from The Nature of Things, shows viewers what they need to know about a field that is advancing at a dizzying pace, often away from the public eye.
What does it mean when AI makes art? Can AI interpret and understand human emotions? How is it possible that AI creates sophisticated neural networks that mimic the human brain? The Machine That Feels investigates these questions, and more.
In Vienna, composer Walter Werzowa has — with the help of AI — completed Beethoven’s previously unfinished 10th symphony. By feeding data about Beethoven, his music, his style and the original scribbles on the 10th symphony into an algorithm, AI has created an entirely new piece of art.
In Atlanta, Dr. Ayanna Howard and her robotics lab at Georgia Tech are teaching robots how to interpret human emotions. Where others see problems, Howard sees opportunity: how AI can help fill gaps in education and health care systems. She believes we need a fundamental shift in how we perceive robots: let’s get humans and robots to work together to help others.
At Tufts University in Boston, a new type of biological robot has been created: the xenobot. The size of a grain of sand, xenobots are grown from frog heart and skin cells, and combined with the “mind” of a computer. Programmed with a specific task, they can move together to complete it. In the future, they could be used for environmental cleanup, digesting microplastics and targeted drug delivery (like releasing chemotherapy compounds directly into tumours).
The film includes interviews with global leaders, commentators and innovators from the AI field, including Geoff Hinton, Yoshua Bengio, Ray Kurzweil and Douglas Coupland, who highlight some of the innovative and cutting-edge AI technologies that are changing our world.
The Machine That Feels focuses on one central question: in the flourishing age of artificial intelligence, what does it mean to be human?
I’ll get back to that last bit, “… what does it mean to be human?” later.
There’s a lot to appreciate in this 44 min. programme. As you’d expect, there was a significant chunk of time devoted to research being done in the US but Poland and Japan also featured and Canadian content was substantive. A number of tricky topics were covered and transitions from one topic to the next were smooth.
In the end credits, I counted over 40 source materials from Getty Images, Google Canada, Gatebox, amongst others. It would have been interesting to find out which segments were produced by CBC.
David Suzuki’s (programme host) script was well written and his narration was enjoyable, engaging, and non-intrusive. That last quality is not always true of CBC hosts who can fall into the trap of overdramatizing the text.
Drilling down
I have followed artificial intelligence stories in a passive way (i.e., I don’t seek them out) for many years. Even so, there was a lot of material in the programme that was new to me.
In the The Machine That Feels, a documentary from The Nature of Things, we meet Kondo Akihiko, a Tokyo resident who “married” a hologram of virtual pop singer Hatsune Miku using a certificate issued by Gatebox (the marriage isn’t recognized by the state, and Gatebox acknowledges the union goes “beyond dimensions”).
Overall, this Nature of Things episode embraces certainty, which means the question of what it means to be human is referenced rather than seriously discussed. It’s an unanswerable philosophical question, and the programme is ill-equipped to address it, especially since none of the commentators are philosophers or seem inclined to philosophize.
The programme presents AI as a juggernaut. Briefly mentioned is the notion that we need to make some decisions about how our juggernaut is developed and utilized. No one discusses how we go about making changes to systems that are already making critical decisions for us. (For more about AI and decision-making, see my February 28, 2017 posting and scroll down to the ‘Algorithms and big data’ subhead for Cathy O’Neil’s description of how important decisions that affect us are being made by AI systems. She is the author of the 2016 book, ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’; still a timely read.)
In fact, the programme’s tone is mostly one of breathless excitement. A few misgivings are expressed, e.g., one woman who has an artificial ‘texting friend’ (Replika; a chatbot app) noted that it can ‘get into your head’: she had a chat where her ‘friend’ told her that all of a woman’s worth is based on her body; she pushed back but intimated that someone more vulnerable could find that messaging difficult to deal with.
The sequence featuring Akihiko and his hologram ‘wife’ is followed by one suggesting that people might become more isolated and emotionally stunted as they interact with artificial friends. It should be noted, Akihiko’s wife is described as ‘perfect’. I gather perfection means that you are always understanding and have no needs of your own. She also seems to be about 18″ high.
Akihiko has obviously been asked about his ‘wife’ before as his answers are ready. They boil down to “there are many types of relationships” and there’s nothing wrong with that. It’s an intriguing thought which is not explored.
Also unexplored, these relationships could be said to resemble slavery. After all, you pay for these friends over which you have control. But perhaps that’s alright since AI friends don’t have consciousness. Or do they? In addition to not being able to answer the question, “what is it to be human?” we still can’t answer the question, “what is consciousness?”
AI and creativity
The Nature of Things team works fast. ‘Beethoven X – The AI Project’ had its first performance on October 9, 2021. (See my October 1, 2021 post ‘Finishing Beethoven’s unfinished 10th Symphony’ for more information on Ahmed Elgammal’s [Director of the Art & AI Lab at Rutgers University] technical perspective on the project. Briefly, Beethoven died before completing his 10th symphony, and a number of computer scientists, musicologists, AI specialists, and musicians collaborated to finish it.)
The one listener shown in the hall during a performance (Felix Mayer, music professor at the Technical University Munich) doesn’t consider the work to be a piece of music. He does have a point. Beethoven left some notes, but this ’10th’ is at least partly mathematical guesswork: a set of probabilities where an algorithm chooses which note comes next based on how likely it is to follow what came before.
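To make that ‘probabilities choosing the next note’ idea concrete, here is a toy Markov-chain sketch in Python. The note fragment is invented and this is nowhere near the Beethoven X team’s actual machinery, but the principle, count what tends to follow what and then sample, is the same in spirit.

```python
# A toy illustration (not the Beethoven X team's actual model) of choosing
# the next note from probabilities learned from earlier material.
import random
from collections import Counter, defaultdict

sketch = ["C", "E", "G", "E", "C", "D", "E", "F", "G", "E", "C"]  # hypothetical fragment

# Count how often each note follows each other note.
transitions = defaultdict(Counter)
for current, following in zip(sketch, sketch[1:]):
    transitions[current][following] += 1

def next_note(current):
    """Pick the next note in proportion to how often it followed `current`."""
    options = transitions[current]
    notes, weights = zip(*options.items())
    return random.choices(notes, weights=weights)[0]

# "Complete" a phrase by repeatedly sampling probable continuations.
phrase = ["C"]
for _ in range(8):
    phrase.append(next_note(phrase[-1]))
print(" ".join(phrase))
```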
There was another artist also represented in the programme. Puzzlingly, it was the still living Douglas Coupland. In my opinion, he’s better known as a visual artist than a writer (his Wikipedia entry lists him as a novelist first) but he has succeeded greatly in both fields.
What makes his inclusion in the Nature of Things ‘The Machine That Feels’ programme puzzling, is that it’s not clear how he worked with artificial intelligence in a collaborative fashion. Here’s a description of Coupland’s ‘AI’ project from a June 29, 2021 posting by Chris Henry on the Google Outreach blog (Note: Links have been removed),
… when the opportunity presented itself to explore how artificial intelligence (AI) inspires artistic expression — with the help of internationally renowned Canadian artist Douglas Coupland — the Google Research team jumped on it. This collaboration, with the support of Google Arts & Culture, culminated in a project called Slogans for the Class of 2030, which spotlights the experiences of the first generation of young people whose lives are fully intertwined with the existence of AI.
This collaboration was brought to life by first introducing Coupland’s written work to a machine learning language model. Machine learning is a form of AI that provides computer systems the ability to automatically learn from data. In this case, Google research scientists tuned a machine learning algorithm with Coupland’s 30-year body of written work — more than a million words — so it would familiarize itself with the author’s unique style of writing. From there, curated general-public social media posts on selected topics were added to teach the algorithm how to craft short-form, topical statements. [emphases mine]
Once the algorithm was trained, the next step was to process and reassemble suggestions of text for Coupland to use as inspiration to create twenty-five Slogans for the Class of 2030. [emphasis mine]
“I would comb through ‘data dumps’ where characters from one novel were speaking with those in other novels in ways that they might actually do. It felt like I was encountering a parallel universe Doug,” Coupland says. “And from these outputs, the statements you see here in this project appeared like gems. Did I write them? Yes. No. Could they have existed without me? No.” [emphases mine]
So, the algorithms crunched through Coupland’s word and social media texts to produce slogans, which Coupland then ‘combed through’ to pick out 25 slogans for the ‘Slogans For The Class of 2030’ project. (Note: In the programme, he says that he started a sentence and then the AI system completed that sentence with material gleaned from his own writings, which brings to mind Exquisite Corpse, a collaborative game for writers originated by the Surrealists, possibly as early as 1918.)
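For a rough sense of the ‘start a sentence and let the system finish it’ step, here is a hedged Python sketch using the Hugging Face transformers library. An off-the-shelf GPT-2 stands in for the model Google tuned on Coupland’s corpus, and the prompt is my own invention; the point is only that the machine proposes and the human still does the curating afterwards.

```python
# Hedged sketch of prompted completion; NOT the Google Research pipeline.
# An off-the-shelf GPT-2 stands in for a model fine-tuned on an author's corpus.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The class of 2030 will"          # hypothetical opening fragment
inputs = tok(prompt, return_tensors="pt")

# Sample several candidate completions; the artist would then comb
# through them and keep the "gems".
outputs = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=True,
    top_p=0.9,
    num_return_sequences=5,
    pad_token_id=tok.eos_token_id,
)
for seq in outputs:
    print(tok.decode(seq, skip_special_tokens=True))
```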
The ‘slogans’ project also reminds me of William S. Burroughs and the cut-up technique used in his work. From the William S. Burroughs Cut-up technique webpage on the Language is a Virus website (Thank you to Lake Rain Vajra for a very interesting website),
…
The cutup is a mechanical method of juxtaposition in which Burroughs literally cuts up passages of prose by himself and other writers and then pastes them back together at random. This literary version of the collage technique is also supplemented by literary use of other media. Burroughs transcribes taped cutups (several tapes spliced into each other), film cutups (montage), and mixed media experiments (results of combining tapes with television, movies, or actual events). Thus Burroughs’s use of cutups develops his juxtaposition technique to its logical conclusion as an experimental prose method, and he also makes use of all contemporary media, expanding his use of popular culture.
…
[Burroughs says] “All writing is in fact cut-ups. A collage of words read heard overheard. What else? Use of scissors renders the process explicit and subject to extension and variation. Clear classical prose can be composed entirely of rearranged cut-ups. Cutting and rearranging a page of written words introduces a new dimension into writing enabling the writer to turn images in cinematic variation. Images shift sense under the scissors smell images to sound sight to sound to kinesthetic. This is where Rimbaud was going with his color of vowels. And his “systematic derangement of the senses.” The place of mescaline hallucination: seeing colors tasting sounds smelling forms.
…
“The cut-ups can be applied to other fields than writing. Dr Neumann [emphasis mine] in his Theory of Games and Economic behavior introduces the cut-up method of random action into game and military strategy: assume that the worst has happened and act accordingly. … The cut-up method could be used to advantage in processing scientific data. [emphasis mine] How many discoveries have been made by accident? We cannot produce accidents to order. The cut-ups could add new dimension to films. Cut gambling scene in with a thousand gambling scenes all times and places. Cut back. Cut streets of the world. Cut and rearrange the word and image in films. There is no reason to accept a second-rate product when you can have the best. And the best is there for all. Poetry is for everyone . . .”
Here’s Burroughs on the history of writers and cutups (thank you to QUEDEAR for posting this clip),
You can hear Burroughs talk about the technique and how he started using it in 1959.
There is no explanation from Coupland as to how his project differs substantively from Burroughs’ cut-ups or a session of Exquisite Corpse. The use of a computer programme to crunch through data and give output doesn’t seem all that exciting. *(More about computers and chatbots at end of posting).* It’s hard to know if this was an interview situation where he wasn’t asked the question or if the editors decided against including it.
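And to underline how little machinery the cut-up needs, here is a toy version in a few lines of Python (my own illustration, not anything Burroughs or Coupland used): slice a passage into short runs of words, shuffle them, and paste them back together.

```python
# A toy, mechanical cut-up: Burroughs used scissors; a few lines of code
# do the same thing, which is partly the point.
import random

def cut_up(text, chunk_words=4, seed=None):
    words = text.split()
    chunks = [words[i:i + chunk_words] for i in range(0, len(words), chunk_words)]
    random.Random(seed).shuffle(chunks)
    return " ".join(word for chunk in chunks for word in chunk)

passage = ("All writing is in fact cut-ups a collage of words read heard "
           "overheard use of scissors renders the process explicit")
print(cut_up(passage, seed=1))
```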
Kazuo Ishiguro?
Given that Ishiguro’s 2021 book (Klara and the Sun) is focused on an artificial friend and raises the question of ‘what does it mean to be human’, as well as the related question, ‘what is the nature of consciousness’, it would have been interesting to hear from him. He spent a fair amount of time looking into research on machine learning in preparation for his book. Maybe he was too busy?
AI and emotions
The work being done by Georgia Tech’s Dr. Ayanna Howard and her robotics lab is fascinating. They are teaching robots how to interpret human emotions. The segment which features researchers teaching and interacting with robots, Pepper and Salt, also touches on AI and bias.
Watching two African American researchers talk about the ways in which AI is unable to read emotions on ‘black’ faces as accurately as ‘white’ faces is quite compelling. It also reinforces the uneasiness you might feel after the ‘Replika’ segment where an artificial friend informs a woman that her only worth is her body.
(Interestingly, Pepper and Salt are produced by Softbank Robotics, part of Softbank, a multinational Japanese conglomerate, [see a June 28, 2021 article by Ian Carlos Campbell for The Verge] whose entire management team is male according to their About page.)
While Howard is very hopeful about the possibilities of a machine that can read emotions, she doesn’t explore (on camera) any means for pushing back against bias other than training AI on more black faces to help the systems learn. Perhaps more representative management and coding teams in technology companies?
While the programme largely focused on AI as an algorithm on a computer, robots can be enabled by AI (as can be seen in the segment with Dr. Howard).
My February 14, 2019 posting features research with a completely different approach to emotions and machines,
“I’ve always felt that robots shouldn’t just be modeled after humans [emphasis mine] or be copies of humans,” he [Guy Hoffman, assistant professor at Cornell University)] said. “We have a lot of interesting relationships with other species. Robots could be thought of as one of those ‘other species,’ not trying to copy what we do but interacting with us with their own language, tapping into our own instincts.”
This brings the question back to, what is consciousness?
What scientists aren’t taught
Dr. Howard notes that scientists are not taught to consider the implications of their work. Her comment reminded me of a question I was asked many years ago after a presentation; it concerned whether or not science had any morality. (I said, no.)
My reply angered an audience member (a visual artist who was working with scientists at the time) as she took it personally and started defending scientists as good people who care and have morals and values. She failed to understand that the way in which we teach science conforms to a notion that somewhere there are scientific facts which are neutral and objective. Society and its values are irrelevant in the face of the larger ‘scientific truth’ and, as a consequence, you don’t need to teach or discuss how your values or morals affect that truth or what the social implications of your work might be.
Science is practiced without much if any thought to values. By contrast, there is the medical injunction, “Do no harm,” which suggests to me that someone recognized competing values. E.g., If your important and worthwhile research is harming people, you should ‘do no harm’.
The experts, the connections, and the Canadian content
It’s been a while since I’ve seen Ray Kurzweil mentioned but he seems to be getting more attention these days. (See this November 16, 2021 posting by Jonny Thomson titled, “The Singularity: When will we all become super-humans? Are we really only a moment away from “The Singularity,” a technological epoch that will usher in a new era in human evolution?” on The Big Think for more). Note: I will have a little more about evolution later in this post.
Interestingly, Kurzweil is employed by Google these days (see his Wikipedia entry, the column to the right). So is Geoffrey Hinton, another one of the experts in the programme (see Hinton’s Wikipedia entry, the column to the right, under Institutions).
I’m not sure about Yoshua Bengio’s relationship with Google, but he’s a professor at the Université de Montréal and the Scientific Director for Mila (Quebec’s artificial intelligence research institute) and IVADO (Institut de valorisation des données). Note: IVADO is not particularly relevant to what’s being discussed in this post.
As for Mila, Google’s Official Canada Blog notes, in a November 21, 2016 posting, a $4.5M grant to the institution,
Google invests $4.5 Million in Montreal AI Research
…
A new grant from Google for the Montreal Institute for Learning Algorithms (MILA) will fund seven faculty across a number of Montreal institutions and will help tackle some of the biggest challenges in machine learning and AI, including applications in the realm of systems that can understand and generate natural language. In other words, better understand a fan’s enthusiasm for Les Canadien [sic].
Google is expanding its academic support of deep learning at MILA, renewing Yoshua Bengio’s Focused Research Award and offering Focused Research Awards to MILA faculty at University of Montreal and McGill University:
Google reaffirmed their commitment to Mila in 2020 with a grant worth almost $4M (from a November 13, 2020 posting on the Mila website, Note: A link has been removed),
Google Canada announced today [November 13, 2020] that it will be renewing its funding of Mila – Quebec Artificial Intelligence Institute, with a generous pledge of nearly $4M over a three-year period. Google previously invested $4.5M US in 2016, enabling Mila to grow from 25 to 519 researchers.
In a piece written for Google’s Official Canada Blog, Yoshua Bengio, Mila Scientific Director, says that this year marked a “watershed moment for the Canadian AI community,” as the COVID-19 pandemic created unprecedented challenges that demanded rapid innovation and increased interdisciplinary collaboration between researchers in Canada and around the world.
“COVID-19 has changed the world forever and many industries, from healthcare to retail, will need to adapt to thrive in our ‘new normal.’ As we look to the future and how priorities will shift, it is clear that AI is no longer an emerging technology but a useful tool that can serve to solve world problems. Google Canada recognizes not only this opportunity but the important task at hand and I’m thrilled they have reconfirmed their support of Mila with an additional $3.95 million funding grant until 22.”
– Yoshua Bengio, for Google’s Official Canada Blog
Interesting, eh? Of course, Douglas Coupland is working with Google, presumably for money, and that would connect over 50% of the Canadian content (Douglas Coupland, Yoshua Bengio, and Geoffrey Hinton; Kurzweil is an American) in the programme to Google.
My hat’s off to Google’s marketing communications and public relations teams.
Anthony Morgan of Science Everywhere also provided some Canadian content. His LinkedIn profile indicates that he’s working on a PhD in molecular science, which is described this way, “My work explores the characteristics of learning environments, that support critical thinking and the relationship between critical thinking and wisdom.”
Morgan is also the founder and creative director of Science Everywhere, from his LinkedIn profile, “An events & media company supporting knowledge mobilization, community engagement, entrepreneurship and critical thinking. We build social tools for better thinking.”
There is this from his LinkedIn profile,
I develop, create and host engaging live experiences & media to foster critical thinking.
I’ve spent my 15+ years studying and working in psychology and science communication, thinking deeply about the most common individual and societal barriers to critical thinking. As an entrepreneur, I lead a team to create, develop and deploy cultural tools designed to address those barriers. As a researcher I study what we can do to reduce polarization around science.
There’s a lot more to Morgan (do look him up; he has connections to the CBC and other media outlets). The difficulty is: why was he chosen to talk about artificial intelligence and emotions and creativity when he doesn’t seem to know much about the topic? He does mention GPT-3, OpenAI’s large language model for generating text. He seems to be acting as an advocate for AI although he offers this bit of almost cautionary wisdom, “… algorithms are sets of instructions.” (You can find out more about GPT-3 in my April 27, 2021 posting. There’s also this November 26, 2021 posting [The Inherent Limitations of GPT-3] by Andrey Kurenkov, a PhD student with the Stanford [University] Vision and Learning Lab.)
Most of the cautionary commentary comes from Luke Stark, assistant professor at Western [Ontario] University’s Faculty of Information and Media Studies. He’s the one who mentions stunted emotional growth.
Before moving on, there is another set of connections through the Pan-Canadian Artificial Intelligence Strategy, a Canadian government science funding initiative announced in the 2017 federal budget. The funds allocated to the strategy are administered by the Canadian Institute for Advanced Research (CIFAR). Yoshua Bengio through Mila is associated with the strategy and CIFAR, as is Geoffrey Hinton through his position as Chief Scientific Advisor for the Vector Institute.
Evolution
Getting back to “The Singularity: When will we all become super-humans? Are we really only a moment away from “The Singularity,” a technological epoch that will usher in a new era in human evolution?” Xenobots point in a disconcerting (for some of us) evolutionary direction.
I featured the work, which is being done at Tufts University in the US, in my June 21, 2021 posting, which includes an embedded video,
Last year, a team of biologists and computer scientists from Tufts University and the University of Vermont (UVM) created novel, tiny self-healing biological machines from frog cells called “Xenobots” that could move around, push a payload, and even exhibit collective behavior in the presence of a swarm of other Xenobots.
Get ready for Xenobots 2.0.
Also from an excerpt in the posting, the team has “created life forms that self-assemble a body from single cells, do not require muscle cells to move, and even demonstrate the capability of recordable memory.”
Memory is key to intelligence, and this work introduces the notion of ‘living’ robots, which leads to questions about what constitutes life. ‘The Machine That Feels’ is already grappling with far too many questions to address this development, but introducing the research here might have laid the groundwork for the next episode, The New Human, telecast on November 26, 2021,
…
While no one can be certain what will happen, evolutionary biologists and statisticians are observing trends that could mean our future feet only have four toes (so long, pinky toe) or our faces may have new combinations of features. The new humans might be much taller than their parents or grandparents, or have darker hair and eyes.
And while evolution takes a lot of time, we might not have to wait too long for a new version of ourselves.
Technology is redesigning the way we look and function — at a much faster pace than evolution. We are merging with technology more than ever before: our bodies may now have implanted chips, smart limbs, exoskeletons and 3D-printed organs. A revolutionary gene editing technique has given us the power to take evolution into our own hands and alter our own DNA. How long will it be before we are designing our children?
Although the story about the xenobots doesn’t say so, we could also take the evolution of another species into our own hands.
David Suzuki, where are you?
Our programme host, David Suzuki, surprised me. I thought that, as an environmentalist, he’d point out that the huge amount of computing power needed for artificial intelligence, as mentioned in the programme, constitutes an environmental issue. I also would have expected that a geneticist like Suzuki might have some concerns about xenobots, but perhaps that’s being saved for the next episode (The New Human) of the Nature of Things.
Artificial stupidity
Thanks to Will Knight for introducing me to the term ‘artificial stupidity’. Knight is a senior writer covering artificial intelligence for WIRED magazine. According to the term’s Wikipedia entry,
Artificial stupidity is commonly used as a humorous opposite of the term artificial intelligence (AI), often as a derogatory reference to the inability of AI technology to adequately perform its tasks. However, within the field of computer science, artificial stupidity is also used to refer to a technique of “dumbing down” computer programs in order to deliberately introduce errors in their responses.
Knight was using the term in its humorous, derogatory form.
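To make the computer-science sense of the term concrete, here is a minimal sketch (mine, not Knight’s or Wikipedia’s) of ‘dumbing down’ a game-playing bot: some fraction of the time it ignores its best-scoring move and plays a random legal move instead, a common way of making computer opponents beatable.

```python
# Minimal sketch of 'artificial stupidity' in the computer-science sense:
# deliberately degrade a game bot so that, blunder_rate of the time, it
# ignores its best-scoring move and plays a random legal move instead.
import random

def choose_move(legal_moves, score_move, blunder_rate=0.3):
    """Usually pick the best-scoring move; sometimes blunder on purpose."""
    if random.random() < blunder_rate:
        return random.choice(legal_moves)    # deliberate error
    return max(legal_moves, key=score_move)  # best available play

# Toy usage (hypothetical game): moves are numbers, scored by their value.
moves = [1, 5, 3, 9, 2]
print(choose_move(moves, score_move=lambda move: move))
```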
Finally
The episode certainly got me thinking, if not quite in the way its producers might have hoped. ‘The Machine That Feels’ is a glossy, pretty well-researched piece of infotainment.
To be blunt, I like and have no problems with infotainment but it can be seductive. I found it easier to remember the artificial friends, wife, xenobots, and symphony than the critiques and concerns.
Hopefully, ‘The Machine That Feels’ stimulates more interest in some very important topics. If you missed the telecast, you can catch the episode here.
For anyone curious about predictive policing, which was mentioned in the Ayanna Howard segment, see my November 23, 2017 posting about Vancouver’s plunge into AI and car theft.
*ETA December 6, 2021: One of the first ‘chatterbots’ was ELIZA, a computer programme developed from 1964 to 1966. The most famous ELIZA script was DOCTOR, where the programme simulated a therapist. Many early users believed ELIZA understood and could respond as a human would, despite insistence otherwise from Joseph Weizenbaum, the programme’s creator.
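For the curious, ELIZA’s apparent understanding came from keyword matching plus pronoun ‘reflection’ (turning “my” into “your,” “I” into “you,” and so on). Here’s a minimal sketch in that spirit, a few hand-written rules of my own rather than Weizenbaum’s actual DOCTOR script:

```python
# Minimal ELIZA-style sketch: match a keyword pattern, 'reflect' the captured
# fragment from first person to second person, and slot it into a canned
# therapist-style reply. Toy rules only, not Weizenbaum's DOCTOR script.
import re

REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r".*", "Please go on."),                 # fallback when nothing matches
]

def reflect(fragment):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(sentence):
    sentence = sentence.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, sentence)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I feel anxious about my work."))
# -> Why do you feel anxious about your work?
```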
I think the French title for this call is more informative: “L’UNESCO et Mila s’associent pour lancer un appel à publications afin de mettre en lumière les faiblesses du développement de l’IA [l’intelligence artificielle].” Here’s my translation (in advance, apologies to all who find it clumsy): ‘UNESCO (United Nations Educational, Scientific, and Cultural Organization) and Mila (Montreal Institute for Learning Algorithms) are issuing a joint call for papers illuminating weaknesses in the development of artificial intelligence.’
UNESCO in cooperation with Mila-Quebec Artificial Intelligence Institute [?], is launching a Call for Proposals to identify blind spots in AI Policy and Programme Development. The collective work will explore creative, novel and far-reaching approaches to tackling blind spots in AI.
All contributors are invited to answer the same question: what are the blind spots on which we must shed light in order for AI to benefit all?
Issues can address 1) blind spots in the development of AI as a technology 2) blind spots in the development of AI as a sector, and 3) blind spots in the development of public policies, global governance, and regulation for AI. There are no limits to the subjects to be addressed. These blind spots could include issues ranging from science fiction and the future of AI, creative deep fakes and the future of misinformation, AI and the future of data driven humanitarian aid, indigenous knowledge and AI, and gender-based violence and sex robots. Proposals can be in creative formats, and the call for proposals is open to individuals from all academic backgrounds and sectors. Proposals from all stakeholder groups, particularly marginalized and underrepresented groups, are encouraged, as well as proposals from authors from the global south and innovative formats (artwork, cartoons, videos, etc).
Call for proposals are open until 2 May 2021.
Selected proposals will be confirmed by 25 May.
Final proposals, if in written format, should be between 5000-7000 words and should be written in a style that is accessible to non-AI specialists and received by 1 September 2021.
To ensure inclusivity and a diversity of voices, for accepted contributions outside of academia, authors may request financial support available on a needs-based basis up to 1000 usd.
I really appreciate the breadth of the call with a range of blind spots such as “science fiction and the future of AI, creative deep fakes and the future of misinformation, AI and the future of data driven humanitarian aid, indigenous knowledge and AI, and gender-based violence and sex robots” and, presumably, anything the convenors had not considered.
As well, they haven’t confined themselves to the ‘same old, same old’ contributors, “all stakeholder groups, particularly marginalized and underrepresented groups, are encouraged, as well as proposals from authors from the global south and innovative formats (artwork, cartoons, videos, etc).”
I’m glad to see a refreshing approach being taken to a call for proposals. I wish them good luck.
The Québec connection
Mila (Montreal Institute for Learning Algorithms), UNESCO’s co-host for this call, was founded in 1993 according to its About Mila page,
Founded in 1993 by Professor Yoshua Bengio of the Université de Montréal, Mila is a research institute in artificial intelligence that rallies over 500 researchers specializing in the field of machine learning. Based in Montreal, Mila’s mission is to be a global pole for scientific advances that inspire innovation and the development of AI for the benefit of all.
Since 2017, [emphasis mine] Mila is the result of a partnership between the Université de Montréal and McGill University, closely linked with Polytechnique Montréal and HEC Montréal. Today, Mila gathers in its offices a vibrant community of professors, students, industrial partners and startups working in AI, making the institute the world’s largest academic research center in machine learning.
Mila, a non-profit organization, is internationally recognized for its significant contributions to machine learning, especially in the areas of language modelling, machine translation, object recognition and generative models.
Unmentioned is the fact that the Pan-Canadian Artificial Intelligence (AI) Strategy was created and funded by the Canadian federal government in 2017. One of the beneficiaries was Mila. (Odd how 2017 was the year Mila found so many academic partners in its home province.) From the Pan-Canadian AI strategy webpage on the Invest Canada website (Note: Links have been removed),
The artificial intelligence (AI) and machine learning revolution is well underway, and Canada is at its forefront. From top-ranked educational institutions and market-leading tech companies to world-renowned researchers, Canada’s AI ecosystems are leading global AI developments.
To continue to foster this growth and maintain its leadership position, Canada launched the $125M Pan-Canadian Artificial Intelligence Strategy in 2017—making it the first country to release a national AI strategy.
…
The Pan-Canadian AI Strategy is founded on a partnership between the Canadian Institute for Advanced Research (CIFAR) and the three centres of excellence: the Alberta Machine Intelligence Institute (AMII) in Edmonton, the Vector Institute in Toronto, and the Montreal Institute for Learning Algorithms (Mila) [all emphases mine] in Montreal. Together, they provide the support, resources, and talent for AI innovation and investment.
I don’t know where “Mila-Quebec Artificial Intelligence Institute” comes from. It’s not on their own website and I’ve never seen Mila called that anywhere other than on this UNESCO call.