This is going to be a jam-packed posting featuring the AI experts at a Canadian Science Policy Centre (CSPC) virtual panel, a look back at a ‘testy’ exchange between Yoshua Bengio (one of Canada’s godfathers of AI) and a former diplomat from China, an update on Canada’s Minister of Artificial Intelligence and Digital Innovation, Evan Solomon, and his latest AI push, and a missive from the BC artificial intelligence community.
A Canadian Science Policy Centre AI panel on November 11, 2025
The Canadian Science Policy Centre (CSPC) provides an October 9, 2025 update on an upcoming virtual panel being held on Remembrance Day,
[AI-Driven Misinformation Across Sectors: Addressing a Cross-Societal Challenge]
Upcoming Virtual Panel[s]: November 11 [2025]
Artificial Intelligence is transforming how information is created and trusted, offering immense benefits across sectors like healthcare, education, finance, and public discourse—yet also amplifying risks such as misinformation, deepfakes, and scams that threaten public trust. This panel brings together experts from diverse fields [emphasis mine] to examine the manifestations and impacts of AI-driven misinformation and to discuss policy, regulatory, and technical solutions [emphasis mine]. The conversation will highlight practical measures—from digital literacy and content verification to platform accountability—aimed at strengthening resilience in Canada and globally.
For more information on the panel and to register, click below.
Odd timing for this event. Moving on, I found more information on the CSPC’s webpage for this event, Note: Unfortunately, links to the moderator’s and speakers’ bios could not be copied here,
Canadian Science Policy Centre Email info@sciencepolicy.ca
…
This panel brings together cross-sectoral experts to examine how AI-driven misinformation manifests in their respective domains, its consequences, and how policy, regulation, and technical interventions can help mitigate harm. The discussion will explore practical pathways for action, such as digital literacy, risk audits, content verification technologies, platform responsibility, and regulatory frameworks. Attendees will leave with a nuanced understanding of both the risks and the resilience strategies being explored in Canada and globally.
[Moderator]
Dr. Michael Geist
Canada Research Chair in Internet & E-commerce Law, University of Ottawa See Bio
[Panelists]
Dr. Plinio Morita
Associate Professor / Director, Ubiquitous Health Technology Lab, University of Waterloo …
Dr. Nadia Naffi
Université Laval — Associate Professor of Educational Technology and expert on building human agency against AI-augmented disinformation and deepfakes. See Bio
Dr. Jutta Treviranus
Director, Inclusive Design Research Centre, OCAD U, Expert on AI misinformation in the Education sector and schools. See Bio
Dr. Fenwick McKelvey
Concordia University — Expert in political bots, information flows, and Canadian tech governance See Bio
Michael Geist has his own blog/website featuring posts on his areas of interest and his podcast, Law Bytes. Jutta Treviranus is mentioned in my October 13, 2025 posting as a participant in “Who’s afraid of AI? Arts, Sciences, and the Futures of Intelligence,” a conference (October 23 – 24, 2025) and arts festival at the University of Toronto (scroll down to find it). She’s scheduled for a session on Thursday, October 23, 2025.
China, Canada, and the AI Action summit in February 2025
Zoe Kleinman’s February 10, 2025 article for the British Broadcasting Corporation (BBC) news online website also notes the encounter,
A former Chinese official poked fun at a major international AI safety report led by “AI Godfather” professor Yoshua Bengio and co-authored by 96 global experts – in front of him.
Fu Ying, former vice minister of foreign affairs and once China’s UK ambassador, is now an academic at Tsinghua University in Beijing.
The pair were speaking at a panel discussion ahead of a two-day global AI summit starting in Paris on Monday [February 10, 2025].
The aim of the summit is to unite world leaders, tech executives, and academics to examine AI’s impact on society, governance, and the environment.
Fu Ying began by thanking Canada’s Prof Bengio for the “very, very long” document, adding that the Chinese translation stretched to around 400 pages and she hadn’t finished reading it.
She also had a dig at the title of the AI Safety Institute – of which Prof Bengio is a member.
China now has its own equivalent; but they decided to call it The AI Development and Safety Network, she said, because there are lots of institutes already but this wording emphasised the importance of collaboration.
The AI Action Summit is welcoming guests from 80 countries, with OpenAI chief executive Sam Altman, Microsoft president Brad Smith and Google chief executive Sundar Pichai among the big names in US tech attending.
Elon Musk is not on the guest list but it is currently unknown whether he will decide to join them. [As of February 13, 2025, Mr. Musk did not attend the summit, which ended February 11, 2025.]
A key focus is regulating AI in an increasingly fractured world. The summit comes weeks after a seismic industry shift as China’s DeepSeek unveiled a powerful, low-cost AI model, challenging US dominance.
The pair’s heated exchanges were a symbol of global political jostling in the powerful AI arms race, but Fu Ying also expressed regret about the negative impact of current hostilities between the US and China on the progress of AI safety.
…
She gave a carefully-crafted glimpse behind the curtain of China’s AI scene, describing an “explosive period” of innovation since the country first published its AI development plan in 2017, five years before ChatGPT became a viral sensation in the west.
She added that “when the pace [of development] is rapid, risky stuff occurs” but did not elaborate on what might have taken place.
“The Chinese move faster [than the west] but it’s full of problems,” she said.
Fu Ying argued that building AI tools on foundations which are open source, meaning everyone can see how they work and therefore contribute to improving them, was the most effective way to make sure the tech did not cause harm.
Most of the US tech giants do not share the tech which drives their products.
Open source offers humans “better opportunities to detect and solve problems”, she said, adding that “the lack of transparency among the giants makes people nervous”.
But Prof Bengio disagreed.
His view was that open source also left the tech wide open for criminals to misuse.
He did however concede that “from a safety point of view”, it was easier to spot issues with the viral Chinese AI assistant DeepSeek, which was built using open source architecture, than ChatGPT, whose code has not been shared by its creator OpenAI.
…
Interesting, non? You can read more about Bengio’s views in an October 1, 2025 article by Rae Witte for Futurism.
In a Policy Forum, Yue Zhu and colleagues provide an overview of China’s emerging regulation for artificial intelligence (AI) technologies and its potential contributions to global AI governance. Open-source AI systems from China are rapidly expanding worldwide, even as the country’s regulatory framework remains in flux. In general, AI governance suffers from fragmented approaches, a lack of clarity, and difficulty reconciling innovation with risk management, making global coordination especially hard in the face of rising controversy. Although no official AI law has yet been enacted, experts in China have drafted two influential proposals – the Model AI Law and the AI Law (Scholar’s Proposal) – which serve as key references for ongoing policy discussions. As the nation’s lawmakers prepare to draft a consolidated AI law, Zhu et al. note that the decisions will shape not only China’s innovation, but also global collaboration on AI safety, openness, and risk mitigation. Here, the authors discuss China’s emerging AI regulation as structured around 6 pillars, which, combined, stress exemptive laws, efficient adjudication, and experimentalist requirements, while safeguarding against extreme risks. This framework seeks to balance responsible oversight with pragmatic openness, allowing developers to innovate for the long term and collaborate across the global research community. According to Zhu et al., despite the need for greater clarity, harmonization, and simplification, China’s evolving model is poised to shape future legislation and contribute meaningfully to global AI governance by promoting both safety and innovation at a time when international cooperation on extreme risks is urgently needed.
Here’s a link to and a citation for the paper,
China’s emerging regulation toward an open future for AI by Yue Zhu, Bo He, Hongyu Fu, Naying Hu, Shaoqing Wu, Taolue Zhang, Xinyi Liu, Gang Xu, Linghan Zhang, and Hui Zhou. Science, 9 Oct 2025, Vol. 390, Issue 6769, pp. 132-135. DOI: 10.1126/science.ady7922
This paper is behind a paywall.
No mention of Fu Ying or China’s ‘The AI Development and Safety Network’ but perhaps that’s in the paper.
Canada and its Minister of AI and Digital Innovation
Evan Solomon (born April 20, 1968) is a Canadian politician and broadcaster who has been the minister of artificial intelligence and digital innovation since May 2025. A member of the Liberal Party, Solomon was elected as the member of Parliament (MP) for Toronto Centre in the April 2025 election.
He was the host of The Evan Solomon Show on Toronto-area talk radio station CFRB, and a writer for Maclean’s magazine. He was the host of CTV’s national political news programs Power Play and Question Period. In October 2022, he moved to New York City to accept a position with the Eurasia Group as publisher of GZERO Media. Solomon continued with CTV News as a “special correspondent” reporting on Canadian politics and global affairs.
…
Had you asked me what background one needs to be a ‘Minister of Artificial Intelligence and Digital Innovation’, media would not have been my first thought. That said, sometimes people can surprise you.
Solomon appears to be an enthusiast if a June 10, 2025 article by Anja Karadeglija for The Canadian Press is to be believed,
Canada’s new minister of artificial intelligence said Tuesday [June 10, 2025] he’ll put less emphasis on AI regulation and more on finding ways to harness the technology’s economic benefits [emphases mine].
In his first speech since becoming Canada’s first-ever AI minister, Evan Solomon said Canada will move away from “over-indexing on warnings and regulation” to make sure the economy benefits from AI.
His regulatory focus will be on data protection and privacy, he told the audience at an event in Ottawa Tuesday morning organized by the think tank Canada 2020.
Solomon said regulation isn’t about finding “a saddle to throw on the bucking bronco called AI innovation. That’s hard. But it is to make sure that the horse doesn’t kick people in the face. And we need to protect people’s data and their privacy.”
The previous government introduced a privacy and AI regulation bill that targeted high-impact AI systems. It did not become law before the election was called.
That bill is “not gone, but we have to re-examine in this new environment where we’re going to be on that,” Solomon said.
He said constraints on AI have not worked at the international level.
“It’s really hard. There’s lots of leakages,” he said. “The United States and China have no desire to buy into any constraint or regulation.”
That doesn’t mean regulation won’t exist, he said, but it will have to be assembled in steps.
…
Solomon’s comments follow a global shift among governments to focus on AI adoption and away from AI safety and governance.
The first global summit focusing on AI safety was held in 2023 as experts warned of the technology’s dangers — including the risk that it could pose an existential threat to humanity. At a global meeting in Korea last year, countries agreed to launch a network of publicly backed safety institutes.
But the mood had shifted by the time this year’s AI Action Summit began in Paris. …
…
Solomon outlined several priorities for his ministry — scaling up Canada’s AI industry, driving adoption and ensuring Canadians have trust in and sovereignty over the technology.
He said that includes supporting Canadian AI companies like Cohere, which “means using government as essentially an industrial policy to champion our champions.”
The federal government is putting together a task force to guide its next steps on artificial intelligence, and Artificial Intelligence Minister Evan Solomon is promising an update to the government’s AI strategy.
Solomon told the All In artificial intelligence conference in Montreal on Wednesday [September 24, 2025] that the “refreshed” strategy will be tabled later this year, “almost two years ahead of schedule.”
…
“We need to update and move quickly,” he said in a keynote speech at the start of the conference.
The task force will include about 20 representatives from industry, academia and civil society. The government says it won’t reveal the membership until later this week.
Solomon said task force members are being asked to consult with their networks, suggest “bold, practical” ideas and report back to him in November [2025].
The group will look at various topics related to AI, including research, adoption, commercialization, investment, infrastructure, skills, and safety and security. The government is also planning to solicit input from the public. [emphasis mine]
Canada was the first country to launch a national AI strategy [the Pan-Canadian AI Strategy, announced in 2017], which the government updated in 2022. The strategy focuses on commercialization, the development and adoption of AI standards, talent and research.
Solomon also teased a “major quantum initiative” coming in October [2025?] to ensure both quantum computing talent and intellectual property stay in the country.
Solomon called digital sovereignty “the most pressing policy and democratic issue of our time” and stressed the importance of Canada having its own “digital economy that someone else can’t decide to turn off.”
Solomon said the federal government’s recent focus on major projects extends to artificial intelligence. He compared current conversations on Canada’s AI framework to the way earlier generations spoke about a national railroad or highway.
…
He said his government will address concerns about AI by focusing on privacy reform and modernizing Canada’s 25-year-old privacy law.
“We’re going to include protections for consumers who are concerned about things like deep fakes and protection for children, because that’s a big, big issue. And we’re going to set clear standards for the use of data so innovators have clarity to unlock investment,” Solomon said.
…
The government is consulting with the public? Experience suggests that by then all the major decisions will have been made; the public consultation comments will be mined so officials can make some minor, unimportant tweaks.
Canada’s AI Task Force and parts of the Empire Club talk are revealed in a September 26, 2025 article by Alex Riehl for BetaKit,
Inovia Capital partner Patrick Pichette, Cohere chief artificial intelligence (AI) officer Joelle Pineau, and Build Canada founder Dan Debow are among 26 members of AI minister Evan Solomon’s AI Strategy Task Force trusted to help the federal government renew its AI strategy.
Solomon revealed the roster, filled with leading Canadian researchers and business figures, while speaking at the Empire Club in Toronto on Friday morning [September 26, 2025]. He teased its formation at the ALL IN conference earlier this week [September 24, 2025], saying the team would include “innovative thinkers from across the country.”
The group will have 30 days to add to a collective consultation process in areas including research, talent, commercialization, safety, education, infrastructure, and security.
…
The full AI Strategy Task Force is listed below; each member will consult their network on specific themes.
Research and Talent
Gail Murphy, professor of computer science and vice-president – research and innovation, University of British Columbia and vice-chair at the Digital Research Alliance of Canada
Diane Gutiw, VP – global AI research lead, CGI Canada and co-chair of the Advisory Council on AI
Michael Bowling, professor of computer science and principal investigator – Reinforcement Learning and Artificial Intelligence Lab, University of Alberta and research fellow, Alberta Machine Intelligence Institute and Canada CIFAR AI chair
Arvind Gupta, professor of computer science, University of Toronto
Adoption across industry and governments
Olivier Blais, co-founder and VP of AI, Moov and co-chair of the Advisory Council on AI
Cari Covent, technology executive
Dan Debow, chair of the board, Build Canada
Commercialization of AI
Louis Têtu, executive chairman, Coveo
Michael Serbinis, founder and CEO, League and board chair of the Perimeter Institute
Adam Keating, CEO and Founder, CoLab
Scaling our champions and attracting investment
Patrick Pichette, general partner, Inovia Capital
Ajay Agrawal, professor of strategic management, University of Toronto, founder, Next Canada and founder, Creative Destruction Lab
Sonia Sennik, CEO, Creative Destruction Lab
Ben Bergen, president, Council of Canadian Innovators
Building safe AI systems and public trust in AI
Mary Wells, dean of engineering, University of Waterloo
Joelle Pineau, chief AI officer, Cohere
Taylor Owen, founding director, Center [sic] for Media, Technology and Democracy [McGill University]
Education and Skills
Natiea Vinson, CEO, First Nations Technology Council
Alex Laplante, VP – cash management technology Canada, Royal Bank of Canada and board member at Mitacs
David Naylor, professor of medicine – University of Toronto
Infrastructure
Garth Gibson, chief technology and AI officer, VDURA
Ian Rae, president and CEO, Aptum
Marc Etienne Ouimette, chair of the board, Digital Moment and member, OECD One AI Group of Experts, affiliate researcher, sovereign AI, Cambridge University Bennett School of Public Policy
Security
Shelly Bruce, distinguished fellow, Centre for International Governance Innovation
James Neufeld, founder and CEO, Samdesk
Sam Ramadori, co-president and executive director, LawZero
With files from Josh Scott
If you have the time, Riehl’s September 26, 2025 article offers more depth than may be apparent in the excerpts I’ve chosen.
It’s been a while since I’ve seen Arvind Gupta’s name. I’m glad to see he’s part of this Task Force (Research and Talent). The man was treated quite shamefully at the University of British Columbia. (For the curious, this August 18, 2015 article by Ken MacQueen for Maclean’s Magazine presents a somewhat sanitized [in my opinion] review of the situation.)
One final comment: the experts on the virtual panel and the members of Solomon’s Task Force are largely from Ontario and Québec. There is representation from other parts of the country but it is minor.
British Columbia wants entry into the national AI discussion
Just after I finished writing up this post, I received Kris Krug’s (techartist, quasi-sage, cyberpunk anti-hero from the future) October 14, 2025 communication (received via email) regarding an initiative from the BC + AI community,
Growth vs Guardrails: BC’s Framework for Steering AI
Our open letter to Minister Solomon shares what we’ve learned building community-led AI governance and how BC can help.
Ottawa created a Minister of Artificial Intelligence and just launched a national task force to shape the country’s next AI strategy. The conversation is happening right now about who gets compute, who sets the rules, and whose future this technology will serve.
Our new feature, Growth vs Guardrails [see link to letter below for ‘guardrails’], is already making the rounds in those rooms. The message is simple: if Ottawa’s foot is on the gas, BC is the steering wheel and the brakes. We can model a clean, ethical, community-led path that keeps power with people and place.
This is the time to show up together. Not as scattered voices, but as a connected movement with purpose, vision, and political gravity.
Over the past few months, almost 100 of us have joined the new BC + AI Ecosystem Association non-profit as Founding Members. Builders. Artists. Researchers. Investors. Educators. Policymakers. People who believe that tech should serve communities, not the other way around.
Now we’re opening the door wider. Join and you’ll be part of the core group that built this from the ground up. Your membership is a declaration that British Columbia deserves to shape its own AI future with ethics, creativity, and care.
If you’ve been watching from the sidelines, this is the time to lean in. We don’t do panels. We do portals. And this is the biggest one we’ve opened yet.
See you inside,
Kris Krüg
Executive Director
BC + AI Ecosystem Association
kk@bc-ai.ca | bc-ai.ca
Canada just spun up a 30-day sprint to shape its next AI strategy. Minister Evan Solomon assembled 26 experts (mostly industry and academia) to advise on research, adoption, commercialization, safety, skills, and infrastructure.
On paper, it’s a pivot moment. In practice, it’s already drawing fire. Too much weight on scaling, not enough on governance. Too many boardrooms, not enough frontlines. Too much Ottawa, not enough ground truth.
…
This is Canada’s chance to reset the DNA of its AI ecosystem.
But only if we choose regeneration over extraction, sovereign data governance over corporate capture, and community benefit over narrow interests.
…
The Problem With The Task Force
Research says: The group’s stacked with expertise. But critics flag the imbalance. Where’s healthcare? Where’s civil society beyond token representation? Where are the people who’ll feel AI’s impact first: frontline workers, artists, community organizers?
…
The worry: Commercialization and scaling overshadow public trust, governance, and equitable outcomes. Again.
The numbers back this up: Only 24% of Canadians have AI training. Just 38% feel confident in their knowledge. Nearly two-thirds see potential harm. 71% would trust AI more under public regulation.
We’re building a national strategy on a foundation of low literacy and eroding trust. That’s not a recipe for sovereignty. That’s a recipe for capture.
Principles for a National AI Strategy: What BC + AI Stands For
The October 2024 issue of The Advance (Council of Canadian Academies [CCA] newsletter) arrived in my emailbox on October 15, 2024 with some interesting tidbits about artificial intelligence, Note: For anyone who wants to see the entire newsletter for themselves, you can sign up here or in French, vous pouvez vous abonner ici,
Artificial Intelligence and Canada’s Science Diplomacy Future
For nearly two decades, Canada has been a global leader in artificial intelligence (AI) research, contributing a significant percentage of the world’s top-cited scientific publications on the subject. In that time, the number of countries participating in international collaborations has grown significantly, supporting new partnerships and accounting for as much as one quarter of all published research articles.
“Opportunities for partnerships are growing rapidly alongside the increasing complexity of new scientific discoveries and emerging industry sectors,” wrote the CCA Expert Panel on International Science, Technology, Innovation and Knowledge Partnerships earlier this year, singling out Canada’s AI expertise. “At the same time, discussions of sovereignty and national interests abut the movement toward open science and transdisciplinary approaches.”
On Friday, November 22 [2024], the CCA will host “Strategy and Influence: AI and Canada’s Science Diplomacy Future” as part of the Canadian Science Policy Centre (CSPC) annual conference. The panel discussion will draw on case studies related to AI research collaboration to explore the ways in which such partnerships inform science diplomacy. Panellists include:
Monica Gattinger, chair of the CCA Expert Panel on International Science, Technology, Innovation and Knowledge Partnerships and director of the Institute for Science, Society and Policy at the University of Ottawa (picture omitted)
David Barnes, head of the British High Commission Science, Climate, and Energy Team
Constanza Conti, Professor of Numerical Analysis at the University of Florence and Scientific Attaché at the Italian Embassy in Ottawa
Jean-François Doulet, Attaché for Science and Higher Education at the Embassy of France in Canada
Konstantinos Kapsouropoulos, Digital and Research Counsellor at the Delegation of the European Union to Canada
For details on CSPC 2024, click here. [Here’s the theme and a few more details about the conference: Empowering Society: The Transformative Value of Science, Knowledge, and Innovation; The 16th annual Canadian Science Policy Conference (CSPC) will be held in person from November 20th to 22nd, 2024] For a user guide to Navigating Collaborative Futures, from the CCA’s Expert Panel on International Science, Technology, Innovation and Knowledge Partnerships, click here.
448: Strategy and Influence: AI and Canada’s Science Diplomacy Future
Friday, November 22 [2024] 1:00 pm – 2:30 pm EST
Science and International Affairs and Security
About
Organized By: Council of Canadian Academies (CCA)
Artificial intelligence has already begun to transform Canada’s economy and society, and the broader advantages of international collaboration in AI research have the potential to make an even greater impact. With three national AI institutes and a Pan-Canadian AI Strategy, Canada’s AI ecosystem is thriving and positions the country to build stronger international partnerships in this area, and to develop more meaningful international collaborations in other areas of innovation. This panel will convene science attachés to share perspectives on science diplomacy and partnerships, drawing on case studies related to AI research collaboration.
The newsletter also provides links to additional readings on various topics, here are the AI items,
In Ottawa, Prime Minister Justin Trudeau and President Emmanuel Macron of France renewed their commitment “to strengthening economic exchanges between Canadian and French AI ecosystems.” They also revealed that Canada would be named Country of the Year at Viva Technology’s annual conference, to be held next June in Paris.
A “slower, but more capable” version of OpenAI’s ChatGPT is impressing scientists with the strength of its responses to prompts, according to Nature. The new version, referred to as “o1,” outperformed a previous ChatGPT model on a standardized test involving chemistry, physics, and biology questions, and “beat PhD-level scholars on the hardest series of questions.” [Note: As of October 16, 2024, the Nature news article of October 1, 2024 appears to be open access. It’s unclear how long this will continue to be the case.]
…
In memoriam: Abhishek Gupta, the founder and principal researcher of the Montreal AI Ethics Institute and a member of the CCA Expert Panel on Artificial Intelligence for Science and Engineering, died on September 30 [2024]. His colleagues shared the news in a memorial post, writing, “It was during his time in Montreal that Abhishek envisioned a future where ethics and AI would intertwine—a vision that became the driving force behind his life’s work.”
Meeting in Ottawa on September 26, 2024, Justin Trudeau, the Prime Minister of Canada, and Emmanuel Macron, the President of the French Republic, issued a call to action to promote the development of a responsible approach to artificial intelligence (AI).
Our two countries will increase the coordination of our actions, as Canada will assume the Presidency of the G7 in 2025 and France will host the AI Action Summit on February 10 and 11, 2025.
Our two countries are working on the development and use of safe, secure and trustworthy AI as part of a risk-aware, human-centred and innovation-friendly approach. This cooperation is based on shared values. We believe that the development and use of AI need to be beneficial for individuals and the planet, for example by increasing human capabilities and developing creativity, ensuring the inclusion of under-represented people, reducing economic, social, gender and other inequalities, protecting information integrity and protecting natural environments, which in turn will promote inclusive growth, well-being, sustainable development and environmental sustainability.
We are committed to promoting the development and use of AI systems that respect the rule of law, human rights, democratic values and human-centred values. Respecting these values means developing and using AI systems that are transparent and explainable, robust, safe and secure, and whose stakeholders are held accountable for respecting these principles, in line with the Recommendation of the OECD Council on Artificial Intelligence, the Hiroshima AI Process, the G20 AI Principles and the International Partnership for Information and Democracy.
Based on these values and principles, Canada and France are working on high-quality scientific cooperation. In April 2023, we formalized the creation of a joint committee for science, technology and innovation. This committee has identified emerging technologies, including AI, as one of the priority areas for cooperation between our two countries. In this context, a call for AI research projects was announced last July, scheduled for the end of 2024 and funded, on the French side, by the French National Research Agency, and, on the Canadian side, by a consortium made up of Canada’s three granting councils (the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada and the Canadian Institutes of Health Research) and IVADO [Institut de valorisation des données], the AI research, training and transfer consortium.
We will also collaborate on the evaluation and safety of AI models. We have announced key AI safety initiatives, including the AI Safety Institute of Canada [emphasis mine; not to be confused with Artificial Intelligence Governance & Safety Canada (AIGS)], which will be launched soon, and France’s National Centre for AI evaluation. We expect these two agencies will work to improve knowledge and understanding of technical and socio-technical aspects related to the safety and evaluation of advanced AI systems.
Canada and France are committed to strengthening economic exchanges between Canadian and French AI ecosystems, whether by organizing delegations, like the one organized by Scale AI with 60 Canadian companies at the latest Viva Technology conference in Paris, or showcasing France at the ALL IN event in Montréal on September 11 and 12, 2024, through cooperation between companies, for example, through large companies’ adoption of services provided by small companies or through the financial support that investment funds provide to companies on both sides of the Atlantic. Our two countries will continue their cooperation at the upcoming Viva Technology conference in Paris, where Canada will be the Country of the Year.
We want to strengthen our cooperation in terms of developing AI capabilities. We specifically want to promote access to AI’s compute capabilities in order to support national and international technological advances in research and business, notably in emerging markets and developing countries, while committing to strengthening our efforts to make the necessary improvements to the energy efficiency of these infrastructures. We are also committed to sharing our experience in initiatives to develop AI skills and training in order to accelerate workforce deployment.
Canada and France cooperate on the international stage to ensure the alignment and convergence of AI regulatory frameworks, given the economic potential and the global social consequences of this technological revolution. Under our successive G7 presidencies in 2018 and 2019, we worked to launch the Global Partnership on Artificial Intelligence (GPAI), which now has 29 members from all over the world, and whose first two centres of expertise were opened in Montréal and Paris. We support the creation of the new integrated partnership, which brings together OECD and GPAI member countries, and welcomes new members, including emerging and developing economies. We hope that the implementation of this new model will make it easier to participate in joint research projects that are of public interest, reduce the global digital divide and support constructive debate between the various partners on standards and the interoperability of their AI-related regulations.
We will continue our cooperation at the AI Action Summit in France on February 10 and 11, 2025, where we will strive to find solutions to meet our common objectives, such as the fight against disinformation or the reduction of the environmental impact of AI. With the objective of actively and tangibly promoting the use of the French language in the creation, production, distribution and dissemination of AI, taking into account its richness and diversity, and in compliance with copyright, we will attempt to identify solutions that are in line with the five themes of the summit: AI that serves the public interest, the future of work, innovation and culture, trust in AI and global AI governance.
Canada has agreed to co-chair the working group on global AI governance in order to continue the work already carried out by the GPAI, the OECD, the United Nations and its various bodies, the G7 and the G20. We would like to highlight and advance debates on the cultural challenges of AI in order to accelerate the joint development of relevant responses to the challenges faced. We would also like to develop the change management policies needed to support all of the affected cultural sectors. We will continue these discussions together during our successive G7 presidencies in 2025 and 2026.
I checked out the In memoriam notice for Abhishek Gupta and found this. Note: Links have been removed, except for the link to Abhishek Gupta’s memorial page hosting tributes, stories, and more; that link is in the highlighted paragraph,
Honoring the Life and Legacy of a Leader in AI Ethics
In accordance with his family’s wishes, it is with profound sadness that we announce the passing of Abhishek Gupta, Founder and Principal Researcher of the Montreal AI Ethics Institute (MAIEI), Director for Responsible AI at the Boston Consulting Group (BCG), and a pioneering voice in the field of AI ethics. Abhishek passed away peacefully in his sleep on September 30, 2024 in India, surrounded by his loving family. He is survived by his father, Ashok Kumar Gupta; his mother, Asha Gupta; and his younger brother, Abhijay Gupta.
Note: Details of a memorial service will be announced in the coming weeks. For those who wish to share stories, personal anecdotes, and photos of Abhishek, please visit www.forevermissed.com/abhishekgupta — your contributions will be greatly appreciated by his family and loved ones.
Born on December 20, 1992, in India, Abhishek’s intellectual curiosity and drive to understand technology led him on a remarkable journey. After excelling at Delhi Public School, Abhishek attended McGill University in Montreal, where he earned a Bachelor of Science in Computer Science (BSc’15). Following his graduation, Abhishek worked as a software engineer at Ericsson. He later joined Microsoft as a machine learning engineer, where he also served on the CSE Responsible AI Board. It was during his time in Montreal that Abhishek envisioned a future where ethics and AI would intertwine—a vision that became the driving force behind his life’s work.
The Beginnings: Building a Global AI Ethics Community
Abhishek’s vision for MAIEI was rooted in community building. He began hosting in-person AI Ethics Meetups in Montreal throughout 2017. These gatherings were unique—participants completed assigned readings in advance, split into small groups for discussion, and then reconvened to share insights. This approach fostered deep, structured conversations and made AI ethics accessible to everyone, regardless of their background. The conversations and insights from these meetups became the foundation of MAIEI, which was launched in May 2018.
When the pandemic hit, Abhishek adapted the meetup format to an online setting, enabling MAIEI to expand worldwide. It was his idea to bring these conversations to a global stage, using virtual platforms to ensure voices from all corners of the world could join in. He passionately stood up for the “little guy,” making sure that those whose voices might be overlooked or unheard in traditional forums had a platform. Under his stewardship, MAIEI emerged as a globally recognized leader in fostering public discussions on the ethical implications of artificial intelligence. Through MAIEI, Abhishek fulfilled his mission of democratizing AI ethics literacy, empowering individuals from all backgrounds to engage with the future of technology.
…
I offer my sympathies to his family, friends, and communities for their profound loss.