Tag Archives: Canadian AI Safety Institute

34th International Joint Conference on Artificial Intelligence (IJCAI): AI at the service of society (August 16 – 22, 2025) in Montréal (Canada)

The International Joint Conference on Artificial Intelligence (IJCAI) has been held since 1969, and this year it's taking place in Montréal. Here’s more from an August 15, 2025 International Joint Conferences on Artificial Intelligence news release on EurekAlert,

“AI at the service of society” is the guiding theme of the 34th International Joint Conference on Artificial Intelligence (IJCAI), taking place from August 16 to 22, 2025 in Montreal, Canada. Since its inception in 1969, IJCAI has played a pivotal role as a forum to showcase the frontiers of artificial intelligence research and applications and thus represents the oldest continuously running conference on artificial intelligence.

In 2025, the conference, with more than 2000 attendees, has been brought to Canada by Gilles Pesant, the Local Arrangements Committee Chair, Professor in the Department of Computer and Software Engineering at Polytechnique Montréal and IVADO [Institut de valorisation des données] researcher. “What makes IJCAI special is that it brings together the latest research from many different areas of artificial intelligence. It’s a great opportunity for the Canadian AI community to showcase its world-class contributions and outstanding talent,” says the founder of the Quosséça research lab (QUebec Optimization and Satisfaction Strategies Exploiting Constraint Algorithms) and current President of the Association for Constraint Programming. Prof. Pesant is known for developing advanced algorithms for complex scheduling and planning problems. Among his current research interests are neuro-symbolic AI systems, which combine machine learning and constraint programming.

Canada’s AI Leadership

This year marks the 30th anniversary of a breakthrough that transformed artificial intelligence by giving machines the ability to learn from and remember sequences such as speech, language, and time-series data – Long Short-Term Memory (LSTM) architecture. While not developed in Canada, the story of LSTM is intertwined with Canada’s leadership in artificial intelligence. During the “AI winter,” when much of the world abandoned neural networks, Canada became a refuge for pioneering AI research. Visionaries like Geoffrey Hinton, now a Nobel Prize winner, and Yoshua Bengio, among others, continued to advance deep learning despite widespread skepticism. Their perseverance and the resilience of the Canadian research community laid the foundation for the AI revolution that is transforming the world today. Canada continues to lead through such institutions as Mila, the Vector Institute, Amii, IVADO, and the Canadian AI Safety Institute.

The IJCAI 2025 program features a lineup of internationally recognised keynote speakers, covering the full spectrum of AI research, including:

Yoshua Bengio, a pioneer in representation learning and one of the godfathers of deep learning. He is a recipient of the 2018 Turing Award—often called the “Nobel Prize of Computing”—which he shares with Geoffrey Hinton and Yann LeCun for demonstrating how deep learning models can scale effectively with large datasets and computational power. Bengio is a professor at the Université de Montréal and the founder of Mila – Quebec AI Institute, one of the world’s largest academic labs dedicated to deep learning, which has helped establish Montreal as a global hub for AI research.

Every time someone uses a search engine or an AI-powered chatbot, they benefit from technologies that bridge the gap between human language and machine understanding — a challenge directly addressed by Heng Ji’s research. An invited IJCAI speaker, Ji is a professor at the University of Illinois Urbana-Champaign, renowned for her pioneering work on how AI systems extract and distill knowledge from vast amounts of unstructured data. Far from being confined to academia, she is also an active voice in AI policy, contributing her expertise to discussions on the ethical and responsible development of AI.

Luc De Raedt, professor of computer science at KU Leuven and director of Leuven.AI, is widely recognized for his pioneering contributions to integrating machine learning with symbolic reasoning. Beyond his research, he has played a significant leadership role in fostering public dialogue on responsible AI, spearheading initiatives and organizing debates on the societal impacts of AI to help shape conversations around ethical and trustworthy AI development. In his IJCAI 2025 keynote address he will talk about ‘Neurosymbolic AI: Combining Data and Knowledge’.

In this effort, he is not alone. Bernhard Schölkopf, director at the Max Planck Institute for Intelligent Systems and co-founder of ELLIS (European Laboratory for Learning and Intelligent Systems), is another leading figure giving an invited talk on ‘From ML for science to causal digital twins’. In addition to his scientific contributions — particularly in kernel methods and causal inference — Schölkopf is a prominent advocate for ethical and trustworthy AI in Europe. He plays a key role in shaping AI research agendas and informing policy discussions around responsible AI.

The Montreal program also features invited talks by IJCAI 2025 awardees: Aditya Grover (UCLA and Inception Labs), recipient of the IJCAI-25 Computers and Thought Award; Rina Dechter (University of California, Irvine), recipient of the IJCAI-25 Award for Research Excellence; and Cynthia Rudin (Duke University), recipient of the IJCAI-25 John McCarthy Award.

The IJCAI 2025 scientific program highlights how AI is shaping both cutting-edge research and real-world impact. The AI, Arts & Creativity track explores AI’s growing role in generating and supporting creative work—from music and design to storytelling and architecture. The Human-Centred AI track addresses the challenges of building AI systems aligned with human values, integrating technical, cognitive, ethical, and societal perspectives. The AI for Social Good track focuses on AI-driven solutions for pressing global challenges, encouraging collaborations with governments, NGOs, and researchers to support initiatives like the UN Sustainable Development Goals. Meanwhile, the AI4Tech track showcases how AI is driving breakthroughs in critical technologies across sectors such as health, finance, mobility, and smart cities. Complementing these thematic tracks, IJCAI 2025 also includes a set of impactful competitions and challenges to push the boundaries of applied AI, including the Challenge on Deepfake Detection and Localization, the AI for Drinking Water Chlorination Challenge, and the Pulmonary Fibrosis Segmentation Challenge. Together, these elements reflect the pulse of AI today—advancing science while addressing the needs of society. IJCAI 2025 also presents an AI Art Gallery featuring works that examine how machines balance agency and vulnerability, and how their interactions with humans and the environment shape future possibilities. These artworks engage with these questions through AI, robotics, AR, VR, and other emerging technologies.

The program also includes the AI Lounge: Between Wonder and Caution – Insights from Three Experts, an admission-free public discussion featuring a science communication journalist in debate with three community representatives: Heng Ji (University of Illinois Urbana-Champaign), Kate Larson (University of Waterloo), and Cynthia Rudin (Duke University).

To support authors who may experience difficulties obtaining Canadian visas, a satellite event will be hosted in Guangzhou, China, from August 29 to August 31, 2025. 

The IJCAI 2025 conference is supported by its sponsors, including the Artificial Intelligence Journal (AIJ) and Palais des Congrès de Montréal (Diamond Sponsor), GMI Cloud, FinVolution Group, and Baidu and Ant Research as Silver Sponsors. 

Full Program

See full program at https://2025.ijcai.org/ 

Organizers and Institutional Support

Conference Chair: Shlomo Zilberstein, University of Massachusetts, Amherst / USA

Program Chair: James Kwok, Hong Kong University of Science and Technology / China

Local Arrangements Committee Chair: Gilles Pesant, Polytechnique Montréal / Canada

Local Publicity Chair: Lina Marsso, Assistant Professor, Polytechnique Montréal / Mila / Canada

Sponsorship / Exhibit / Industry Day Chair: Nancy Laramée, IVADO, Canada

Lead student journalist on social media: Liliane-Caroline Demers, Polytechnique Montréal

Webmaster: Mehil Shah, Dalhousie University, Canada

More information on the IJCAI’s website: https://2025.ijcai.org

Should you be interested in the parent organization, which began life in California, US, you can find out more here.

China’s ex-UK ambassador clashes with ‘AI godfather’ on panel at AI Action Summit in France (February 10 – 11, 2025)

The Artificial Intelligence (AI) Action Summit held from February 10 – 11, 2025 in Paris seems to have been pretty exciting. President Emmanuel Macron announced a 109-billion-euro investment in the French AI sector on February 9, 2025 (I have more in my February 13, 2025 posting [scroll down to the ‘What makes Canadian (and Greenlandic) minerals and water so important?’ subhead]). I also have this snippet, which suggests Macron is eager to provide an alternative to US domination in the field of AI, from a February 10, 2025 posting on CGTN (China Global Television Network),

French President Emmanuel Macron announced on Sunday night [February 9, 2025] that France is set to receive a total investment of 109 billion euros (approximately $112 billion) in artificial intelligence over the coming years.

Speaking in a televised interview on public broadcaster France 2, Macron described the investment as “the equivalent for France of what the United States announced with ‘Stargate’.”

He noted that the funding will come from the United Arab Emirates, major American and Canadian investment funds [emphases mine], as well as French companies.

Prime Minister Justin Trudeau attended the AI Action Summit on Tuesday, February 11, 2025 according to a Canadian Broadcasting Corporation (CBC) news online article by Ashley Burke and Olivia Stefanovich,

Prime Minister Justin Trudeau warned U.S. Vice-President J.D. Vance that punishing tariffs on Canadian steel and aluminum will hurt his home state of Ohio, a senior Canadian official said. 

The two leaders met on the sidelines of an international summit in Paris Tuesday [February 11, 2025], as the Trump administration moves forward with its threat to impose 25 per cent tariffs on all steel and aluminum imports, including from its biggest supplier, Canada, effective March 12.

Speaking to reporters on Wednesday [February 12, 2025] as he departed from Brussels, Trudeau characterized the meeting as a brief chat that took place as the pair met.

“It was just a quick greeting exchange,” Trudeau said. “I highlighted that $2.2 billion worth of steel and aluminum exports from Canada go directly into the Ohio economy, often to go into manufacturing there.

“He nodded, and noted it, but it wasn’t a longer exchange than that.”

Vance didn’t respond to Canadian media’s questions about the tariffs while arriving at the summit on Tuesday [February 11, 2025].

Additional insight can be gained from a February 10, 2025 PBS (US Public Broadcasting Service) posting of an AP (Associated Press) article with contributions from Kelvin Chan and Angela Charlton in Paris, Ken Moritsugu in Beijing, and Aijaz Hussain in New Delhi,

JD Vance stepped onto the world stage this week for the first time as U.S. vice president, using a high-stakes AI summit in Paris and a security conference in Munich to amplify Donald Trump’s aggressive new approach to diplomacy.

The 40-year-old vice president, who was just 18 months into his tenure as a senator before joining Trump’s ticket, is expected, while in Paris, to push back on European efforts to tighten AI oversight while advocating for a more open, innovation-driven approach.

The AI summit has drawn world leaders, top tech executives, and policymakers to discuss artificial intelligence’s impact on global security, economics, and governance. High-profile attendees include Chinese Vice Premier Zhang Guoqing, signaling Beijing’s deep interest in shaping global AI standards.

Macron also called for “simplifying” rules in France and the European Union to allow AI advances, citing sectors like healthcare, mobility, and energy, and a need to “resynchronize with the rest of the world.”

“We are most of the time too slow,” he said.

The summit underscores a three-way race for AI supremacy: Europe striving to regulate and invest, China expanding access through state-backed tech giants, and the U.S. under Trump prioritizing a hands-off approach.

Vance has signaled he will use the Paris summit as a venue for candid discussions with world leaders on AI and geopolitics.

“I think there’s a lot that some of the leaders who are present at the AI summit could do to, frankly — bring the Russia-Ukraine conflict to a close, help us diplomatically there — and so we’re going to be focused on those meetings in France,” Vance told Breitbart News.

Vance is expected to meet separately Tuesday with Indian Prime Minister Narendra Modi and European Commission President Ursula von der Leyen, according to a person familiar with planning who spoke on the condition of anonymity.

Modi is co-hosting the summit with Macron in an effort to prevent the sector from becoming a U.S.-China battle.

Indian Foreign Secretary Vikram Misri stressed the need for equitable access to AI to avoid “perpetuating a digital divide that is already existing across the world.”

But the U.S.-China rivalry overshadowed broader international talks.

The U.S.-China rivalry didn’t entirely overshadow the talks. At least one Chinese former diplomat chose to make her presence felt by chastising a Canadian academic, according to a February 11, 2025 article by Matthew Broersma for silicon.co.uk,

A representative of China at this week’s AI Action Summit in Paris stressed the importance of collaboration on artificial intelligence, while engaging in a testy exchange with Yoshua Bengio, a Canadian academic considered one of the “Godfathers” of AI.

Fu Ying, a former Chinese government official and now an academic at Tsinghua University in Beijing, said the name of China’s official AI Development and Safety Network was intended to emphasise the importance of collaboration to manage the risks around AI.

She also said tensions between the US and China were impeding the ability to develop AI safely.

… Fu Ying, a former vice minister of foreign affairs in China and the country’s former UK ambassador, took veiled jabs at Prof Bengio, who was also a member of the panel.

Zoe Kleinman’s February 10, 2025 article for the British Broadcasting Corporation (BBC) news online website also notes the encounter,

A former Chinese official poked fun at a major international AI safety report led by “AI Godfather” professor Yoshua Bengio and co-authored by 96 global experts – in front of him.

Fu Ying, former vice minister of foreign affairs and once China’s UK ambassador, is now an academic at Tsinghua University in Beijing.

The pair were speaking at a panel discussion ahead of a two-day global AI summit starting in Paris on Monday [February 10, 2025].

The aim of the summit is to unite world leaders, tech executives, and academics to examine AI’s impact on society, governance, and the environment.

Fu Ying began by thanking Canada’s Prof Bengio for the “very, very long” document, adding that the Chinese translation stretched to around 400 pages and she hadn’t finished reading it.

She also had a dig at the title of the AI Safety Institute – of which Prof Bengio is a member.

China now has its own equivalent; but they decided to call it The AI Development and Safety Network, she said, because there are lots of institutes already but this wording emphasised the importance of collaboration.

The AI Action Summit is welcoming guests from 80 countries, with OpenAI chief executive Sam Altman, Microsoft president Brad Smith and Google chief executive Sundar Pichai among the big names in US tech attending.

Elon Musk is not on the guest list but it is currently unknown whether he will decide to join them. [As of February 13, 2025, Mr. Musk did not attend the summit, which ended February 11, 2025.]

A key focus is regulating AI in an increasingly fractured world. The summit comes weeks after a seismic industry shift as China’s DeepSeek unveiled a powerful, low-cost AI model, challenging US dominance.

The pair’s heated exchanges were a symbol of global political jostling in the powerful AI arms race, but Fu Ying also expressed regret about the negative impact of current hostilities between the US and China on the progress of AI safety.

She gave a carefully-crafted glimpse behind the curtain of China’s AI scene, describing an “explosive period” of innovation since the country first published its AI development plan in 2017, five years before ChatGPT became a viral sensation in the west.

She added that “when the pace [of development] is rapid, risky stuff occurs” but did not elaborate on what might have taken place.

“The Chinese move faster [than the west] but it’s full of problems,” she said.

Fu Ying argued that building AI tools on foundations which are open source, meaning everyone can see how they work and therefore contribute to improving them, was the most effective way to make sure the tech did not cause harm.

Most of the US tech giants do not share the tech which drives their products.

Open source offers humans “better opportunities to detect and solve problems”, she said, adding that “the lack of transparency among the giants makes people nervous”.

But Prof Bengio disagreed.

His view was that open source also left the tech wide open for criminals to misuse.

He did however concede that “from a safety point of view”, it was easier to spot issues with the viral Chinese AI assistant DeepSeek, which was built using open source architecture, than ChatGPT, whose code has not been shared by its creator OpenAI.

For anyone curious about Professor Bengio’s AI safety report, I have more information in a January 29, 2025 Université de Montréal (UdeM) press release,

The first international report on the safety of artificial intelligence, led by Université de Montréal computer-science professor Yoshua Bengio, was released today and promises to serve as a guide for policymakers worldwide. 

Announced in November 2023 at the AI Safety Summit at Bletchley Park, England, and inspired by the workings of the United Nations Intergovernmental Panel on Climate Change, the report consolidates leading international expertise on AI and its risks. 

Supported by the United Kingdom’s Department for Science, Innovation and Technology, Bengio, founder and scientific director of the UdeM-affiliated Mila – Quebec AI Institute, led a team of 96 international experts in drafting the report.

The experts were drawn from 30 countries, the U.N., the European Union and the OECD [Organisation for Economic Cooperation and Development]. Their report will help inform discussions next month at the AI Action Summit in Paris, France and serve as a global handbook on AI safety to help support policymakers.

Towards a common understanding

The most advanced AI systems in the world now have the ability to write increasingly sophisticated computer programs, identify cyber vulnerabilities, and perform on a par with human PhD-level experts on tests in biology, chemistry, and physics. 

In what is identified as a key development for policymakers to monitor, the AI Safety Report published today warns that AI systems are also increasingly capable of acting as AI agents, autonomously planning and acting in pursuit of a goal. 

As policymakers worldwide grapple with the rapid and unpredictable advancements in AI, the report contributes to bridging the gap by offering a scientific understanding of emerging risks to guide decision-making.  

The document sets out the first comprehensive, independent, and shared scientific understanding of advanced AI systems and their risks, highlighting how quickly the technology has evolved.  

Several areas require urgent research attention, according to the report, including how rapidly capabilities will advance, how general-purpose AI models work internally, and how they can be designed to behave reliably. 

Three distinct categories of AI risks are identified: 

  • Malicious use risks: these include cyberattacks, the creation of AI-generated child-sexual-abuse material, and even the development of biological weapons; 
  • System malfunctions: these include bias, reliability issues, and the potential loss of control over advanced general-purpose AI systems; 
  • Systemic risks: these stem from the widespread adoption of AI and include workforce disruption, privacy concerns, and environmental impacts.

The report places particular emphasis on the urgency of increasing transparency and understanding in AI decision-making as the systems become more sophisticated and the technology continues to develop at a rapid pace. 

While there are still many challenges in mitigating the risks of general-purpose AI, the report highlights promising areas for future research and concludes that progress can be made.   

Ultimately, it emphasizes that while AI capabilities could advance at varying speeds, their development and potential risks are not a foregone conclusion. The outcomes depend on the choices that societies and governments make today and in the future. 

“The capabilities of general-purpose AI have increased rapidly in recent years and months,” said Bengio. “While this holds great potential for society, AI also presents significant risks that must be carefully managed by governments worldwide.  

“This report by independent experts aims to facilitate constructive and evidence-based discussion around these risks and serves as a common basis for policymakers around the world to understand general-purpose AI capabilities, risks and possible mitigations.” 

The report is more formally known as the International AI Safety Report 2025 and can be found on the gov.uk website.

There have been two previous AI Safety Summits that I’m aware of and you can read about them in my May 21, 2024 posting about the one in Korea and in my November 2, 2023 posting about the first summit at Bletchley Park in the UK.

You can find the Canadian Artificial Intelligence Safety Institute (or AI Safety Institute) here, and my coverage of DeepSeek’s release and the panic that ensued in the US artificial intelligence and business communities in my January 29, 2025 posting.

AI and Canadian science diplomacy & more stories from the October 2024 Council of Canadian Academies (CCA) newsletter

The October 2024 issue of The Advance (Council of Canadian Academies [CCA] newsletter) arrived in my emailbox on October 15, 2024 with some interesting tidbits about artificial intelligence. Note: For anyone who wants to see the entire newsletter for themselves, you can sign up here or in French, vous pouvez vous abonner ici,

Artificial Intelligence and Canada’s Science Diplomacy Future

For nearly two decades, Canada has been a global leader in artificial intelligence (AI) research, contributing a significant percentage of the world’s top-cited scientific publications on the subject. In that time, the number of countries participating in international collaborations has grown significantly, supporting new partnerships and accounting for as much as one quarter of all published research articles.

“Opportunities for partnerships are growing rapidly alongside the increasing complexity of new scientific discoveries and emerging industry sectors,” wrote the CCA Expert Panel on International Science, Technology, Innovation and Knowledge Partnerships earlier this year, singling out Canada’s AI expertise. “At the same time, discussions of sovereignty and national interests abut the movement toward open science and transdisciplinary approaches.”

On Friday, November 22 [2024], the CCA will host “Strategy and Influence: AI and Canada’s Science Diplomacy Future” as part of the Canadian Science Policy Centre (CSPC) annual conference. The panel discussion will draw on case studies related to AI research collaboration to explore the ways in which such partnerships inform science diplomacy. Panellists include:

  • Monica Gattinger, chair of the CCA Expert Panel on International Science, Technology, Innovation and Knowledge Partnerships and director of the Institute for Science, Society and Policy at the University of Ottawa (picture omitted)
  • David Barnes, head of the British High Commission Science, Climate, and Energy Team
  • Constanza Conti, Professor of Numerical Analysis at the University of Florence and Scientific Attaché at the Italian Embassy in Ottawa
  • Jean-François Doulet, Attaché for Science and Higher Education at the Embassy of France in Canada
  • Konstantinos Kapsouropoulos, Digital and Research Counsellor at the Delegation of the European Union to Canada

For details on CSPC 2024, click here. [Here’s the theme and a few more details about the conference: Empowering Society: The Transformative Value of Science, Knowledge, and Innovation; The 16th annual Canadian Science Policy Conference (CSPC) will be held in person from November 20th to 22nd, 2024] For a user guide to Navigating Collaborative Futures, from the CCA’s Expert Panel on International Science, Technology, Innovation and Knowledge Partnerships, click here.

I have checked out the panel’s session page,

448: Strategy and Influence: AI and Canada’s Science Diplomacy Future

Friday, November 22 [2024]
1:00 pm – 2:30 pm EST

Science and International Affairs and Security

About

Organized By: Council of Canadian Academies (CCA)

Artificial intelligence has already begun to transform Canada’s economy and society, and the broader advantages of international collaboration in AI research have the potential to make an even greater impact. With three national AI institutes and a Pan-Canadian AI Strategy, Canada’s AI ecosystem is thriving and positions the country to build stronger international partnerships in this area, and to develop more meaningful international collaborations in other areas of innovation. This panel will convene science attachés to share perspectives on science diplomacy and partnerships, drawing on case studies related to AI research collaboration.

The newsletter also provides links to additional readings on various topics, here are the AI items,

In Ottawa, Prime Minister Justin Trudeau and President Emmanuel Macron of France renewed their commitment “to strengthening economic exchanges between Canadian and French AI ecosystems.” They also revealed that Canada would be named Country of the Year at Viva Technology’s annual conference, to be held next June in Paris.

A “slower, but more capable” version of OpenAI’s ChatGPT is impressing scientists with the strength of its responses to prompts, according to Nature. The new version, referred to as “o1,” outperformed a previous ChatGPT model on a standardized test involving chemistry, physics, and biology questions, and “beat PhD-level scholars on the hardest series of questions.” [Note: As of October 16, 2024, the Nature news article of October 1, 2024 appears to be open access. It’s unclear how long this will continue to be the case.]

In memoriam: Abhishek Gupta, the founder and principal researcher of the Montreal AI Ethics Institute and a member of the CCA Expert Panel on Artificial Intelligence for Science and Engineering, died on September 30 [2024]. His colleagues shared the news in a memorial post, writing, “It was during his time in Montreal that Abhishek envisioned a future where ethics and AI would intertwine—a vision that became the driving force behind his life’s work.”

I clicked the link to read the Trudeau/Macron announcement and found this September 26, 2024 Innovation, Science and Economic Development Canada news release,

Meeting in Ottawa on September 26, 2024, Justin Trudeau, the Prime Minister of Canada, and Emmanuel Macron, the President of the French Republic, issued a call to action to promote the development of a responsible approach to artificial intelligence (AI).

Our two countries will increase the coordination of our actions, as Canada will assume the Presidency of the G7 in 2025 and France will host the AI Action Summit on February 10 and 11, 2025.

Our two countries are working on the development and use of safe, secure and trustworthy AI as part of a risk-aware, human-centred and innovation-friendly approach. This cooperation is based on shared values. We believe that the development and use of AI need to be beneficial for individuals and the planet, for example by increasing human capabilities and developing creativity, ensuring the inclusion of under-represented people, reducing economic, social, gender and other inequalities, protecting information integrity and protecting natural environments, which in turn will promote inclusive growth, well-being, sustainable development and environmental sustainability.

We are committed to promoting the development and use of AI systems that respect the rule of law, human rights, democratic values and human-centred values. Respecting these values means developing and using AI systems that are transparent and explainable, robust, safe and secure, and whose stakeholders are held accountable for respecting these principles, in line with the Recommendation of the OECD Council on Artificial Intelligence, the Hiroshima AI Process, the G20 AI Principles and the International Partnership for Information and Democracy.

Based on these values and principles, Canada and France are working on high-quality scientific cooperation. In April 2023, we formalized the creation of a joint committee for science, technology and innovation. This committee has identified emerging technologies, including AI, as one of the priority areas for cooperation between our two countries. In this context, a call for AI research projects was announced last July, scheduled for the end of 2024 and funded, on the French side, by the French National Research Agency, and, on the Canadian side, by a consortium made up of Canada’s three granting councils (the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada and the Canadian Institutes of Health Research) and IVADO [Institut de valorisation des données], the AI research, training and transfer consortium.

We will also collaborate on the evaluation and safety of AI models. We have announced key AI safety initiatives, including the AI Safety Institute of Canada [emphasis mine; not to be confused with Artificial Intelligence Governance & Safety Canada (AIGS)], which will be launched soon, and France’s National Centre for AI evaluation. We expect these two agencies will work to improve knowledge and understanding of technical and socio-technical aspects related to the safety and evaluation of advanced AI systems.

Canada and France are committed to strengthening economic exchanges between Canadian and French AI ecosystems, whether by organizing delegations, like the one organized by Scale AI with 60 Canadian companies at the latest Viva Technology conference in Paris, or showcasing France at the ALL IN event in Montréal on September 11 and 12, 2024, through cooperation between companies, for example, through large companies’ adoption of services provided by small companies or through the financial support that investment funds provide to companies on both sides of the Atlantic. Our two countries will continue their cooperation at the upcoming Viva Technology conference in Paris, where Canada will be the Country of the Year.

We want to strengthen our cooperation in developing AI capabilities. We specifically want to promote access to AI compute capabilities in order to support national and international technological advances in research and business, notably in emerging markets and developing countries, while committing to strengthening our efforts to improve the energy efficiency of these infrastructures. We are also committed to sharing our experience with initiatives to develop AI skills and training in order to accelerate workforce deployment.

Canada and France cooperate on the international stage to ensure the alignment and convergence of AI regulatory frameworks, given the economic potential and the global social consequences of this technological revolution. Under our successive G7 presidencies in 2018 and 2019, we worked to launch the Global Partnership on Artificial Intelligence (GPAI), which now has 29 members from all over the world, and whose first two centres of expertise were opened in Montréal and Paris. We support the creation of the new integrated partnership, which brings together OECD and GPAI member countries, and welcomes new members, including emerging and developing economies. We hope that the implementation of this new model will make it easier to participate in joint research projects that are of public interest, reduce the global digital divide and support constructive debate between the various partners on standards and the interoperability of their AI-related regulations.

We will continue our cooperation at the AI Action Summit in France on February 10 and 11, 2025, where we will strive to find solutions to meet our common objectives, such as the fight against disinformation or the reduction of the environmental impact of AI. With the objective of actively and tangibly promoting the use of the French language in the creation, production, distribution and dissemination of AI, taking into account its richness and diversity, and in compliance with copyright, we will attempt to identify solutions that are in line with the five themes of the summit: AI that serves the public interest, the future of work, innovation and culture, trust in AI and global AI governance.

Canada has agreed to co-chair the working group on global AI governance in order to continue the work already carried out by the GPAI, the OECD, the United Nations and its various bodies, the G7 and the G20. We would like to highlight and advance debates on the cultural challenges of AI in order to accelerate the joint development of relevant responses to the challenges faced. We would also like to develop the change management policies needed to support all of the affected cultural sectors. We will continue these discussions together during our successive G7 presidencies in 2025 and 2026.

This is very interesting news, and it reminded me of this October 10, 2024 posting “October 29, 2024 Woodrow Wilson Center event: 2024 Canada-US Legal Symposium | Artificial Intelligence Regulation, Governance, and Liability.” (I also included an update on the current state of Canadian legislation and artificial intelligence in that posting.)

I checked out the in memoriam notice for Abhishek Gupta and found this. Note: Links have been removed except the link to Abhishek Gupta’s memorial page, which hosts tributes, stories, and more. The link is in the highlighted paragraph,

Honoring the Life and Legacy of a Leader in AI Ethics

In accordance with his family’s wishes, it is with profound sadness that we announce the passing of Abhishek Gupta, Founder and Principal Researcher of the Montreal AI Ethics Institute (MAIEI), Director for Responsible AI at the Boston Consulting Group (BCG), and a pioneering voice in the field of AI ethics. Abhishek passed away peacefully in his sleep on September 30, 2024, in India, surrounded by his loving family. He is survived by his father, Ashok Kumar Gupta; his mother, Asha Gupta; and his younger brother, Abhijay Gupta.
Note: Details of a memorial service will be announced in the coming weeks. For those who wish to share stories, personal anecdotes, and photos of Abhishek, please visit www.forevermissed.com/abhishekgupta — your contributions will be greatly appreciated by his family and loved ones.

Born on December 20, 1992, in India, Abhishek’s intellectual curiosity and drive to understand technology led him on a remarkable journey. After excelling at Delhi Public School, Abhishek attended McGill University in Montreal, where he earned a Bachelor of Science in Computer Science (BSc’15). Following his graduation, Abhishek worked as a software engineer at Ericsson. He later joined Microsoft as a machine learning engineer, where he also served on the CSE Responsible AI Board. It was during his time in Montreal that Abhishek envisioned a future where ethics and AI would intertwine—a vision that became the driving force behind his life’s work. 

The Beginnings: Building a Global AI Ethics Community

Abhishek’s vision for MAIEI was rooted in community building. He began hosting in-person AI Ethics Meetups in Montreal throughout 2017. These gatherings were unique—participants completed assigned readings in advance, split into small groups for discussion, and then reconvened to share insights. This approach fostered deep, structured conversations and made AI ethics accessible to everyone, regardless of their background. The conversations and insights from these meetups became the foundation of MAIEI, which was launched in May 2018.

When the pandemic hit, Abhishek adapted the meetup format to an online setting, enabling MAIEI to expand worldwide. It was his idea to bring these conversations to a global stage, using virtual platforms to ensure voices from all corners of the world could join in. He passionately stood up for the “little guy,” making sure that those whose voices might be overlooked or unheard in traditional forums had a platform. Under his stewardship, MAIEI emerged as a globally recognized leader in fostering public discussions on the ethical implications of artificial intelligence. Through MAIEI, Abhishek fulfilled his mission of democratizing AI ethics literacy, empowering individuals from all backgrounds to engage with the future of technology.

I offer my sympathies to his family, friends, and communities for their profound loss.