Tag Archives: Rishi Sunak

Prioritizing ethical & social considerations in emerging technologies—$16M in US National Science Foundation funding

I haven’t seen this much interest in the ethics and social impacts of emerging technologies in years. It seems that the latest AI (artificial intelligence) panic has stimulated interest not only in regulation but also in ethics.

The latest information I have on this topic comes from a January 9, 2024 US National Science Foundation (NSF) news release (also received via email),

NSF and philanthropic partners announce $16 million in funding to prioritize ethical and social considerations in emerging technologies

ReDDDoT is a collaboration with five philanthropic partners and crosses
all disciplines of science and engineering

The U.S. National Science Foundation today launched a new $16 million
program in collaboration with five philanthropic partners that seeks to
ensure ethical, legal, community and societal considerations are
embedded in the lifecycle of technology’s creation and use. The
Responsible Design, Development and Deployment of Technologies (ReDDDoT)
program aims to help create technologies that promote the public’s
wellbeing and mitigate potential harms.

“The design, development and deployment of technologies have broad
impacts on society,” said NSF Director Sethuraman Panchanathan. “As
discoveries and innovations are translated to practice, it is essential
that we engage and enable diverse communities to participate in this
work. NSF and its philanthropic partners share a strong commitment to
creating a comprehensive approach for co-design through soliciting
community input, incorporating community values and engaging a broad
array of academic and professional voices across the lifecycle of
technology creation and use.”

The ReDDDoT program invites proposals from multidisciplinary,
multi-sector teams that examine and demonstrate the principles,
methodologies and impacts associated with responsible design,
development and deployment of technologies, especially those specified
in the “CHIPS and Science Act of 2022.” In addition to NSF, the
program is funded and supported by the Ford Foundation, the Patrick J.
McGovern Foundation, Pivotal Ventures, Siegel Family Endowment and the
Eric and Wendy Schmidt Fund for Strategic Innovation.

“In recognition of the role responsible technologists can play to
advance human progress, and the danger unaccountable technology poses to
social justice, the ReDDDoT program serves as both a collaboration and a
covenant between philanthropy and government to center public interest
technology into the future of progress,” said Darren Walker, president
of the Ford Foundation. “This $16 million initiative will cultivate
expertise from public interest technologists across sectors who are
rooted in community and grounded by the belief that innovation, equity
and ethics must equally be the catalysts for technological progress.”

The broad goals of ReDDDoT include:

* Stimulating activity and filling gaps in research, innovation and
capacity building in the responsible design, development, and
deployment of technologies.
* Creating broad and inclusive communities of interest that bring
together key stakeholders to better inform practices for the design,
development, and deployment of technologies.
* Educating and training the science, technology, engineering, and
mathematics workforce on approaches to responsible design,
development, and deployment of technologies. 
* Accelerating pathways to societal and economic benefits while
developing strategies to avoid or mitigate societal and economic harms.
* Empowering communities, including economically disadvantaged and
marginalized populations, to participate in all stages of technology
development, including the earliest stages of ideation and design.

Phase 1 of the program solicits proposals for Workshops, Planning
Grants, or the creation of Translational Research Coordination Networks,
while Phase 2 solicits full project proposals. The initial areas of
focus for 2024 include artificial intelligence, biotechnology or natural
and anthropogenic disaster prevention or mitigation. Future iterations
of the program may consider other key technology focus areas enumerated
in the CHIPS and Science Act.

For more information about ReDDDoT, visit the program website or register for an informational webinar on Feb. 9, 2024, at 2 p.m. ET.

Statements from NSF’s Partners

“The core belief at the heart of ReDDDoT – that technology should be
shaped by ethical, legal, and societal considerations as well as
community values – also drives the work of the Patrick J. McGovern
Foundation to build a human-centered digital future for all. We’re
pleased to support this partnership, committed to advancing the
development of AI, biotechnology, and climate technologies that advance
equity, sustainability, and justice.” – Vilas Dhar, President, Patrick
J. McGovern Foundation

“From generative AI to quantum computing, the pace of technology
development is only accelerating. Too often, technological advances are
not accompanied by discussion and design that considers negative impacts
or unrealized potential. We’re excited to support ReDDDoT as an
opportunity to uplift new and often forgotten perspectives that
critically examine technology’s impact on civic life, and advance Siegel
Family Endowment’s vision of technological change that includes and
improves the lives of all people.” – Katy Knight, President and
Executive Director of Siegel Family Endowment

Only eight months earlier, another big NSF funding project was announced, but that one focused on AI and promoting trust. From a May 4, 2023 University of Maryland (UMD) news release (also on EurekAlert), Note: A link has been removed,

The University of Maryland has been chosen to lead a multi-institutional effort supported by the National Science Foundation (NSF) that will develop new artificial intelligence (AI) technologies designed to promote trust and mitigate risks, while simultaneously empowering and educating the public.

The NSF Institute for Trustworthy AI in Law & Society (TRAILS) announced on May 4, 2023, unites specialists in AI and machine learning with social scientists, legal scholars, educators and public policy experts. The multidisciplinary team will work with impacted communities, private industry and the federal government to determine what trust in AI looks like, how to develop technical solutions for AI that can be trusted, and which policy models best create and sustain trust.

Funded by a $20 million award from NSF, the new institute is expected to transform the practice of AI from one driven primarily by technological innovation to one that is driven by ethics, human rights, and input and feedback from communities whose voices have previously been marginalized.

“As artificial intelligence continues to grow exponentially, we must embrace its potential for helping to solve the grand challenges of our time, as well as ensure that it is used both ethically and responsibly,” said UMD President Darryll J. Pines. “With strong federal support, this new institute will lead in defining the science and innovation needed to harness the power of AI for the benefit of the public good and all humankind.”

In addition to UMD, TRAILS will include faculty members from George Washington University (GW) and Morgan State University, with more support coming from Cornell University, the National Institute of Standards and Technology (NIST), and private sector organizations like the DataedX Group, Arthur AI, Checkstep, FinRegLab and Techstars.

At the heart of establishing the new institute is the consensus that AI is currently at a crossroads. AI-infused systems have great potential to enhance human capacity, increase productivity, catalyze innovation, and mitigate complex problems, but today’s systems are developed and deployed in a process that is opaque and insular to the public, and therefore, often untrustworthy to those affected by the technology.

“We’ve structured our research goals to educate, learn from, recruit, retain and support communities whose voices are often not recognized in mainstream AI development,” said Hal Daumé III, a UMD professor of computer science who is lead principal investigator of the NSF award and will serve as the director of TRAILS.

Inappropriate trust in AI can result in many negative outcomes, Daumé said. People often “overtrust” AI systems to do things they’re fundamentally incapable of. This can lead to people or organizations giving up their own power to systems that are not acting in their best interest. At the same time, people can also “undertrust” AI systems, leading them to avoid using systems that could ultimately help them.

Given these conditions—and the fact that AI is increasingly being deployed to mediate society’s online communications, determine health care options, and offer guidelines in the criminal justice system—it has become urgent to ensure that people’s trust in AI systems matches those same systems’ level of trustworthiness.

TRAILS has identified four key research thrusts to promote the development of AI systems that can earn the public’s trust through broader participation in the AI ecosystem.

The first, known as participatory AI, advocates involving human stakeholders in the development, deployment and use of these systems. It aims to create technology in a way that aligns with the values and interests of diverse groups of people, rather than being controlled by a few experts or solely driven by profit.

Leading the efforts in participatory AI is Katie Shilton, an associate professor in UMD’s College of Information Studies who specializes in ethics and sociotechnical systems. Tom Goldstein, a UMD associate professor of computer science, will lead the institute’s second research thrust, developing advanced machine learning algorithms that reflect the values and interests of the relevant stakeholders.

Daumé, Shilton and Goldstein all have appointments in the University of Maryland Institute for Advanced Computer Studies, which is providing administrative and technical support for TRAILS.

David Broniatowski, an associate professor of engineering management and systems engineering at GW, will lead the institute’s third research thrust of evaluating how people make sense of the AI systems that are developed, and the degree to which their levels of reliability, fairness, transparency and accountability will lead to appropriate levels of trust. Susan Ariel Aaronson, a research professor of international affairs at GW, will use her expertise in data-driven change and international data governance to lead the institute’s fourth thrust of participatory governance and trust.

Virginia Byrne, an assistant professor of higher education and student affairs at Morgan State, will lead community-driven projects related to the interplay between AI and education. According to Daumé, the TRAILS team will rely heavily on Morgan State’s leadership—as Maryland’s preeminent public urban research university—in conducting rigorous, participatory community-based research with broad societal impacts.

Additional academic support will come from Valerie Reyna, a professor of human development at Cornell, who will use her expertise in human judgment and cognition to advance efforts focused on how people interpret their use of AI.

Federal officials at NIST will collaborate with TRAILS in the development of meaningful measures, benchmarks, test beds and certification methods—particularly as they apply to important topics essential to trust and trustworthiness such as safety, fairness, privacy, transparency, explainability, accountability, accuracy and reliability.

“The ability to measure AI system trustworthiness and its impacts on individuals, communities and society is limited. TRAILS can help advance our understanding of the foundations of trustworthy AI, ethical and societal considerations of AI, and how to build systems that are trusted by the people who use and are affected by them,” said Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio.

Today’s announcement [May 4, 2023] is the latest in a series of federal grants establishing a cohort of National Artificial Intelligence Research Institutes. This recent investment in seven new AI institutes, totaling $140 million, follows two previous rounds of awards.

“Maryland is at the forefront of our nation’s scientific innovation thanks to our talented workforce, top-tier universities, and federal partners,” said U.S. Sen. Chris Van Hollen (D-Md.). “This National Science Foundation award for the University of Maryland—in coordination with other Maryland-based research institutions including Morgan State University and NIST—will promote ethical and responsible AI development, with the goal of helping us harness the benefits of this powerful emerging technology while limiting the potential risks it poses. This investment entrusts Maryland with a critical priority for our shared future, recognizing the unparalleled ingenuity and world-class reputation of our institutions.” 

The NSF, in collaboration with government agencies and private sector leaders, has now invested close to half a billion dollars in the AI institutes ecosystem—an investment that expands a collaborative AI research network into almost every U.S. state.

“The National AI Research Institutes are a critical component of our nation’s AI innovation, infrastructure, technology, education and partnerships ecosystem,” said NSF Director Sethuraman Panchanathan. “[They] are driving discoveries that will ensure our country is at the forefront of the global AI revolution.”

As noted in the UMD news release, this funding is part of a ‘bundle’. Here’s more from the May 4, 2023 US NSF news release announcing the full $140 million funding program, Note: Links have been removed,

The U.S. National Science Foundation, in collaboration with other federal agencies, higher education institutions and other stakeholders, today announced a $140 million investment to establish seven new National Artificial Intelligence Research Institutes. The announcement is part of a broader effort across the federal government to advance a cohesive approach to AI-related opportunities and risks.

The new AI Institutes will advance foundational AI research that promotes ethical and trustworthy AI systems and technologies, develop novel approaches to cybersecurity, contribute to innovative solutions to climate change, expand the understanding of the brain, and leverage AI capabilities to enhance education and public health. The institutes will support the development of a diverse AI workforce in the U.S. and help address the risks and potential harms posed by AI. This investment means NSF and its funding partners have now invested close to half a billion dollars in the AI Institutes research network, which reaches almost every U.S. state.

“The National AI Research Institutes are a critical component of our nation’s AI innovation, infrastructure, technology, education and partnerships ecosystem,” said NSF Director Sethuraman Panchanathan. “These institutes are driving discoveries that will ensure our country is at the forefront of the global AI revolution.”

“These strategic federal investments will advance American AI infrastructure and innovation, so that AI can help tackle some of the biggest challenges we face, from climate change to health. Importantly, the growing network of National AI Research Institutes will promote responsible innovation that safeguards people’s safety and rights,” said White House Office of Science and Technology Policy Director Arati Prabhakar.

The new AI Institutes are interdisciplinary collaborations among top AI researchers and are supported by co-funding from the U.S. Department of Commerce’s National Institutes of Standards and Technology (NIST); U.S. Department of Homeland Security’s Science and Technology Directorate (DHS S&T); U.S. Department of Agriculture’s National Institute of Food and Agriculture (USDA-NIFA); U.S. Department of Education’s Institute of Education Sciences (ED-IES); U.S. Department of Defense’s Office of the Undersecretary of Defense for Research and Engineering (DoD OUSD R&E); and IBM Corporation (IBM).

“Foundational research in AI and machine learning has never been more critical to the understanding, creation and deployment of AI-powered systems that deliver transformative and trustworthy solutions across our society,” said NSF Assistant Director for Computer and Information Science and Engineering Margaret Martonosi. “These recent awards, as well as our AI Institutes ecosystem as a whole, represent our active efforts in addressing national economic and societal priorities that hinge on our nation’s AI capability and leadership.”

The new AI Institutes focus on six research themes:

Trustworthy AI

NSF Institute for Trustworthy AI in Law & Society (TRAILS)

Led by the University of Maryland, TRAILS aims to transform the practice of AI from one driven primarily by technological innovation to one driven with attention to ethics, human rights and support for communities whose voices have been marginalized into mainstream AI. TRAILS will be the first institute of its kind to integrate participatory design, technology, and governance of AI systems and technologies and will focus on investigating what trust in AI looks like, whether current technical solutions for AI can be trusted, and which policy models can effectively sustain AI trustworthiness. TRAILS is funded by a partnership between NSF and NIST.

Intelligent Agents for Next-Generation Cybersecurity

AI Institute for Agent-based Cyber Threat Intelligence and Operation (ACTION)

Led by the University of California, Santa Barbara, this institute will develop novel approaches that leverage AI to anticipate and take corrective actions against cyberthreats that target the security and privacy of computer networks and their users. The team of researchers will work with experts in security operations to develop a revolutionary approach to cybersecurity, in which AI-enabled intelligent security agents cooperate with humans across the cyberdefense life cycle to jointly improve the resilience of security of computer systems over time. ACTION is funded by a partnership between NSF, DHS S&T, and IBM.

Climate Smart Agriculture and Forestry

AI Institute for Climate-Land Interactions, Mitigation, Adaptation, Tradeoffs and Economy (AI-CLIMATE)

Led by the University of Minnesota Twin Cities, this institute aims to advance foundational AI by incorporating knowledge from agriculture and forestry sciences and leveraging these unique, new AI methods to curb climate effects while lifting rural economies. By creating a new scientific discipline and innovation ecosystem intersecting AI and climate-smart agriculture and forestry, our researchers and practitioners will discover and invent compelling AI-powered knowledge and solutions. Examples include AI-enhanced estimation methods of greenhouse gases and specialized field-to-market decision support tools. A key goal is to lower the cost of and improve accounting for carbon in farms and forests to empower carbon markets and inform decision making. The institute will also expand and diversify rural and urban AI workforces. AI-CLIMATE is funded by USDA-NIFA.

Neural and Cognitive Foundations of Artificial Intelligence

AI Institute for Artificial and Natural Intelligence (ARNI)

Led by Columbia University, this institute will draw together top researchers across the country to focus on a national priority: connecting the major progress made in AI systems to the revolution in our understanding of the brain. ARNI will meet the urgent need for new paradigms of interdisciplinary research between neuroscience, cognitive science and AI. This will accelerate progress in all three fields and broaden the transformative impact on society in the next decade. ARNI is funded by a partnership between NSF and DoD OUSD R&E.

AI for Decision Making

AI Institute for Societal Decision Making (AI-SDM)

Led by Carnegie Mellon University, this institute seeks to create human-centric AI for decision making to bolster effective response in uncertain, dynamic and resource-constrained scenarios like disaster management and public health. By bringing together an interdisciplinary team of AI and social science researchers, AI-SDM will enable emergency managers, public health officials, first responders, community workers and the public to make decisions that are data driven, robust, agile, resource efficient and trustworthy. The vision of the institute will be realized via development of AI theory and methods, translational research, training and outreach, enabled by partnerships with diverse universities, government organizations, corporate partners, community colleges, public libraries and high schools.

AI-Augmented Learning to Expand Education Opportunities and Improve Outcomes

AI Institute for Inclusive Intelligent Technologies for Education (INVITE)

Led by the University of Illinois Urbana-Champaign, this institute seeks to fundamentally reframe how educational technologies interact with learners by developing AI tools and approaches to support three crucial noncognitive skills known to underlie effective learning: persistence, academic resilience and collaboration. The institute’s use-inspired research will focus on how children communicate STEM content, how they learn to persist through challenging work, and how teachers support and promote noncognitive skill development. The resultant AI-based tools will be integrated into classrooms to empower teachers to support learners in more developmentally appropriate ways.

AI Institute for Exceptional Education (AI4ExceptionalEd)

Led by the University at Buffalo, this institute will work toward universal speech and language screening for children. The framework, the AI screener, will analyze video and audio streams of children during classroom interactions and assess the need for evidence-based interventions tailored to individual needs of students. The institute will serve children in need of ability-based speech and language services, advance foundational AI technologies and enhance understanding of childhood speech and language development. The AI Institute for Exceptional Education was previously announced in January 2023. The INVITE and AI4ExceptionalEd institutes are funded by a partnership between NSF and ED-IES.

Statements from NSF’s Federal Government Funding Partners

“Increasing AI system trustworthiness while reducing its risks will be key to unleashing AI’s potential benefits and ensuring our shared societal values,” said Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio. “Today, the ability to measure AI system trustworthiness and its impacts on individuals, communities and society is limited. TRAILS can help advance our understanding of the foundations of trustworthy AI, ethical and societal considerations of AI, and how to build systems that are trusted by the people who use and are affected by them.”

“The ACTION Institute will help us better assess the opportunities and risks of rapidly evolving AI technology and its impact on DHS missions,” said Dimitri Kusnezov, DHS under secretary for science and technology. “This group of researchers and their ambition to push the limits of fundamental AI and apply new insights represents a significant investment in cybersecurity defense. These partnerships allow us to collectively remain on the forefront of leading-edge research for AI technologies.”

“In the tradition of USDA National Institute of Food and Agriculture investments, this new institute leverages the scientific power of U.S. land-grant universities informed by close partnership with farmers, producers, educators and innovators to address the grand challenge of rising greenhouse gas concentrations and associated climate change,” said Acting NIFA Director Dionne Toombs. “This innovative center will address the urgent need to counter climate-related threats, lower greenhouse gas emissions, grow the American workforce and increase new rural opportunities.”

“The leading-edge in AI research inevitably draws from our, so far, limited understanding of human cognition. This AI Institute seeks to unify the fields of AI and neuroscience to bring advanced designs and approaches to more capable and trustworthy AI, while also providing better understanding of the human brain,” said Bindu Nair, director, Basic Research Office, Office of the Undersecretary of Defense for Research and Engineering. “We are proud to partner with NSF in this critical field of research, as continued advancement in these areas holds the potential for further and significant benefits to national security, the economy and improvements in quality of life.”

“We are excited to partner with NSF on these two AI institutes,” said IES Director Mark Schneider. “We hope that they will provide valuable insights into how to tap modern technologies to improve the education sciences — but more importantly we hope that they will lead to better student outcomes and identify ways to free up the time of teachers to deliver more informed individualized instruction for the students they care so much about.” 

Learn more about the NSF AI Institutes by visiting nsf.gov.

Two things I noticed, (1) No mention of including ethics training or concepts in science and technology education and (2) No mention of integrating ethics and social issues into any of the AI Institutes. So, it seems that ‘Responsible Design, Development and Deployment of Technologies (ReDDDoT)’ occupies its own fiefdom.

Some sobering thoughts

Things can go terribly wrong with new technology as seen in the British television hit series, Mr. Bates vs. The Post Office (based on a true story), from a January 9, 2024 posting by Ani Blundel for tellyvisions.org,

… what is this show that’s caused the entire country to rise up as one to defend the rights of the lowly sub-postal worker? Known as the “British Post Office scandal,” the incidents first began in 1999 when the U.K. postal system began to switch to digital systems, using the Horizon Accounting system to track the monies brought in. However, the IT system was faulty from the start, and rather than blame the technology, the British government accused, arrested, persecuted, and convicted over 700 postal workers of fraud and theft. This continued through 2015 when the glitch was finally recognized, and in 2019, the convictions were ruled to be a miscarriage of justice.

Here’s the series synopsis:

The drama tells the story of one of the greatest miscarriages of justice in British legal history. Hundreds of innocent sub-postmasters and postmistresses were wrongly accused of theft, fraud, and false accounting due to a defective IT system. Many of the wronged workers were prosecuted, some of whom were imprisoned for crimes they never committed, and their lives were irreparably ruined by the scandal. Following the landmark Court of Appeal decision to overturn their criminal convictions, dozens of former sub-postmasters and postmistresses have been exonerated on all counts as they battled to finally clear their names. They fought for over ten years, finally proving their innocence and sealing a resounding victory, but all involved believe the fight is not over yet, not by a long way.

Here’s a video trailer for ‘Mr. Bates vs. The Post Office’,

More from Blundel’s January 9, 2024 posting, Note: A link has been removed,

The outcry from the general public against the government’s bureaucratic mismanagement and abuse of employees has been loud and sustained enough that Prime Minister Rishi Sunak had to come out with a statement condemning what happened back during the 2009 incident. Further, the current Justice Secretary, Alex Chalk, is now trying to figure out the fastest way to exonerate the hundreds of sub-post managers and sub-postmistresses who were wrongfully convicted back then and if there are steps to be taken to punish the post office a decade later.

It’s a horrifying story and the worst I’ve seen so far but, sadly, it’s not the only one of its kind.

Too often people’s concerns and worries about new technology are dismissed or trivialized. Somehow, all the work done to establish ethical standards and develop trust seems to be used as a kind of sop to the concerns rather than being integrated into the implementation of life-altering technologies.

UK AI Summit (November 1 – 2, 2023) at Bletchley Park finishes

This is the closest I’ve ever gotten to writing a gossip column, at least for the first half of this post (see my October 18, 2023 posting and scroll down to the “Insight into political jockeying [i.e., some juicy news bits]” subhead).

Given the role that Canadian researchers have played in the development of artificial intelligence (AI; for more about that see my May 25, 2023 posting and scroll down to “The Panic” subhead), it’s been surprising that the Canadian Broadcasting Corporation (CBC) has given very little coverage to the event in the UK. However, there is an October 31, 2023 article by Kelvin Chan and Jill Lawless for the Associated Press posted on the CBC website,

Digital officials, tech company bosses and researchers are converging Wednesday [November 1, 2023] at a former codebreaking spy base [Bletchley Park] near London [UK] to discuss and better understand the extreme risks posed by cutting-edge artificial intelligence.

The two-day summit focusing on so-called frontier AI notched up an early achievement with officials from 28 nations and the European Union signing an agreement on safe and responsible development of the technology.

Frontier AI is shorthand for the latest and most powerful general purpose systems that take the technology right up to its limits, but could come with as-yet-unknown dangers. They’re underpinned by foundation models, which power chatbots like OpenAI’s ChatGPT and Google’s Bard and are trained on vast pools of information scraped from the internet.

The AI Safety Summit is a labour of love for British Prime Minister Rishi Sunak, a tech-loving former banker who wants the U.K. to be a hub for computing innovation and has framed the summit as the start of a global conversation about the safe development of AI.[emphasis mine]

But U.S. Vice President Kamala Harris may divert attention Wednesday [November 1, 2023] with a separate speech in London setting out the Biden administration’s more hands-on approach.

Canada’s Minister of Innovation, Science and Industry Francois-Philippe Champagne said AI would not be constrained by national borders, and therefore interoperability between different regulations being put in place was important.

As the meeting began, U.K. Technology Secretary Michelle Donelan announced that the 28 countries and the European Union had signed the Bletchley Declaration on AI Safety. It outlines the “urgent need to understand and collectively manage potential risks through a new joint global effort.”

South Korea has agreed to host a mini virtual AI summit in six months, followed by an in-person one in France in a year’s time, the U.K. government said.

Chris Stokel-Walker’s October 31, 2023 article for Fast Company presents a critique of the summit prior to the opening, Note: Links have been removed,

… one problem, critics say: The summit, which begins on November 1, is too insular and its participants are homogeneous—an especially damning critique for something that’s trying to tackle the huge, possibly intractable questions around AI. The guest list is made up of 100 of the great and good of governments, including representatives from China, Europe, and Vice President Kamala Harris. And it also includes luminaries within the tech sector. But precious few others—which means a lack of diversity in discussions about the impact of AI.

“Self-regulation didn’t work for social media companies, it didn’t work for the finance sector, and it won’t work for AI,” says Carsten Jung, a senior economist at the Institute for Public Policy Research, a progressive think tank that recently published a report advising on key policy pillars it believes should be discussed at the summit. (Jung isn’t on the guest list.) “We need to learn lessons from our past mistakes and create a strong supervisory hub for all things AI, right from the start.”

Kriti Sharma, chief product officer for legal tech at Thomson Reuters, who will be watching from the wings, not receiving an invite, is similarly circumspect about the goals of the summit. “I hope to see leaders moving past the doom to take practical steps to address known issues and concerns in AI, giving businesses the clarity they urgently need,” she says. “Ideally, I’d like to see movement towards putting some fundamental AI guardrails in place, in the form of a globally aligned, cross-industry regulatory framework.”

But it’s uncertain whether the summit will indeed discuss the more practical elements of AI. Already it seems as if the gathering is designed to quell public fears around AI while convincing those developing AI products that the U.K. will not take too strong an approach in regulating the technology, perhaps in contrast to its near neighbors in the European Union, who have been open about their plans to ensure the technology is properly fenced in to ensure user safety.

Already, there are suggestions that the summit has been drastically downscaled in its ambitions, overtaken by others, including the United States, where President Biden just announced a sweeping executive order on AI, and the United Nations, which announced its AI advisory board last week.

Ingrid Lunden in her October 31, 2023 article for TechCrunch is more blunt,

As we wrote yesterday, the U.K. is partly using this event — the first of its kind, as it has pointed out — to stake out a territory for itself on the AI map — both as a place to build AI businesses, but also as an authority in the overall field.

That, coupled with the fact that the topics and approach are focused on potential issues, makes the affair feel like one very grand photo opportunity and PR exercise, a way for the government to show itself off in the most positive way at the same time that it slides down in the polls and it also faces a disastrous, bad-look inquiry into how it handled the COVID-19 pandemic. On the other hand, the U.K. does have the credentials for a seat at the table, so if the government is playing a hand here, it’s able to do it because its cards are strong.

The subsequent guest list, predictably, leans more toward organizations and attendees from the U.K. It’s also almost as revealing to see who is not participating.

Lunden’s October 30, 2023 article “Existential risk? Regulatory capture? AI for one and all? A look at what’s going on with AI in the UK” includes a little ‘inside’ information,

That high-level aspiration is also reflected in who is taking part: top-level government officials, captains of industry, and notable thinkers in the space are among those expected to attend. (Latest late entry: Elon Musk; latest no’s reportedly include President Biden, Justin Trudeau and Olaf Scholz.) [Scholz’s no was mentioned in my October 18, 2023 posting]

It sounds exclusive, and it is: “Golden tickets” (as Azeem Azhar, a London-based tech founder and writer, describes them) to the Summit are in scarce supply. Conversations will be small and mostly closed. So because nature abhors a vacuum, a whole raft of other events and news developments have sprung up around the Summit, looping in the many other issues and stakeholders at play. These have included talks at the Royal Society (the U.K.’s national academy of sciences); a big “AI Fringe” conference that’s being held across multiple cities all week; many announcements of task forces; and more.

Earlier today, a group of 100 trade unions and rights campaigners sent a letter to the prime minister saying that the government is “squeezing out” their voices in the conversation by not having them be a part of the Bletchley Park event. (They may not have gotten their golden tickets, but they were definitely canny about how they objected: The group publicized its letter by sharing it with no less than the Financial Times, the most elite of economic publications in the country.)

And normal people are not the only ones who have been snubbed. “None of the people I know have been invited,” Carissa Véliz, a tutor in philosophy at the University of Oxford, said during one of the AI Fringe events today [October 30, 2023].

More broadly, the summit has become an anchor and only one part of the bigger conversation going on right now. Last week, U.K. prime minister Rishi Sunak outlined an intention to launch a new AI safety institute and a research network in the U.K. to put more time and thought into AI implications; a group of prominent academics, led by Yoshua Bengio [University of Montreal, Canada] and Geoffrey Hinton [University of Toronto, Canada], published a paper called “Managing AI Risks in an Era of Rapid Progress” to put their collective oar into the waters; and the UN announced its own task force to explore the implications of AI. Today [October 30, 2023], U.S. president Joe Biden issued the country’s own executive order to set standards for AI security and safety.

There are a couple more articles* from the BBC (British Broadcasting Corporation) covering the start of the summit, a November 1, 2023 article by Zoe Kleinman & Tom Gerken, “King Charles: Tackle AI risks with urgency and unity” and another November 1, 2023 article this time by Tom Gerken & Imran Rahman-Jones, “Rishi Sunak: AI firms cannot ‘mark their own homework‘.”

Politico offers more US-centric coverage of the event with a November 1, 2023 article by Mark Scott, Tom Bristow and Gian Volpicelli, “US and China join global leaders to lay out need for AI rulemaking,” a November 1, 2023 article by Vincent Manancourt and Eugene Daniels, “Kamala Harris seizes agenda as Rishi Sunak’s AI summit kicks off,” and a November 1, 2023 article by Vincent Manancourt, Eugene Daniels and Brendan Bordelon, “‘Existential to who[m]?’ US VP Kamala Harris urges focus on near-term AI risks.”

I want to draw special attention to the second Politico article,

Kamala just showed Rishi who’s boss.

As British Prime Minister Rishi Sunak’s showpiece artificial intelligence event kicked off in Bletchley Park on Wednesday, 50 miles south in the futuristic environs of the American Embassy in London, U.S. Vice President Kamala Harris laid out her vision for how the world should govern artificial intelligence.

It was a raw show of U.S. power on the emerging technology.

Did she or was this an aggressive interpretation of events?

*’article’ changed to ‘articles’ on January 17, 2024.

AI safety talks at Bletchley Park in November 2023

There’s a very good article about the upcoming AI (artificial intelligence) safety talks on the British Broadcasting Corporation (BBC) news website (plus some juicy, perhaps even gossipy, news about who may not be attending the event) but first, here’s the August 24, 2023 UK government press release making the announcement,

Iconic Bletchley Park to host UK AI Safety Summit in early November [2023]

Major global event to take place on the 1st and 2nd of November [2023].

– UK to host world first summit on artificial intelligence safety in November

– Talks will explore and build consensus on rapid, international action to advance safety at the frontier of AI technology

– Bletchley Park, one of the birthplaces of computer science, to host the summit

International governments, leading AI companies and experts in research will unite for crucial talks in November on the safe development and use of frontier AI technology, as the UK Government announces Bletchley Park as the location for the UK summit.

The major global event will take place on the 1st and 2nd November to consider the risks of AI, especially at the frontier of development, and discuss how they can be mitigated through internationally coordinated action. Frontier AI models hold enormous potential to power economic growth, drive scientific progress and wider public benefits, while also posing potential safety risks if not developed responsibly.

To be hosted at Bletchley Park in Buckinghamshire, a significant location in the history of computer science development and once the home of British Enigma codebreaking – it will see coordinated action to agree a set of rapid, targeted measures for furthering safety in global AI use.

Preparations for the summit are already in full flow, with Matt Clifford and Jonathan Black recently appointed as the Prime Minister’s Representatives. Together they’ll spearhead talks and negotiations, as they rally leading AI nations and experts over the next three months to ensure the summit provides a platform for countries to work together on further developing a shared approach to agree the safety measures needed to mitigate the risks of AI.

Prime Minister Rishi Sunak said:

“The UK has long been home to the transformative technologies of the future, so there is no better place to host the first ever global AI safety summit than at Bletchley Park this November.

To fully embrace the extraordinary opportunities of artificial intelligence, we must grip and tackle the risks to ensure it develops safely in the years ahead.

With the combined strength of our international partners, thriving AI industry and expert academic community, we can secure the rapid international action we need for the safe and responsible development of AI around the world.”

Technology Secretary Michelle Donelan said:

“International collaboration is the cornerstone of our approach to AI regulation, and we want the summit to result in leading nations and experts agreeing on a shared approach to its safe use.

The UK is consistently recognised as a world leader in AI and we are well placed to lead these discussions. The location of Bletchley Park as the backdrop will reaffirm our historic leadership in overseeing the development of new technologies.

AI is already improving lives from new innovations in healthcare to supporting efforts to tackle climate change, and November’s summit will make sure we can all realise the technology’s huge benefits safely and securely for decades to come.”

The summit will also build on ongoing work at international forums including the OECD, Global Partnership on AI, Council of Europe, and the UN and standards-development organisations, as well as the recently agreed G7 Hiroshima AI Process.

The UK boasts strong credentials as a world leader in AI. The technology employs over 50,000 people, directly supports one of the Prime Minister’s five priorities by contributing £3.7 billion to the economy, and the country is the birthplace of leading AI companies such as Google DeepMind. The UK has also invested more in AI safety research than any other nation, backing the creation of the Foundation Model Taskforce with an initial £100 million.

Foreign Secretary James Cleverly said:

“No country will be untouched by AI, and no country alone will solve the challenges posed by this technology. In our interconnected world, we must have an international approach.

The origins of modern AI can be traced back to Bletchley Park. Now, it will also be home to the global effort to shape the responsible use of AI.”

Bletchley Park’s role in hosting the summit reflects the UK’s proud tradition of being at the frontier of new technology advancements. Since Alan Turing’s celebrated work some eight decades ago, computing and computer science have become fundamental pillars of life both in the UK and across the globe.

Iain Standen, CEO of the Bletchley Park Trust, said:

“Bletchley Park Trust is immensely privileged to have been chosen as the venue for the first major international summit on AI safety this November, and we look forward to welcoming the world to our historic site.

It is fitting that the very spot where leading minds harnessed emerging technologies to influence the successful outcome of World War 2 will, once again, be the crucible for international co-ordinated action.

We are incredibly excited to be providing the stage for discussions on global safety standards, which will help everyone manage and monitor the risks of artificial intelligence.”

The roots of AI can be traced back to the leading minds who worked at Bletchley during World War 2, with codebreakers Jack Good and Donald Michie among those who went on to write extensive works on the technology. In November [2023], it will once again take centre stage as the international community comes together to agree on important guardrails which ensure the opportunities of AI can be realised, and its risks safely managed.

The announcement follows the UK government allocating £13 million to revolutionise healthcare research through AI, unveiled last week. The funding supports a raft of new projects including transformations to brain tumour surgeries, new approaches to treating chronic nerve pain, and a system to predict a patient’s risk of developing future health problems based on existing conditions.

Tom Gerken’s August 24, 2023 BBC news article (an analysis by Zoe Kleinman follows as part of the article) fills in a few blanks, Note: Links have been removed,

World leaders will meet with AI companies and experts on 1 and 2 November for the discussions.

The global talks aim to build an international consensus on the future of AI.

The summit will take place at Bletchley Park, where Alan Turing, one of the pioneers of modern computing, worked during World War Two.

It is unknown which world leaders will be invited to the event, with a particular question mark over whether the Chinese government or tech giant Baidu will be in attendance.

The BBC has approached the government for comment.

The summit will address how the technology can be safely developed through “internationally co-ordinated action” but there has been no confirmation of more detailed topics.

It comes after US tech firm Palantir rejected calls to pause the development of AI in June, with its boss Alex Karp saying it was only those with “no products” who wanted a pause.

And in July [2023], children’s charity the Internet Watch Foundation called on Mr Sunak to tackle AI-generated child sexual abuse imagery, which it says is on the rise.

Kleinman’s analysis includes this, Note: A link has been removed,

Will China be represented? Currently there is a distinct east/west divide in the AI world but several experts argue this is a tech that transcends geopolitics. Some say a UN-style regulator would be a better alternative to individual territories coming up with their own rules.

If the government can get enough of the right people around the table in early November [2023], this is perhaps a good subject for debate.

Three US AI giants – OpenAI, Anthropic and Palantir – have all committed to opening London headquarters.

But there are others going in the opposite direction – British DeepMind co-founder Mustafa Suleyman chose to locate his new AI company InflectionAI in California. He told the BBC the UK needed to cultivate a more risk-taking culture in order to truly become an AI superpower.

Many of those who worked at Bletchley Park decoding messages during WW2 went on to write and speak about AI in later years, including codebreakers Irving John “Jack” Good and Donald Michie.

Soon after the War, [Alan] Turing proposed the imitation game – later dubbed the “Turing test” – which seeks to identify whether a machine can behave in a way indistinguishable from a human.

There is a Bletchley Park website, which sells tickets for tours.

Insight into political jockeying (i.e., some juicy news bits)

This was recently reported by the BBC in an October 17 (?), 2023 news article by Jessica Parker & Zoe Kleinman on BBC news online,

German Chancellor Olaf Scholz may turn down his invitation to a major UK summit on artificial intelligence, the BBC understands.

While no guest list has been published of an expected 100 participants, some within the sector say it’s unclear if the event will attract top leaders.

A government source insisted the summit is garnering “a lot of attention” at home and overseas.

The two-day meeting is due to bring together leading politicians as well as independent experts and senior execs from the tech giants, who are mainly US based.

The first day will bring together tech companies and academics for a discussion chaired by the Secretary of State for Science, Innovation and Technology, Michelle Donelan.

The second day is set to see a “small group” of people, including international government figures, in meetings run by PM Rishi Sunak.

Though no final decision has been made, it is now seen as unlikely that the German Chancellor will attend.

That could spark concerns of a “domino effect” with other world leaders, such as the French President Emmanuel Macron, also unconfirmed.

Government sources say there are heads of state who have signalled a clear intention to turn up, and the BBC understands that high-level representatives from many US-based tech giants are going.

The foreign secretary confirmed in September [2023] that a Chinese representative has been invited, despite controversy.

Some MPs within the UK’s ruling Conservative Party believe China should be cut out of the conference after a series of security rows.

It is not known whether there has been a response to the invitation.

China is home to a huge AI sector and has already created its own set of rules to govern responsible use of the tech within the country.

The US, a major player in the sector and the world’s largest economy, will be represented by Vice-President Kamala Harris.

Britain is hoping to position itself as a key broker as the world wrestles with the potential pitfalls and risks of AI.

However, Berlin is thought to want to avoid any messy overlap with G7 efforts, after the group of leading democratic countries agreed to create an international code of conduct.

Germany is also the biggest economy in the EU – which is itself aiming to finalise its own landmark AI Act by the end of this year.

It includes grading AI tools depending on how significant they are, so for example an email filter would be less tightly regulated than a medical diagnosis system.

The European Commission President Ursula von der Leyen is expected at next month’s summit, while it is possible Berlin could send a senior government figure such as its vice chancellor, Robert Habeck.

A source from the Department for Science, Innovation and Technology said: “This is the first time an international summit has focused on frontier AI risks and it is garnering a lot of attention at home and overseas.

“It is usual not to confirm senior attendance at major international events until nearer the time, for security reasons.”

Fascinating, eh?

The 2023 Canadian federal budget: science & technology of the military and cybersecurity and some closing comments (2 of 2)

So far in part 1, the budget continues its focus on life sciences, especially biomanufacturing, and climate (clean energy, clean (?) manufacturing, mining, space, the forest economy, agriculture, and water). The budget also mentioned funding for the research councils (it’s been a while since I’ve seen them in a budget document).

Now, it’s time to focus on the military and cybersecurity. As noted in part 1 (third paragraph of my introduction), the military is often the source for technology (e.g., television, the internet, modern anesthesiology) that transforms society.

Chapter 5: Canada’s Leadership in the World

Finally, for this blog posting, there’s Chapter 5, from https://www.budget.canada.ca/2023/report-rapport/toc-tdm-en.html,

Key Ongoing Actions

In the past year, the federal government has announced a series of investments that have enhanced Canada’s security and our leadership around the world. These include:

  • $38.6 billion over 20 years to invest in the defence of North America and the modernization of NORAD [North American Aerospace Defense Command; emphasis mine];
  • More than $5.4 billion in assistance for Ukraine, including critical financial, military, and humanitarian support;
  • More than $545 million in emergency food and nutrition assistance in 2022-23 to help address the global food security crisis and respond to urgent hunger and nutrition needs;
  • $2.3 billion over the next five years to launch Canada’s Indo-Pacific Strategy [emphasis mine], including the further global capitalization of FinDev Canada [also known as Development Finance Institute Canada {DFIC}], which will deepen Canada’s engagement with our partners, support economic growth and regional security, and strengthen our ties with people in the Indo-Pacific;
  • $350 million over three years in international biodiversity financing, in addition to Canada’s commitment to provide $5.3 billion in climate financing over five years, to support developing countries’ efforts to protect nature;
  • $875 million over five years, and $238 million ongoing, to enhance Canada’s cybersecurity capabilities [emphasis mine];
  • Delivering on a commitment to spend $1.4 billion each year on global health, of which $700 million will be dedicated to sexual and reproductive health and rights for women and girls; and,
  • Channeling almost 30 per cent of Canada’s newly allocated International Monetary Fund (IMF) Special Drawing Rights to support low-income and vulnerable countries, surpassing the G7’s 20 per cent target.

Defence Policy Update

In response to a changed global security environment following Russia’s illegal invasion of Ukraine, the federal government committed in Budget 2022 to a Defence Policy Update that would update Canada’s existing defence policy, Strong, Secure, Engaged.

This review, including public consultations, is ongoing, and is focused on the roles, responsibilities, and capabilities of the Canadian Armed Forces. The Department of National Defence will return with a Defence Policy Update [emphasis mine] that will ensure the Canadian Armed Forces remain strong at home, secure in North America, and engaged around the world.

With this review ongoing, the Canadian Armed Forces have continued to protect Canada’s sovereignty in the Arctic, support our NATO allies in Eastern Europe, and contribute to operations in the Indo-Pacific.

In the past year, the government has made significant, foundational investments in Canada’s national defence, which total more than $55 billion over 20 years. These include:

  • $38.6 billion over 20 years to strengthen the defence of North America, reinforce Canada’s support of our partnership with the United States under NORAD, and protect our sovereignty in the North [emphases mine];
  • $2.1 billion over seven years, starting in 2022-23, and $706.0 million ongoing for Canada’s contribution to increasing NATO’s [North Atlantic Treaty Organization] common budget [emphasis mine];
  • $1.4 billion over 14 years, starting in 2023-24, to acquire new critical weapons systems needed to protect the Canadian Armed Forces in case of high intensity conflict, including air defence, anti-tank, and anti-drone capabilities;
  • $605.8 million over five years, starting in 2023-24, with $2.6 million in remaining amortization, to replenish the Canadian Armed Forces’ stocks of ammunition and explosives, and to replace materiel donated to Ukraine;
  • $562.2 million over six years, starting in 2022-23, with $112.0 million in remaining amortization, and $69 million ongoing to improve the digital systems of the Canadian Armed Forces;
  • Up to $90.4 million over five years, starting in 2022-23, to further support initiatives to increase the capabilities of the Canadian Armed Forces; and,
  • $30.1 million over four years, starting in 2023-24, and $10.4 million ongoing to establish the new North American regional office in Halifax for NATO’s Defence Innovation Accelerator for the North Atlantic [emphasis mine].

In addition, the government is providing $1.4 billion to upgrade the facilities of Joint Task Force 2, Canada’s elite counterterrorism unit.

Establishing the NATO Climate Change and Security Centre of Excellence in Montreal

Climate change has repercussions for people, economic security, public safety, and critical infrastructure around the world. It also poses a significant threat to global security, and in 2022, NATO’s new Strategic Concept recognized climate change for the first time as a major security challenge for the Alliance.

At the 2022 NATO Summit in Madrid, Montreal was announced as the host city for NATO’s new Climate Change and Security Centre of Excellence, which will bring together NATO allies to mitigate the impact of climate change on military activities and analyze new climate change-driven security challenges, such as the implications for Canada’s Arctic.

  • Budget 2023 proposes to provide $40.4 million over five years, starting in 2023-24, with $0.3 million in remaining amortization and $7 million ongoing, to Global Affairs Canada and the Department of National Defence to establish the NATO Climate Change and Security Centre of Excellence [emphasis mine].

Protecting Diaspora Communities and All Canadians From Foreign Interference, Threats, and Covert Activities

As an advanced economy and a free and diverse democracy, Canada’s strengths also make us a target for hostile states seeking to acquire information and technology [emphasis mine], intelligence, and influence to advance their own interests.

This can include foreign actors working to steal information from Canadian companies to benefit their domestic industries, hostile proxies intimidating diaspora communities in Canada because of their beliefs and values, or intelligence officers seeking to infiltrate Canada’s public and research institutions.

Authoritarian regimes, such as Russia, China, and Iran, believe they can act with impunity and meddle in the affairs of democracies—and democracies must act to defend ourselves. No one in Canada should ever be threatened by foreign actors, and Canadian businesses and Canada’s public institutions must be free of foreign interference.

  • Budget 2023 proposes to provide $48.9 million over three years on a cash basis, starting in 2023-24, to the Royal Canadian Mounted Police to protect Canadians from harassment and intimidation, increase its investigative capacity, and more proactively engage with communities at greater risk of being targeted. 
  • Budget 2023 proposes to provide $13.5 million over five years, starting in 2023-24, and $3.1 million ongoing to Public Safety Canada to establish a National Counter-Foreign Interference Office.

5.4 Combatting Financial Crime

Serious financial crimes, such as money laundering, terrorist financing, and the evasion of financial sanctions, threaten the safety of Canadians and the integrity of our financial system. Canada requires a comprehensive, responsive, and modern system to counter these sophisticated and rapidly evolving threats.

Canada must not be a financial haven for oligarchs or the kleptocratic apparatchiks of authoritarian, corrupt, or theocratic regimes—such as those of Russia, China, Iran, and Haiti. We will not allow our world-renowned financial system to be used to clandestinely and illegally move money to fund foreign interference inside Canada. [emphases mine]

Since 2019, the federal government has modernized Canada’s Anti-Money Laundering and Anti-Terrorist Financing (AML/ATF) Regime to address risks posed by new technologies and sectors, and made investments to strengthen Canada’s financial intelligence, information sharing, and investigative capacity.

Canada’s AML/ATF Regime must continue to be strengthened in order to combat the complex and evolving threats our democracy faces, and to ensure that Canada is never a haven for illicit financial flows or ill-gotten gains.

In Budget 2023, the government is proposing further important measures to deter, detect, and prosecute financial crimes, protect financial institutions from foreign interference, and protect Canadians from the emerging risks associated with crypto-assets.

Combatting Money Laundering and Terrorist Financing

Money laundering and terrorist financing can threaten the integrity of the Canadian economy, and put Canadians at risk by supporting terrorist activity, drug and human trafficking, and other criminal activities. Taking stronger action to tackle these threats is essential to protecting Canada’s economic security.

In June 2022, the Government of British Columbia released the final report of the Commission of Inquiry into Money Laundering in British Columbia [emphasis mine], also known as the Cullen Commission. This report highlighted major gaps in the current AML/ATF Regime, as well as areas for deepened federal-provincial collaboration [emphasis mine]. Between measures previously introduced and those proposed in Budget 2023, as well as through consultations that the government has committed to launching, the federal government will have responded to all of the recommendations within its jurisdiction in the Cullen Commission report.

In Budget 2023, the federal government is taking action to address gaps in Canada’s AML/ATF Regime, and strengthen cooperation between orders of government.

  • Budget 2023 announces the government’s intention to introduce legislative amendments to the Criminal Code and the Proceeds of Crime (Money Laundering) and Terrorist Financing Act (PCMLTFA) to strengthen the investigative, enforcement, and information sharing tools of Canada’s AML/ATF Regime.

These legislative changes will:

  • Give law enforcement the ability to freeze and seize virtual assets with suspected links to crime;
  • Improve financial intelligence information sharing between law enforcement and the Canada Revenue Agency (CRA), and law enforcement and the Financial Transactions and Reports Analysis Centre of Canada (FINTRAC);
  • Introduce a new offence for structuring financial transactions to avoid FINTRAC reporting;
  • Strengthen the registration framework, including through criminal record checks, for currency dealers and other money services businesses to prevent their abuse;
  • Criminalize the operation of unregistered money services businesses;
  • Establish powers for FINTRAC to disseminate strategic analysis related to the financing of threats to the safety of Canada;
  • Provide whistleblowing protections for employees who report information to FINTRAC;
  • Broaden the use of non-compliance reports by FINTRAC in criminal investigations; and,
  • Set up obligations for the financial sector to report sanctions-related information to FINTRAC.

Strengthening Efforts Against Money Laundering and Terrorist Financing

In keeping with the requirements of the Proceeds of Crime (Money Laundering) and Terrorist Financing Act (PCMLTFA), the federal government will launch a parliamentary review of this act this year.

This review will include a public consultation that will examine ways to improve Canada’s Anti-Money Laundering and Terrorist Financing (AML/ATF) Regime, as well as examine how different orders of government can collaborate more closely. This will include how governments can better use existing tools to seize the proceeds of crime, and the potential need for new measures, such as unexplained wealth orders. Other topics of consultation will include, but will not be limited to, measures to support investigations and prosecutions, enhance information sharing, close regulatory gaps, examine the role of the AML/ATF Regime in protecting national and economic security, as well as the remaining recommendations from the Cullen Commission [also known as, the Commission of Inquiry into Money Laundering in British Columbia].

  • Budget 2023 announces that the government will bring forward further legislative amendments, to be informed by these consultations, to give the government more tools to fight money laundering and terrorist financing.

Canada is also leading the global fight against illicit financial flows, having been chosen to serve for two years, effective July 2023, as Vice President of the Financial Action Task Force (from which Russia has been suspended indefinitely), as well as co-Chair of the Asia/Pacific Group on Money Laundering for two years, [emphases mine] beginning in July 2022. 

Implementing a Publicly Accessible Federal Beneficial Ownership Registry

The use of anonymous Canadian shell companies can conceal the true ownership of property, businesses, and other valuable assets. When authorities don’t have the tools to determine their true ownership, these shell companies can become tools of those seeking to launder money, avoid taxes, evade sanctions, or interfere in our democracy.

To address this, the federal government committed in Budget 2022 to implementing a public, searchable beneficial ownership registry of federal corporations by the end of 2023.

This registry will cover corporations governed under the Canada Business Corporations Act, and will be scalable to allow access to the beneficial ownership data held by provinces and territories that agree to participate in a national registry.

While an initial round of amendments to the Canada Business Corporations Act received Royal Assent in June 2022, further amendments are needed to implement a beneficial ownership registry.

The government is introducing further amendments to the Canada Business Corporations Act and other laws, including the Proceeds of Crime (Money Laundering) and Terrorist Financing Act and the Income Tax Act, to implement a publicly accessible beneficial ownership registry through Bill C-42. This represents a major blow to money laundering operations and will be a powerful tool to strengthen the security and integrity of Canada’s economy.

The federal government will continue calling upon provincial and territorial governments to advance a national approach to beneficial ownership transparency to strengthen the fight against money laundering, tax evasion, and terrorist financing.

Modernizing Financial Sector Oversight to Address Emerging Risks

Canadians must be confident that federally regulated financial institutions and their owners act with integrity, and that Canada’s financial institutions are protected, including from foreign interference.

  • Budget 2023 announces the government’s intention to amend the Bank Act, the Insurance Companies Act, the Trust and Loan Companies Act, the Office of the Superintendent of Financial Institutions Act, and the Proceeds of Crime (Money Laundering) and Terrorist Financing Act (PCMLTFA) to modernize the federal financial framework to address emerging risks to Canada’s financial sector [emphasis mine].

These legislative changes will:

  • Expand the mandate of the Office of the Superintendent of Financial Institutions (OSFI) to include supervising federally regulated financial institutions (FRFIs) in order to determine whether they have adequate policies and procedures to protect themselves against threats to their integrity and security, including protection against foreign interference;
  • Expand the range of circumstances where OSFI can take control of an FRFI to include where the integrity and security of that FRFI is at risk, where all shareholders have been precluded from exercising their voting rights, or where there are national security risks;
  • Expand the existing authority for the Superintendent to issue a direction of compliance to include an act that threatens the integrity and security of an FRFI;
  • Provide new powers under the PCMLTFA to allow the Minister of Finance to impose enhanced due diligence requirements to protect Canada’s financial system from the financing of national security threats, and allow the Director of FINTRAC to share intelligence analysis with the Minister of Finance to help assess national security or financial integrity risks posed by financial entities;
  • Improve the sharing of compliance information between FINTRAC, OSFI, and the Minister of Finance; and,
  • Designate OSFI as a recipient of FINTRAC disclosures pertaining to threats to the security of Canada, where relevant to OSFI’s responsibilities.

The government will also review the mandate of FINTRAC to determine whether it should be expanded to counter sanctions evasion and will provide an update in the 2023 fall economic and fiscal update. In addition, the government will review whether FINTRAC’s mandate should evolve to include the financing of threats to Canada’s national and economic security as part of the parliamentary review.

These actions will continue the strong oversight of the financial sector that underpins a sound and stable Canadian economy.

Canada Financial Crimes Agency

To strengthen Canada’s ability to respond to complex cases of financial crime, Budget 2022 announced the government’s intent to establish a new Canada Financial Crimes Agency (CFCA), and provided $2 million to Public Safety Canada to undertake this work.

The CFCA will become Canada’s lead enforcement agency against financial crime. It will bring together expertise necessary to increase money laundering charges, prosecutions and convictions, and asset forfeiture results in Canada. These actions will address the key operational challenges identified in both domestic and international reviews of Canada’s AML/ATF Regime.

Public Safety Canada is developing options for the design of the CFCA, working in conjunction with federal, provincial and territorial partners and external experts, as well as engaging extensively with stakeholders. Further details on the structure and mandate of the CFCA will be provided by the 2023 fall economic and fiscal update.

Protecting Canadians from the Risks of Crypto-Assets

Ongoing turbulence in crypto-asset markets, and the recent high-profile failures of crypto trading platform FTX, and of Signature Bank, have demonstrated that crypto-assets can threaten the financial well-being of people, national security, and the stability and integrity of the global financial system.

To protect Canadians from the risks that come with crypto-assets, there is a clear need for different orders of government to take an active role in addressing consumer protection gaps and risks to our financial system.

The federal government is working closely with regulators and provincial and territorial partners to protect Canadians’ hard-earned savings and pensions, and Budget 2023 proposes new measures to protect Canadians.

  • To help protect Canadians’ savings and the security of our financial sector, Budget 2023 announces that the Office of the Superintendent of Financial Institutions (OSFI) will consult federally regulated financial institutions on guidelines for publicly disclosing their exposure to crypto-assets.

Secure pension plans are the cornerstone of a dignified retirement. While pension plan administrators are required to prudently manage their investments, the unique nature and evolving risks of crypto-assets and related activities require continued monitoring.

  • To help protect Canadians’ retirements, Budget 2023 announces that the government will require federally regulated pension funds to disclose their crypto-asset exposures to OSFI. The government will also work with provinces and territories to discuss crypto-asset or related activities disclosures by Canada’s largest pension plans, which would ensure Canadians are aware of their pension plan’s potential exposure to crypto-assets.
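
To make the disclosure idea concrete, a fund's crypto-asset exposure is simply the crypto share of total portfolio value. A minimal sketch, assuming hypothetical holdings and names; nothing here reflects OSFI's actual reporting format:

```python
# Hypothetical sketch of the kind of figure such a disclosure might contain:
# a pension fund's crypto-asset exposure as a share of total assets.
def crypto_exposure_pct(holdings: dict[str, float], crypto_assets: set[str]) -> float:
    """Percentage of total portfolio value held in crypto-assets."""
    total = sum(holdings.values())
    crypto = sum(v for k, v in holdings.items() if k in crypto_assets)
    return 100 * crypto / total if total else 0.0

portfolio = {"bonds": 60_000_000, "equities": 35_000_000, "BTC": 5_000_000}
print(round(crypto_exposure_pct(portfolio, {"BTC", "ETH"}), 1))  # 5.0
```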

The federal government launched targeted consultations on crypto-assets as part of the review on the digitalization of money announced in Budget 2022. Moving forward, the government will continue to work closely with partners to advance the review, will bring forward proposals to protect Canadians from the risks of crypto-asset markets, and will provide further details in the 2023 fall economic and fiscal update.

Leadership in the world?

Maybe they should have titled Chapter 5 “Shoring up Canada’s position in the world.” Most of the initiatives in this section of the budget seem more oriented to catching up with the rest of the world than to forging new paths.

NORAD, NATO, and sovereignty in the North

It’s about time we contributed in a serious way to the “defence of North America and the modernization of NORAD [North American Aerospace Defense Command].” The same goes for NATO (North Atlantic Treaty Organization), where we have failed to contribute our fair share for decades. For the latest NATO/Canada dustup, you may want to read Murray Brewster’s April 7, 2023 article (NATO is getting ready to twist Canada’s arm on defence spending) for CBC news online,

In late March [2023?], NATO published an annual report that shows Canada’s defence spending amounted to just 1.29 per cent of GDP in fiscal 2022-2023.

It’s not much of a stretch to say Canada has no plan to meet that 2 per cent target. There wasn’t one when the previous Conservative government signed on to the notion at the 2014 NATO leaders summit (when the goal was for allies to reach 2 per cent by 2024).

The Liberal government’s 2017 defence policy tiptoed around the subject. Whenever they’ve been asked about it since, Trudeau and his ministers have bobbed and weaved and talked about what Canada delivers in terms of capability.

In an interview with the London bureau of CBC News this week, Foreign Affairs Minister Mélanie Joly was asked point-blank about Stoltenberg’s [NATO Sec. Gen. Jens Stoltenberg] assertion that allies would soon consider the two per cent benchmark the floor, not the ceiling.

She responded that Canada recognized the world changed with the war in Ukraine and that tensions in the Indo-Pacific [emphasis mine] mean “we need to make sure that we step up our game and that’s what we’ll do.”

“Step up our game” may be a relative term, because Joly went on to say that the government is engaged “in a very important defence policy review, which is required before announcing any further investments.”

That defence policy review was announced in the 2022 federal budget. A year later, shortly after presenting its latest fiscal plan, the government announced there would be public consultations on how best to defend Canada in a more uncertain world.

The best-case scenario, according to several experts, is the defence policy being delivered next year and the necessary investments being made at some indeterminate point in the future.

In fairness, the Liberals have committed to spending $19 billion on new fighter jets, starting in 2026. They have agreed to put $4.9 billion toward modernizing continental defence through NORAD.

The current Liberal government has followed a longstanding precedent of many Canadian governments (Liberal or Conservative) of inadequately funding national defence.
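
The percentages quoted above translate into rough dollar figures. A back-of-envelope calculation, assuming Canada's nominal GDP was roughly CA$2.8 trillion in 2022 (the GDP figure is my assumption for illustration; the 1.29 per cent comes from the NATO report quoted earlier):

```python
# Rough, back-of-envelope arithmetic on the NATO 2% guideline.
gdp = 2.8e12            # CA$, assumed nominal GDP for 2022
actual_share = 0.0129   # Canada's reported defence spending share, 2022-23
target_share = 0.02     # NATO guideline

actual = gdp * actual_share
target = gdp * target_share
gap = target - actual
print(f"actual: ${actual/1e9:.0f}B, target: ${target/1e9:.0f}B, gap: ${gap/1e9:.0f}B")
```

On those assumptions, reaching the 2 per cent guideline would mean finding roughly CA$20 billion a year in additional defence spending.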

Getting back to NATO commitments, yes, let’s pull our weight. Further, I hope they follow through with protecting sovereignty in the North, as there’s more than one jurisdiction interested in claiming to be an ‘Arctic’ country of some kind, including Scotland (on behalf of the UK?). My November 20, 2020 posting features the Scottish claim. Interestingly, the closest Scottish land (the Shetland Islands, which have a separatist movement of their own) is 400 miles south of the Arctic.

China too claims to be close to the Arctic despite being some 900 miles south at its closest point. Roslyn Layton’s August 31, 2022 article (Cold Front: The Arctic Emerges As A New Flashpoint Of Geopolitical Challenge) for Forbes magazine provides good insight into the situation, including this tidbit: ‘on August 26, 2022 the US appointed an Ambassador-At-Large for the Arctic’.

For a Canadian take on the challenges vis-à-vis China’s interest in the Arctic, there’s a January 12, 2021 posting on the University of Alberta’s China Institute website, “China in the Canadian Arctic: Context, Issues, and Considerations for 2021 and Beyond” by Evan Oddleifson, Tom Alton, and Scott N. Romaniuk. Note: A link has been removed,

Shandong Gold Mining Co., a Chinese state-owned enterprise, brokered a deal in May 2020 to acquire TMAC Resources, a Canadian gold mining company with operations in the Hope Bay region of Nunavut [emphasis mine]. At the time, observers voiced concern about the merits of the deal, speculating that Shandong Gold may be motivated by political/strategic interests rather than solely firm-level economic considerations.

I don’t see it mentioned in their posting, but the Shandong acquisition would be covered under the ‘Agreement Between the Government of Canada and the Government of the People’s Republic of China for the Promotion and Reciprocal Protection of Investments‘ (also known as the Foreign Investment Protection Agreement (FIPA) or the Foreign Investment Protection and Promotion Agreement (FIPPA)). It was not a popular agreement in Canada when it was signed. From a September 12, 2014 opinion piece by Scott Harris for The Council of Canadians (Note: Links have been removed),

In the world of official government announcements, a two-paragraph media release sent out in the late afternoon on the Friday before Parliament resumes sitting is the best way for a government to admit, “We know this is really, really unpopular, but we’re doing it anyway.”

That’s the way the Harper government, by way of a release quoting Trade Minister Ed Fast, announced that it had decided to ignore widespread public opposition, parliamentary opposition from the NDP, Greens and even lukewarm Liberal criticism, an ongoing First Nations legal challenge, and even division at its own cabinet table and grassroots membership [emphases mine] and proceed with the ratification of the Canada-China Foreign Investment Promotion and Protection Agreement (FIPA).

With China’s ratification of the deal long since signed, sealed, and delivered (and, really, when you can convince another government to sign a deal this lopsided in your favour, wouldn’t you ratify as quickly as possible too?) Canada’s ratification of the deal means it will enter into force on October 1 [2014]. …

The CBC published this sharply worded (unusual for them) September 19, 2014 article by Patrick Brown for CBC news online,

The secrecy shrouding the much-delayed Foreign Investment Promotion and Protection Agreement (FIPA) with China makes it hard for experts, let alone average Canadians, to figure out what benefits this country will see from the deal.

While reporting on Prime Minister Stephen Harper for the first time, during his visit to China in early 2012, I wrote a column with the headline Great Glorious and Always Correct, which began: “If Stephen Harper ever gets tired of being Canada’s Prime Minister, he might like to consider a second career in China – he’d fit right in.” [Note: Our current Prime Minister, Justin Trudeau, once indicated that he found some aspects of the Chinese Communist Government’s system appealing; see the November 8, 2013 article “Trudeau under fire for expressing admiration for China’s ‘basic dictatorship’”]

The government revealed the text of the FIPA agreement and signed [emphasis mine] it in Vladivostok not long after the Beijing trip, and gave a briefing to the parliamentary trade committee for one single hour in October 2012 [emphasis mine]. Then the cone of silence descended.

Then late Friday afternoon, the witching hour favoured by spinmeisters with stealthy announcements they hope everyone will forget by Monday, a press release revealed that the agreement with China, mysteriously unratified [emphasis mine] for almost two years, has been approved by cabinet. It goes into effect Oct. 1 [2014].

Critics of the agreement, such as Gus Van Harten, an Osgoode Hall law professor who has written two books on investment treaties, raise several key objections:

  • Canadian governments are locked in for a generation. If Canada finds the deal unsatisfactory, it cannot be cancelled completely for 31 years [emphasis mine].
  • China benefits much more than Canada, because of a clause allowing existing restrictions in each country to stay in place. Chinese companies get to play on a relatively level field in Canada, while maintaining wildly arbitrary practices and rules for Canadian companies in China.
  • Chinese companies will be able to seek redress against any laws passed by any level of government in Canada which threaten their profits. [emphasis mine] Australia has decided not to enter FIPA agreements specifically because they allow powerful corporations to challenge legislation on social, environmental and economic issues. Chinese companies investing heavily in Canadian energy will be able to seek billions in compensation if their projects are hampered by provincial laws on issues such as environmental concerns or First Nations rights, for example [emphasis mine].
  • Cases will be decided by a panel of professional arbitrators, and may be kept secret at the discretion of the sued party [emphasis mine]. This extraordinary provision reflects an aversion to transparency and public debate common to the Harper cabinet and the Chinese politburo.
  • Differences between FIPA and the North American Free Trade Agreement may offer intriguing loopholes for American lawyers [emphasis mine] to argue for equal treatment under the principle of Most Favoured Nation.

It is a reciprocal agreement but it seems China might have more advantages than Canada does. So, we’ll be able to establish sovereignty in the North, eh? That should be interesting.

There doesn’t seem to be any mention of involving the indigenous peoples in the discussion around sovereignty and the North. Perhaps it’s in another chapter.

An updated defence policy, the Indo-Pacific region, and cybersecurity

Hopefully they will do a little more than simply update Canada’s defence policy as Paul T. Mitchell’s (Professor of Defence Studies, Canadian Forces College) September 23, 2021 essay (Canada’s exclusion from the AUKUS security pact reveals a failing national defence policy) for The Conversation suggests there are some serious shortcomings dating back at least 100 years, Note: Links have been removed,

The recently announced deal on nuclear submarines between Australia, the United Kingdom and the United States, known as AUKUS, likely seems irrelevant to many Canadians.

But AUKUS is about far more than submarines. And Canada’s exclusion from the pact represents growing suspicions about the Canadian commitment to the rules-based international order.

The problem stems from Canada’s tacit “grand strategy” [emphasis mine] underlying our defence policy.

A country’s grand strategy typically outlines geopolitical realities alongside a plan to achieve its diplomatic goals.

In 1924, Liberal politician Raoul Dandurand famously said “Canada is a fire-proof house, far removed from flammable materials,” putting into words Canada’s approach to defence since 1867 [emphasis mine]. Simply put, three oceans and a superpower sufficiently shield us from having to think about how to achieve national security.

Canadian defence policy has never varied from three priorities — defend Canada, defend North America and contribute to international peace and security — that have appeared in every Defence Department white paper since the 1950s, regardless of the governing party. This attitude was evident in the recent election campaign, when discussions about defence were largely absent, despite growing threats from abroad and the turmoil within our own military.

Since the heydays of defence spending of the 1950s, the Canadian Armed Forces (CAF) have been gradually shedding fundamental capabilities — including long-range artillery, tanks, fighters that are now obsolete, submarine forces, destroyers and maritime logistics.

And while the CAF specifically faces new challenges in terms of diversity, its traditional approach to leadership has alienated thousands within the ranks, causing a rush to the exits, especially among the most experienced of personnel. The lack of support for modern equipment has also contributed to this problem.

We can continue to drag our heels, but eventually the bill will come due when our government commits our forces to a mission they can no longer fulfil because we thought we didn’t need to concern ourselves with the health of the military.

Just prior to the 2023 budget announcement, AUKUS (Australia, United Kingdom, and United States) moved forward on a security deal in the Indo-Pacific region. As far as I’m aware, the UK no longer has any exposure (colonies/territories) in the Indo-Pacific region, while Canada, like the US, borders the region along the Pacific Ocean. Lee Berthiaume’s March 13, 2023 article for The Canadian Press can be found on both the CBC website and the CTV website,

Experts are warning that, as the U.S., Britain and Australia move ahead on an expanded military pact, Canada’s omission from that group suggests a larger problem with how this country is perceived by its friends. [emphasis mine]

U.S. President Joe Biden, British Prime Minister Rishi Sunak and Australian leader Anthony Albanese were at a naval base in San Diego on Monday [March 13, 2023] to confirm the next steps of the trilateral agreement, known as “AUKUS” after the three countries involved.

Those next steps include formalizing American and British plans to help Australia develop a fleet of nuclear-powered submarines in response to growing concerns about China’s actions in the Indo-Pacific region.

The Trudeau government has downplayed [emphasis mine] the importance of AUKUS to Canada, saying Ottawa is not in the market for nuclear-powered submarines — even as others have lamented its absence from the pact.

One senior Canadian Armed Forces commander, Vice-Admiral Bob Auchterlonie, told The Canadian Press he worries about Canada not having access to the same cutting-edge technology [emphasis mine] as three of its closest allies.

Canada’s exclusion is seen by some as further evidence that its allies do not believe Ottawa is serious about pushing back against Chinese ambitions, despite the release of a new Indo-Pacific strategy late last year.

Canada’s strategy seeks to strike a balance between confronting and co-operating with China. It says Canada will challenge China “in areas of profound disagreement” while working together on areas of shared interest, such as climate change.

The U.S. is taking a very different approach. In a defence strategy released earlier this month, U.S. Defense Secretary Lloyd Austin described “an increasingly aggressive China” as a “generational challenge” and the American military’s top priority.

Numbers remain uncertain, but Australia reportedly is poised to spend billions of dollars as part of the deal to purchase new submarines. Britain and the U.S. are also expected to put money into the agreement for technology development [emphasis mine], training and other areas.

Defence analyst David Perry of the Canadian Global Affairs Institute noted the U.S., Britain and Australia are all spending two per cent or more of their national gross domestic product on defence, compared to less than 1.3 per cent in Canada. [emphases mine]

They also have solid plans to build new submarines, while Ottawa has yet to even commit to replacing the Royal Canadian Navy’s four trouble-plagued Victoria-class vessels, let alone start work on plans to build or buy a new fleet.

Canadian military commanders — including chief of the defence staff Gen. Wayne Eyre — have repeatedly underscored the need for submarines.

Swiss cheese cybersecurity with a special emphasis on our research

While our defence spending is considered problematic, I suspect our cybersecurity is also in question, as I’ve noted previously in a September 17, 2021 posting (Council of Canadian Academies (CCA): science policy internship and a new panel on Public Safety in the Digital Age). Scroll down about 45% of the way to the ‘What is public safety?’ subhead and look for the Cameron Ortis story (hints: (1) he was the top-ranking civilian RCMP staff member, reporting directly to the commissioner; (2) he was tasked with overseeing cybersecurity issues; and (3) he was fluent in Mandarin). You’ll also find information about the first AUKUS announcement and a link to a documentary about the Cameron Ortis affair further down, under the ‘Almost breaking news’ subhead.

The Cameron Ortis story is shocking enough, but when you add in both the issues around the Winnipeg level 4 virology lab (see the Karen Pauls/Kimberly Ivany July 8, 2021 article for CBC online news for a summary and an analysis) and the National Research Council of Canada’s serious security breach (see the Tom Spears/Jordan Press July 29, 2014 article “Suspected Chinese cyber attack forces NRC security overhaul” for the Ottawa Citizen), it seems unbelievable, and yet it is.

This February 15, 2022 article for CBC news online by Catharine Tunney announces a report on a series of security breaches,

Gaps in Ottawa’s cyber defences could leave the government agencies holding vast amounts of data on Canadians and businesses susceptible to state-sponsored hackers from countries like China and Russia, says a new report from Parliament’s security and intelligence committee.

Their report, tabled late Monday in the House of Commons, shows previous mistakes have allowed state-sponsored actors to infiltrate and steal government information over the past decade.

“Cyber threats to government systems and networks are a significant risk to national security and the continuity of government operations,” says the report from the National Security and Intelligence Committee of Parliamentarians.

Intellectual property, advanced research already stolen

A year-long attack by China while Stephen Harper was prime minister served as a “wake-up call” for the federal government, said the report.

Between August 2010 and August 2011, China targeted 31 departments and eight suffered “severe compromises,” the report said. [emphases mine]

“Information losses were considerable, including email communications of senior government officials, mass exfiltration of information from several departments, including briefing notes, strategy documents and secret information, and password and file system data,” said the report.

In 2014 [emphasis mine], a Chinese state-sponsored actor was able to compromise the National Research Council.

“The theft included intellectual property and advanced research and proprietary business information from NRC’s partners. China also leveraged its access to the NRC network to infiltrate a number of government organizations,” said the report. [emphases mine]

You can find the National Security and Intelligence Committee of Parliamentarians Special Report on the Government of Canada’s Framework and Activities to Defend its Systems and Networks from Cyber Attack here. By the way, we have had a more recent ‘cyber incident’ at the National Research Council as reported in a March 21, 2022 article for CBC online news by Catharine Tunney.

Meanwhile, the 2022 budget (released April 7, 2022) made these announcements (from my April 19, 2022 posting; scroll down about 50% of the way to the ‘Securing Canada’s Research from Foreign Threats’ subhead),

To implement these guidelines fully, Budget 2022 proposes to provide $159.6 million, starting in 2022-23, and $33.4 million ongoing, as follows:

  • $125 million over five years, starting in 2022-23, and $25 million ongoing, for the Research Support Fund [emphasis mine] to build capacity within post- secondary institutions to identify, assess, and mitigate potential risks to research security; and
  • $34.6 million over five years, starting in 2022-23, and $8.4 million ongoing, to enhance Canada’s ability to protect our research, and to establish a Research Security Centre [emphasis mine] that will provide advice and guidance directly to research institutions.

The About page for the Research Support Fund on the SSHRC website (perhaps there’s another fund with the same name somewhere else?) doesn’t mention identifying, assessing, and/or mitigating risks to research security and I cannot find any mention of a Research Security Centre on the government of Canada website or from a Google search. The government doesn’t seem to be rushing to address these issues.

Cybersecurity for the rest of us

As a reminder, there’s this from Chapter 5,

Canada must not be a financial haven for oligarchs or the kleptocratic apparatchiks of authoritarian, corrupt, or theocratic regimes—such as those of Russia, China, Iran, and Haiti. We will not allow our world-renowned financial system to be used to clandestinely and illegally move money to fund foreign interference inside Canada. [emphases mine]

Money laundering and terrorist financing can threaten the integrity of the Canadian economy, and put Canadians at risk by supporting terrorist activity, drug and human trafficking, and other criminal activities. Taking stronger action to tackle these threats is essential to protecting Canada’s economic security.

In June 2022, the Government of British Columbia released the final report of the Commission of Inquiry into Money Laundering in British Columbia [emphasis mine], also known as the Cullen Commission. This report highlighted major gaps in the current AML/ATF Regime, as well as areas for deepened federal-provincial collaboration [emphasis mine]. Between measures previously introduced and those proposed in Budget 2023, as well as through consultations that the government has committed to launching, the federal government will have responded to all of the recommendations within its jurisdiction in the Cullen Commission report.

A March 29, 2023 article by Bob Mackin for The Breaker News comments on some of the government initiatives announced in the wake of the Cullen report, BC’s public inquiry into money laundering,

In the wake of B.C. case collapse, Liberal government promises new regulation of money services businesses

The federal government has pledged to fill a gap in Canada’s anti-money laundering laws, less than a month after a special prosecutor’s report blamed it for the failure to bring a Richmond man to justice.

The Jin case had been one of the biggest organized crime investigations in B.C. history and was featured throughout the B.C. NDP government’s $19 million Cullen Commission public inquiry into money laundering. 

I want to emphasize that it’s not just the Chinese Communist Government that abuses our systems; there is also other state-backed interference targeting various diaspora communities and other Canadian communities.

According to a February 14, 2023 article (The Canadian businessmen accused of helping Iran’s regime) for CBC news online by Ashley Burke and Nahayat Tizhoosh, it seems Iran has used Canada as a ‘safe haven’ for evading US sanctions and to conduct “… transactions on the Iranian regime’s behalf worth more than $750 million US.”

A February 15, 2023 followup article for CBC news online by Ashley Burke and Nahayat Tizhoosh reiterates the businessmen’s claims that the allegations are baseless and not proven in court. The article includes this,

Prime Minister Justin Trudeau says the government is working “very closely with American partners” and the Iranian diaspora in Canada to target people with ties to Iran’s regime.

Iranian-Canadians accuse the government of doing too little to ensure Canada isn’t a safe haven for the Iranian regime’s business transactions. [emphasis mine] Trudeau defended the government’s actions on Wednesday [February 15, 2023], citing the sanctions it has imposed since the fall on Iranian individuals and entities.

In October 2022 [emphasis mine], the government announced it would be spending $76 million to help enforce sanctions on Iran. Some of the money was earmarked for an additional 30 staff members at the RCMP. But the national police force confirmed it’s still working with the Department of Finance on receiving the funds. 

“The process is ongoing and its implementation is expected to start during next fiscal year [2024?],” wrote Robin Percival, a spokesperson for the RCMP.

If memory serves, the Iranian (Persian) diaspora communities, like the Chinese communities in Canada, have also warned of being targeted by overseas agents.

Coincidentally or not, the Council of Canadian Academies; Expert Panel on Public Safety in the Digital Age released its report, Vulnerable Connections on March 30, 2023. The expert panel was asked to answer these questions, from p. 17 (xvii) of the report,

How have activities relating to serious criminal activity (including organized crime and child sexual exploitation) and online harms (including disinformation, violent extremist and terrorist use of the internet) in Canada changed to exploit the evolving information and communications technologies (ICTs) landscape?

The headline for the CCA’s Vulnerable Connections March 30, 2023 news release highlights what the 2023 federal budget is tacitly confirming, “Advances in digital technology [are] outpacing efforts to address online harms: expert panel report” [emphasis mine].

As you can see from the budget, state-backed efforts extend from the financial sector to attempts to affect elections, intimidate various communities, and more. Let’s not forget the ‘average’ consumer; criminal enterprises are also active. So, the federal government is playing ‘catch up’ with a complex set of issues.

My final comments

As I noted, no splashy announcements but there are some interesting developments, especially with regard to military spending and a focus on cyber issues, and with the announcement of a Canada Water Agency. It all comes back to science and technology.

Always a little disappointment

There are a number of commentaries, but I'm most interested in those addressing the 'science and research' aspect of the budget, of which I found two.

First, Universities Canada issued a March 28, 2023 media release that states a position in its headline “Budget 2023 a missed opportunity to keep Canada competitive in science and research,”

“Canada’s universities are disappointed in Budget 2023’s lack of any significant support for Canadian research,” says Paul Davidson, President of Universities Canada. “With no new funding that matches the ambition of our peers, today’s budget is without the investments across Canada’s research ecosystem which are urgently needed to keep Canada competitive and ensure inclusive and sustainable growth.” 

Second, a March 28, 2023 Federation for Humanities and Social Sciences briefing note holds forth in a similar fashion,

Without a plan to attract and support our next generation of researchers, Budget 2023 is a missed opportunity for Canada. Despite a call for action from students, researchers, and our post-secondary sector, the Budget lacks urgently-needed investments in Canada’s graduate students and early-career researchers.

Over the past two decades, Canada’s graduate students and postdoctoral fellows have faced rising living costs and stagnating funding levels. Dedicated support is needed to reverse the declining value of this funding, and to ensure it remains competitive and keeps pace with rising inflation. Investment in Canada’s research talent and skilled workforce will bolster our capacity to solve our most important challenges here in Canada and internationally.

Disappointment is inevitable with any budget. The writers have a point, but where will the money come from? We're having to spend money on our military after ignoring it for decades. As well, we have urgent security issues, not to mention the oncoming climate crisis, which faces everyone, research scientists and graduate students included.

Will our money be well spent?

I don’t think I’ve ever addressed that question in any of my budget commentaries over the years.

The emphasis on the climate is encouraging, as is the potential Canada Water Agency, and finally addressing some of our obligations to our military is a relief. (I have a friend who's been in the regular Army and in the reserves, and he's been muttering for years about missing and outdated equipment.) Assuming the government follows through, it's about time.

As for obligations to NATO and NORAD, our partners have rightly been applying pressure. It's good to see money going to NORAD, and perhaps the promised Defence Policy update will include policy that supports greater contributions to NATO and, since we've been left out of AUKUS, perhaps some fleshing out of the federal government's newfound interest in the Indo-Pacific Region. (This represents a seismic shift from the federal government's almost exclusively Eurocentric focus when looking beyond the border with our southern neighbour, the US.)

The greatest focus for election interference and influence on immigrant diaspora communities has been on the Chinese Communist Government (CCG) but there are others. The Trudeau government’s response has not been especially reassuring even with these budget promises. The best summary of the current situation regarding CCG interference that I’ve seen is an April 13, 2023 article by Frederick Kelter for Al Jazeera.

We definitely could do with a boost in our cybersecurity. The latest incident may have involved Russian hackers according to an April 13, 2023 article for CBC news online by Catharine Tunney, Note: A link has been removed,

One of Canada’s intelligence agencies says a cyber threat actor “had the potential to cause physical damage” to a piece of critical infrastructure recently, a stark warning from the Communications Security Establishment [CSE] amid a string of hits linked to pro-Russian hackers.

“I can report there was no physical damage to any Canadian energy infrastructure. But make no mistake — the threat is real,” said Sami Khoury, head of the CSE’s Canadian Centre for Cyber Security, during a briefing with reporters Thursday.

Earlier this week, leaked U.S. intelligence documents suggested Russian-backed hackers successfully gained access to Canada’s natural gas distribution network.

Defence Minister Anita Anand said Canada has seen a “notable rise in cyber threat activity by Russian-aligned” [sic] and issued a cyber flash on April 12 to let critical Canadian sectors know about an ongoing campaign.

Earlier Thursday [April 13, 2023], a pro-Russian hacking group claimed responsibility for a cyberattack on Hydro Quebec, the province’s state-owned electricity provider.

The same group took credit for knocking the Prime Minister Office website offline [emphasis mine] earlier this week in a distributed denial-of-service attack as Canada played host to Ukrainian Prime Minister Denys Shmyhal.

Denial-of-service attacks flood the target website with traffic, triggering a crash. Earlier this week, CSE said these types of attacks have very little impact on the affected systems.

Khoury said that state-sponsored cyber threat actors like to target critical infrastructure “to collect information through espionage, pre-position in case of future hostilities, and as a form of power projection and intimidation.”

My answer (will our money be well spent?) is yes, assuming at least some of it goes to the projects mentioned.

Shifting winds

The world seems an unfriendlier place these days, which is what I see reflected in this budget. Multiple references to Russia's invasion of Ukraine, and a major focus on Canada's financial system being used to support various state-sponsored hostile actions, suggest the Canadian government is beginning to recognize a need to respond appropriately and, I hope, in a more time-sensitive fashion than is usual for a government known for 'dragging its feet'.

There’s a quite interesting April 19, 2023 article by Alexander Panetta for CBC news online, which highlights some of the tensions, Note: A link has been removed,

A massive leak of U.S. national security documents has now spilled over into Canada.

The Washington Post says it has seen a Pentagon document criticizing Canada’s military readiness among materials allegedly posted online by a Massachusetts Air National Guardsman arrested last week.

The purported document, which CBC has not seen, makes two broad claims, according to the Post.

First, it says that Prime Minister Justin Trudeau has told NATO officials privately that Canada will never reach the military spending target agreed to by members of the alliance.

Second, the document claims wide-ranging deficiencies in Canada’s military capabilities are a source of tension with allies and defence partners.

That US security leak is a good reminder that everyone has problems with security and (not mentioned in Panetta’s article) that the US spies on its ‘friends’. BTW, Canada does the same thing. Everyone spies on everyone.

The past and the future and the now

Strikingly, the 2023 budget went back in time to revisit the ‘staples theory’ when mining, agriculture, and forestry were Canadian economic mainstays.

It’s the first time I’ve seen a reference in a budget to a perennial R&D (research and development) problem: Canadian companies do not invest in research. We score low on investment in industrial research when compared to other countries. In fact, the executive summary of a 2018 report from the Council of Canadian Academies, “Competing in a Global Innovation Economy: The Current State of R&D in Canada; The Expert Panel on the State of Science and Technology and Industrial Research and Development in Canada,” states this,

… The number of researchers per capita in Canada is on a par with that of other developed countries, and increased modestly between 2004 and 2012. Canada’s output of PhD graduates has also grown in recent years, though it remains low in per capita terms relative to many OECD countries.

In contrast, the number of R&D personnel employed in Canadian businesses dropped by 20% between 2008 and 2013. This is likely related to sustained and ongoing decline in business R&D investment across the country. R&D as a share of gross domestic product (GDP) has steadily declined in Canada since 2001, and now stands well below the OECD average (Figure 1). As one of few OECD countries with virtually no growth in total national R&D expenditures between 2006 and 2015, Canada would now need to more than double expenditures to achieve an R&D intensity comparable to that of leading countries. [p. xviii on paper or p. 20 PDF]

Our problems in this area will not be solved quickly. Nor will our defence issues; it's about time we started paying our way with NORAD, NATO, and elsewhere. By the way, I wasn't expecting to find a "new North American regional office in Halifax for NATO's Defence Innovation Accelerator for the North Atlantic" (innovation, in this case, being code for technology). This harkens back to the military being a resource for scientific and technological advances, which affect us all.

More generally, it’s good to see some emphasis on the climate and on water and food security.

There was one odd note regarding the 2023 budget according to an April 18, 2023 CBC news online article by Janyce McGregor (confession: I missed it),

King Charles, Canada’s head of state, will no longer include the phrase “Defender of the Faith” in his official royal title in Canada.

The new language was revealed late Monday [April 17, 2023] when the Liberal government published its notice of the Ways and Means Motion for this spring’s budget implementation bill — the legislation that actually brings into force the measures Finance Minister Chrystia Freeland announced on March 28.

The new title will read: “Charles the Third, by the Grace of God King of Canada and His other Realms and Territories, Head of the Commonwealth.”

In addition to dropping the religious role, the revised title also deletes a reference to Charles as King of the United Kingdom — an update consistent with Canada’s status as an independent country among the 14 other countries that share the same monarch.

Getting back to the budget, I’m mildly optimistic.