
Age of AI and Big Data – Impact on Justice, Human Rights and Privacy Zoom event on September 28, 2022 at 12 – 1:30 pm EDT

In a September 15, 2022 announcement (received via email), the Canadian Science Policy Centre (CSPC) described an upcoming event (Age of AI and Big Data – Impact on Justice, Human Rights and Privacy) centered on some of the latest government activity on artificial intelligence and privacy (Bill C-27),

In an increasingly connected world, we share a large amount of our data in our daily lives without our knowledge while browsing online, traveling, shopping, etc. More and more companies are collecting our data and using it to create algorithms or AI. The use of our data against us is becoming more and more common. The algorithms used may often be discriminatory against racial minorities and marginalized people.

As technology moves at a high pace, we have started to incorporate many of these technologies into our daily lives without understanding their consequences. These technologies have enormous impacts on our very own identity and, collectively, on civil society and democracy. 

Recently, the Canadian Government introduced the Artificial Intelligence and Data Act (AIDA) and Bill C-27 [which includes three acts in total] in parliament regulating the use of AI in our society. In this panel, we will discuss how our AI and Big data is affecting us and its impact on society, and how the new regulations affect us. 

Date: Sep 28 Time: 12:00 pm – 1:30 pm EDT Event Category: Virtual Session

Register Here

For some reason, there was no information about the moderator and panelists, other than their names, titles, and affiliations. Here’s a bit more:

Moderator: Yuan Stevens (from her eponymous website’s About page), Note: Links have been removed,

Yuan (“You-anne”) Stevens (she/they) is a legal and policy expert focused on sociotechnical security and human rights.

She works towards a world where powerful actors—and the systems they build—are held accountable to the public, especially when it comes to marginalized communities. 

She brings years of international experience to her role at the Leadership Lab at Toronto Metropolitan University [formerly Ryerson University], having examined the impacts of technology on vulnerable populations in Canada, the US and Germany. 

Committed to publicly accessible legal and technical knowledge, Yuan has written for popular media outlets such as the Toronto Star and Ottawa Citizen and has been quoted in news stories by the New York Times, the CBC and the Globe & Mail.

Yuan is a research fellow at the Centre for Law, Technology and Society at the University of Ottawa and a research affiliate at Data & Society Research Institute. She previously worked at Harvard University’s Berkman Klein Center for Internet & Society during her studies in law at McGill University.

She has been conducting research on artificial intelligence since 2017 and is currently exploring sociotechnical security as an LL.M candidate at University of Ottawa’s Faculty of Law working under Florian Martin-Bariteau.

Panelist: Brenda McPhail (from her Centre for International Governance Innovation profile page),

Brenda McPhail is the director of the Canadian Civil Liberties Association’s Privacy, Surveillance and Technology Project. Her recent work includes guiding the Canadian Civil Liberties Association’s interventions in key court cases that raise privacy issues, most recently at the Supreme Court of Canada in R v. Marakah and R v. Jones, which focused on privacy rights in sent text messages; research into surveillance of dissent, government information sharing, digital surveillance capabilities and privacy in relation to emergent technologies; and developing resources and presentations to drive public awareness about the importance of privacy as a social good.

Panelist: Nidhi Hegde (from her University of Alberta profile page),

My research has spanned many areas such as resource allocation in networking, smart grids, social information networks, machine learning. Broadly, my interest lies in gaining a fundamental understanding of a given system and the design of robust algorithms.

More recently my research focus has been in privacy in machine learning. I’m interested in understanding how robust machine learning methods are to perturbation, and privacy and fairness constraints, with the goal of designing practical algorithms that achieve privacy and fairness.

Bio

Before joining the University of Alberta, I spent many years in industry research labs. Most recently, I was a Research team lead at Borealis AI (a research institute at Royal Bank of Canada), where my team worked on privacy-preserving methods for machine learning models and other applied problems for RBC. Prior to that, I spent many years in research labs in Europe working on a variety of interesting and impactful problems. I was a researcher at Bell Labs, Nokia, in France from January 2015 to March 2018, where I led a new team focussed on Maths and Algorithms for Machine Learning in Networks and Systems, in the Maths and Algorithms group of Bell Labs. I also spent a few years at the Technicolor Paris Research Lab working on social network analysis, smart grids, and privacy in recommendations.

Panelist: Benjamin Faveri (from his LinkedIn page),

About

Benjamin Faveri is a Research and Policy Analyst at the Responsible AI Institute (RAII) [headquartered in Austin, Texas]. Currently, he is developing their Responsible AI Certification Program and leading it through Canada’s national accreditation process. Over the last several years, he has worked on numerous certification program-related research projects such as fishery economics and certification programs, police body-worn camera policy certification, and emerging AI certifications and assurance systems. Before his work at RAII, Benjamin completed a Master of Public Policy and Administration at Carleton University, where he was a Canada Graduate Scholar, Ontario Graduate Scholar, Social Innovation Fellow, and Visiting Scholar at UC Davis School of Law. He holds undergraduate degrees in criminology and psychology, finishing both with first class standing. Outside of work, Benjamin reads about how and why certification and private governance have been applied across various industries.

Panelist: Ori Freiman (from his eponymous website’s About page)

I research at the forefront of technological innovation. This website documents some of my academic activities.

My formal background is in Analytic Philosophy, Library and Information Science, and Science & Technology Studies. Until September 22′ [September 2022], I was a Post-Doctoral Fellow at the Ethics of AI Lab, at the University of Toronto’s Centre for Ethics. Before joining the Centre, I submitted my dissertation, about trust in technology, to The Graduate Program in Science, Technology and Society at Bar-Ilan University.

I have also found a number of overviews and bits of commentary about the Canadian federal government’s proposed Bill C-27, which I think of as an omnibus bill as it includes three proposed Acts.

The lawyers are excited, but I’m starting with the Responsible AI Institute’s (RAII) response, as one of the panelists (Benjamin Faveri) works for them and it offers a view from a closely neighbouring country. From a June 22, 2022 RAII news release, Note: Links have been removed,

Business Implications of Canada’s Draft AI and Data Act

On June 16 [2022], the Government of Canada introduced the Artificial Intelligence and Data Act (AIDA), as part of the broader Digital Charter Implementation Act 2022 (Bill C-27). Shortly thereafter, it also launched the second phase of the Pan-Canadian Artificial Intelligence Strategy.

Both RAII’s Certification Program, which is currently under review by the Standards Council of Canada, and the proposed AIDA legislation adopt the same approach of gauging an AI system’s risk level in context; identifying, assessing, and mitigating risks both pre-deployment and on an ongoing basis; and pursuing objectives such as safety, fairness, consumer protection, and plain-language notification and explanation.

Businesses should monitor the progress of Bill C-27 and align their AI governance processes, policies, and controls to its requirements. Businesses participating in RAII’s Certification Program will already be aware of requirements, such as internal Algorithmic Impact Assessments to gauge risk level and Responsible AI Management Plans for each AI system, which include system documentation, mitigation measures, monitoring requirements, and internal approvals.

The AIDA draft is focused on the impact of any “high-impact system”. Companies would need to assess whether their AI systems are high-impact; identify, assess, and mitigate potential harms and biases flowing from high-impact systems; and “publish on a publicly available website a plain-language description of the system” if making a high-impact system available for use. The government elaborated in a press briefing that it will describe in future regulations the classes of AI systems that may have high impact.

The AIDA draft also outlines clear criminal penalties for entities which, in their AI efforts, possess or use unlawfully obtained personal information or knowingly make available for use an AI system that causes serious harm or defrauds the public and causes substantial economic loss to an individual.

If enacted, AIDA would establish the Office of the AI and Data Commissioner, to support Canada’s Minister of Innovation, Science and Economic Development, with powers to monitor company compliance with the AIDA, to order independent audits of companies’ AI activities, and to register compliance orders with courts. The Commissioner would also help the Minister ensure that standards for AI systems are aligned with international standards.

Apart from being aligned with the approach and requirements of Canada’s proposed AIDA legislation, RAII is also playing a key role in the Standards Council of Canada’s AI accreditation pilot. The second phase of the Pan-Canadian Artificial Intelligence Strategy includes funding for the Standards Council of Canada to “advance the development and adoption of standards and a conformity assessment program related to AI.”

The AIDA’s introduction shows that while Canada is serious about governing AI systems, its approach to AI governance is flexible and designed to evolve as the landscape changes.

Charles Mandel’s June 16, 2022 article for Betakit (Canadian Startup News and Tech Innovation) provides an overview of the government’s overall approach to data privacy, AI, and more,

The federal Liberal government has taken another crack at legislating privacy with the introduction of Bill C-27 in the House of Commons.

Among the bill’s highlights are new protections for minors as well as Canada’s first law regulating the development and deployment of high-impact AI systems.

“It [Bill C-27] will address broader concerns that have been expressed since the tabling of a previous proposal, which did not become law,” a government official told a media technical briefing on the proposed legislation.

François-Philippe Champagne, the Minister of Innovation, Science and Industry, together with David Lametti, the Minister of Justice and Attorney General of Canada, introduced the Digital Charter Implementation Act, 2022. The ministers said Bill C-27 will significantly strengthen Canada’s private sector privacy law, create new rules for the responsible development and use of artificial intelligence (AI), and continue to put in place Canada’s Digital Charter.

The Digital Charter Implementation Act includes three proposed acts: the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act, and the Artificial Intelligence and Data Act (AIDA)- all of which have implications for Canadian businesses.

Bill C-27 follows an attempt by the Liberals to introduce Bill C-11 in 2020. The latter was the federal government’s attempt to reform privacy laws in Canada, but it failed to gain passage in Parliament after the then-federal privacy commissioner criticized the bill.

The proposed Artificial Intelligence and Data Act is meant to protect Canadians by ensuring high-impact AI systems are developed and deployed in a way that identifies, assesses and mitigates the risks of harm and bias.

For businesses developing or implementing AI this means that the act will outline criminal prohibitions and penalties regarding the use of data obtained unlawfully for AI development or where the reckless deployment of AI poses serious harm and where there is fraudulent intent to cause substantial economic loss through its deployment.

..

An AI and data commissioner will support the minister of innovation, science, and industry in ensuring companies comply with the act. The commissioner will be responsible for monitoring company compliance, ordering third-party audits, and sharing information with other regulators and enforcers as appropriate.

The commissioner would also be expected to outline clear criminal prohibitions and penalties regarding the use of data obtained unlawfully for AI development or where the reckless deployment of AI poses serious harm and where there is fraudulent intent to cause substantial economic loss through its deployment.

Canada already collaborates on AI standards to some extent with a number of countries. Canada, France, and 13 other countries launched an international AI partnership to guide policy development and “responsible adoption” in 2020.

The federal government also has the Pan-Canadian Artificial Intelligence Strategy for which it committed an additional $443.8 million over 10 years in Budget 2021. Ahead of the 2022 budget, Trudeau [Canadian Prime Minister Justin Trudeau] had laid out an extensive list of priorities for the innovation sector, including tasking Champagne with launching or expanding national strategy on AI, among other things.

Within the AI community, companies and groups have been looking at AI ethics for some time. Scotiabank donated $750,000 in funding to the University of Ottawa in 2020 to launch a new initiative to identify solutions to issues related to ethical AI and technology development. And Richard Zemel, co-founder of the Vector Institute [formed as part of the Pan-Canadian Artificial Intelligence Strategy], joined Integrate.AI as an advisor in 2018 to help the startup explore privacy and fairness in AI.

When it comes to the Consumer Privacy Protection Act, the Liberals said the proposed act responds to feedback received on the proposed legislation, and is meant to ensure that the privacy of Canadians will be protected, and that businesses can benefit from clear rules as technology continues to evolve.

“A reformed privacy law will establish special status for the information of minors so that they receive heightened protection under the new law,” a federal government spokesperson told the technical briefing.

..

The act is meant to provide greater controls over Canadians’ personal information, including how it is handled by organizations as well as giving Canadians the freedom to move their information from one organization to another in a secure manner.

The act puts the onus on organizations to develop and maintain a privacy management program that includes the policies, practices and procedures put in place to fulfill obligations under the act. That includes the protection of personal information, how requests for information and complaints are received and dealt with, and the development of materials to explain an organization’s policies and procedures.

The bill also ensures that Canadians can request that their information be deleted from organizations.

The bill provides the privacy commissioner of Canada with broad powers, including the ability to order a company to stop collecting data or using personal information. The commissioner will be able to levy significant fines for non-compliant organizations—with fines of up to five percent of global revenue or $25 million, whichever is greater, for the most serious offences.

The proposed Personal Information and Data Protection Tribunal Act will create a new tribunal to enforce the Consumer Privacy Protection Act.

Although the Liberal government said it engaged with stakeholders for Bill C-27, the Council of Canadian Innovators (CCI) expressed reservations about the process. Nick Schiavo, CCI’s director of federal affairs, said it had concerns over the last version of privacy legislation, and had hoped to present those concerns when the bill was studied at committee, but the previous bill died before that could happen.

Now the lawyers. Simon Hodgett, Kuljit Bhogal, and Sam Ip have written a June 27, 2022 overview, which highlights the key features from the perspective of Osler, a leading business law firm practising internationally from offices across Canada and in New York.

Maya Medeiros and Jesse Beatson authored a June 23, 2022 article for Norton Rose Fulbright, a global law firm, which notes a few ‘weak’ spots in the proposed legislation,

… While the AIDA is directed to “high-impact” systems and prohibits “material harm,” these and other key terms are not yet defined. Further, the quantum of administrative penalties will be fixed only upon the issuance of regulations. 

Moreover, the AIDA sets out publication requirements but it is unclear if there will be a public register of high-impact AI systems and what level of technical detail about the AI systems will be available to the public. More clarity should come through Bill C-27’s second and third readings in the House of Commons, and subsequent regulations if the bill passes.

The AIDA may have extraterritorial application if components of global AI systems are used, developed, designed or managed in Canada. The European Union recently introduced its Artificial Intelligence Act, which also has some extraterritorial application. Other countries will likely follow. Multi-national companies should develop a coordinated global compliance program.

I have two podcast episodes from Michael Geist, a lawyer and Canada Research Chair in Internet and E-Commerce Law at the University of Ottawa.

  • June 26, 2022: The Law Bytes Podcast, Episode 132: Ryan Black on the Government’s Latest Attempt at Privacy Law Reform “The privacy reform bill that is really three bills in one: a reform of PIPEDA, a bill to create a new privacy tribunal, and an artificial intelligence regulation bill. What’s in the bill from a privacy perspective and what’s changed? Is this bill any likelier to become law than an earlier bill that failed to even advance to committee hearings? To help sort through the privacy aspects of Bill C-27, Ryan Black, a Vancouver-based partner with the law firm DLA Piper (Canada) …” (about 45 mins.)
  • August 15, 2022: The Law Bytes Podcast, Episode 139: Florian Martin-Bariteau on the Artificial Intelligence and Data Act “Critics argue that regulations are long overdue, but have expressed concern about how much of the substance is left for regulations that are still to be developed. Florian Martin-Bariteau is a friend and colleague at the University of Ottawa, where he holds the University Research Chair in Technology and Society and serves as director of the Centre for Law, Technology and Society. He is currently a fellow at Harvard’s Berkman Klein Center for Internet and Society …” (about 38 mins.)

Council of Canadian Academies (CCA): science policy internship and a new panel on Public Safety in the Digital Age

It’s been a busy week for the Council of Canadian Academies (CCA); I don’t usually get two notices in such close order.

2022 science policy internship

The application deadline is Oct. 18, 2021; the work is remote; and the stipend for the 2020 internship was $18,500 for six months.

Here’s more from a September 13, 2021 CCA notice (received Sept. 13, 2021 via email),

CCA Accepting Applications for Internship Program

The program provides interns with an opportunity to gain experience working at the interface of science and public policy. Interns will participate in the development of assessments by conducting research in support of CCA’s expert panel process.

The internship program is a full-time commitment of six months and will be a remote opportunity due to the Covid-19 pandemic.

Applicants must be recent graduates with a graduate or professional degree, or post-doctoral fellows, with a strong interest in the use of evidence for policy. The application deadline is October 18, 2021. The start date is January 10, 2022. Applications and letters of reference should be addressed to Anita Melnyk at internship@cca-reports.ca.

More information about the CCA Internship Program and the application process can be found here. [Note: The link takes you to a page with information about a 2020 internship opportunity; presumably, the application requirements have not changed.]

Good luck!

Expert Panel on Public Safety in the Digital Age Announced

I have a few comments (see the ‘Concerns and hopes’ subhead) about this future report but first, here’s the announcement of the expert panel that was convened to look into the matter of public safety (received via email September 15, 2021),

CCA Appoints Expert Panel on Public Safety in the Digital Age

Access to the internet and digital technologies are essential for people, businesses, and governments to carry out everyday activities. But as more and more activities move online, people and organizations are increasingly vulnerable to serious threats and harms that are enabled by constantly evolving technology. At the request of Public Safety Canada, [emphasis mine] the Council of Canadian Academies (CCA) has formed an Expert Panel to examine leading practices that could help address risks to public safety while respecting human rights and privacy. Jennifer Stoddart, O.C., Strategic Advisor, Privacy and Cybersecurity Group, Fasken Martineau DuMoulin [law firm], will serve as Chair of the Expert Panel.

“The ever-evolving nature of crimes and threats that take place online present a huge challenge for governments and law enforcement,” said Ms. Stoddart. “Safeguarding public safety while protecting civil liberties requires a better understanding of the impacts of advances in digital technology and the challenges they create.”

As Chair, Ms. Stoddart will lead a multidisciplinary group with expertise in cybersecurity, social sciences, criminology, law enforcement, and law and governance. The Panel will answer the following question:

Considering the impact that advances in information and communications technologies have had on a global scale, what do current evidence and knowledge suggest regarding promising and leading practices that could be applied in Canada for investigating, preventing, and countering threats to public safety while respecting human rights and privacy?

“This is an important question, the answer to which will have both immediate and far-reaching implications for the safety and well-being of people living in Canada. Jennifer Stoddart and this expert panel are very well-positioned to answer it,” said Eric M. Meslin, PhD, FRSC, FCAHS, President and CEO of the CCA.

More information about the assessment can be found here.

The Expert Panel on Public Safety in the Digital Age:

  • Jennifer Stoddart (Chair), O.C., Strategic Advisor, Privacy and Cybersecurity Group, Fasken Martineau DuMoulin [law firm].
  • Benoît Dupont, Professor, School of Criminology, and Canada Research Chair in Cybersecurity and Research Chair for the Prevention of Cybercrime, Université de Montréal; Scientific Director, Smart Cybersecurity Network (SERENE-RISC). Note: This is one of Canada’s Networks of Centres of Excellence (NCE)
  • Richard Frank, Associate Professor, School of Criminology, Simon Fraser University; Director, International CyberCrime Research Centre. Note: This is an SFU/Society for the Policing of Cyberspace (POLCYB) partnership
  • Colin Gavaghan, Director, New Zealand Law Foundation Centre for Law and Policy in Emerging Technologies, Faculty of Law, University of Otago.
  • Laura Huey, Professor, Department of Sociology, Western University; Founder, Canadian Society of Evidence Based Policing [Can-SEPB].
  • Emily Laidlaw, Associate Professor and Canada Research Chair in Cybersecurity Law, Faculty of Law, University of Calgary.
  • Arash Habibi Lashkari, Associate Professor, Faculty of Computer Science, University of New Brunswick; Research Coordinator, Canadian Institute of Cybersecurity [CIC].
  • Christian Leuprecht, Class of 1965 Professor in Leadership, Department of Political Science and Economics, Royal Military College; Director, Institute of Intergovernmental Relations, School of Policy Studies, Queen’s University.
  • Florian Martin-Bariteau, Associate Professor of Law and University Research Chair in Technology and Society, University of Ottawa; Director, Centre for Law, Technology and Society.
  • Shannon Parker, Detective/Constable, Saskatoon Police Service.
  • Christopher Parsons, Senior Research Associate, Citizen Lab, Munk School of Global Affairs & Public Policy, University of Toronto.
  • Jad Saliba, Founder and Chief Technology Officer, Magnet Forensics Inc.
  • Heidi Tworek, Associate Professor, School of Public Policy and Global Affairs, and Department of History, University of British Columbia.

Oddly, there’s no mention that Jennifer Stoddart (Wikipedia entry) was Canada’s sixth privacy commissioner. Also, Fasken Martineau DuMoulin (her employer) changed its name to Fasken in 2017 (Wikipedia entry). The firm currently has offices in Canada, the UK, South Africa, and China (per the firm’s website).

Exactly how did the question get framed?

It’s always informative to look at the summary (from the report’s Public Safety in the Digital Age webpage on the CCA website),

Information and communications technologies have profoundly changed almost every aspect of life and business in the last two decades. While the digital revolution has brought about many positive changes, it has also created opportunities for criminal organizations and malicious actors [emphasis mine] to target individuals, businesses, and systems. Ultimately, serious crime facilitated by technology and harmful online activities pose a threat to the safety and well-being of people in Canada and beyond.

Damaging or criminal online activities can be difficult to measure and often go unreported. Law enforcement agencies and other organizations working to address issues such as the sexual exploitation of children, human trafficking, and violent extremism [emphasis mine] must constantly adapt their tools and methods to try and prevent and respond to crimes committed online.

A better understanding of the impacts of these technological advances on public safety and the challenges they create could help to inform approaches to protecting public safety in Canada.

This assessment will examine promising practices that could help to address threats to public safety related to the use of digital technologies while respecting human rights and privacy.

The Sponsor:

Public Safety Canada

The Question:

Considering the impact that advances in information and communications technologies have had on a global scale, what do current evidence and knowledge suggest regarding promising and leading practices that could be applied in Canada for investigating, preventing, and countering threats to public safety while respecting human rights and privacy?

Three things stand out for me: first, what is ‘public safety’?; second, ‘malicious actors’; and third, the examples used for the issues being addressed (more about this in the ‘Concerns and hopes’ subsection, which follows).

What is public safety?

Before launching into any comments, here’s a description of Public Safety Canada (from its About webpage), where you’ll find a hodgepodge,

Public Safety Canada was created in 2003 to ensure coordination across all federal departments and agencies responsible for national security and the safety of Canadians.

Our mandate is to keep Canadians safe from a range of risks such as natural disasters, crime and terrorism.

Our mission is to build a safe and resilient Canada.

The Public Safety Portfolio

A cohesive and integrated approach to Canada’s security requires cooperation across government. Together, these agencies have an annual budget of over $9 billion and more than 66,000 employees working in every part of the country.

Public Safety Partner Agencies

The Canada Border Services Agency (CBSA) manages the nation’s borders by enforcing Canadian laws governing trade and travel, as well as international agreements and conventions. CBSA facilitates legitimate cross-border traffic and supports economic development while stopping people and goods that pose a potential threat to Canada.

The Canadian Security Intelligence Service (CSIS) investigates and reports on activities that may pose a threat to the security of Canada. CSIS also provides security assessments, on request, to all federal departments and agencies.

The Correctional Service of Canada (CSC) helps protect society by encouraging offenders to become law-abiding citizens while exercising reasonable, safe, secure and humane control. CSC is responsible for managing offenders sentenced to two years or more in federal correctional institutions and under community supervision.

The Parole Board of Canada (PBC) is an independent body that grants, denies or revokes parole for inmates in federal prisons and provincial inmates in provinces without their own parole board. The PBC helps protect society by facilitating the timely reintegration of offenders into society as law-abiding citizens.

The Royal Canadian Mounted Police (RCMP) enforces Canadian laws, prevents crime and maintains peace, order and security.

So, Public Safety includes a spy agency (CSIS), the prison system (Correctional Service and Parole Board), the national police force (RCMP), and law enforcement at the borders with the Canada Border Services Agency (CBSA). None of the partner agencies is dedicated to natural disasters, although disasters are mentioned in the department’s mandate.

The focus is largely on criminal activity and espionage. On that note, a very senior civilian RCMP intelligence official, Cameron Ortis, was charged with passing secrets to foreign entities (malicious actors?). (See the September 13, 2021 [updated Sept. 15, 2021] news article by Amanda Connolly, Mercedes Stephenson, Stewart Bell, Sam Cooper & Rachel Browne for CTV news and the Sept. 18, 2019 [updated January 6, 2020] article by Douglas Quan for the National Post for more details.)

There appears to have been at least one other major security breach: the one involving Canada’s only level four laboratory, the Winnipeg-based National Microbiology Lab (NML). (See a June 10, 2021 article by Karen Pauls for Canadian Broadcasting Corporation news online for more details.)

As far as I’m aware, Ortis is still being held, with a trial date scheduled for September 2022 (see Catherine Tunney’s April 9, 2021 article for CBC news online), and, to date, no charges have been laid in the Winnipeg lab case.

Concerns and hopes

Ordinarily I’d note links and relationships between the various expert panel members, but in this case it would be a big surprise if they weren’t linked in some fashion, as the panel is heavily focused on cybersecurity (as per the panel members’ bios), which I imagine is a smallish community in Canada.

As I’ve made clear in the paragraphs leading into the comments, Canada appears to have seriously fumbled the ball where national and international cybersecurity is concerned.

So, getting back to those three things (public safety, ‘malicious actors’, and the examples used for the issues), I’m a bit puzzled.

Public safety, as best I can tell, is just about anything they'd like it to be. 'Malicious actors' is a term I've seen used to imply that a foreign power is behind the actions being held up for scrutiny.

The examples used for the issues being addressed, "sexual exploitation of children, human trafficking, and violent extremism," hint at a focus on cross-border crimes and criminal organizations, as well as like-minded individuals organizing violent and extremist acts, but not specifically at any national or international security concerns.

On a more mundane note, I’m a little surprised that identity theft wasn’t mentioned as an example.

I'm hopeful there will be some examination of emerging technologies such as quantum communication (specifically, encryption issues) and artificial intelligence. I also hope the report will include a discussion of mistakes and overreliance on technology (for a refresher course on what happens when organizations, such as the Canadian federal government, make mistakes in the digital world, search 'Phoenix payroll system', a preventable, made-in-Canada debacle dating from 2016 that is still being fixed to this day).

In the end, I think the only topic that can safely be excluded from the report is climate change; otherwise, it's a pretty open mandate, as far as can be told from publicly available information.

I noticed the international panel member is from New Zealand (the international component is almost always from the US, UK, northern Europe, and/or the Commonwealth). Given that New Zealand (as well as being part of the Commonwealth) belongs to the 'Five Eyes' intelligence community, which includes Canada, Australia, the UK, the US, and NZ, I was expecting a cybersecurity expert. If Professor Colin Gavaghan does have that expertise, it's not obvious on his University of Otago profile page (Note: Links have been removed),

Research interests

Colin is the first director of the New Zealand Law Foundation sponsored Centre for Law and Policy in Emerging Technologies. The Centre examines the legal, ethical and policy issues around new technologies. To date, the Centre has carried out work on biotechnology, nanotechnology, information and communication technologies and artificial intelligence.

In addition to emerging technologies, Colin lectures and writes on medical and criminal law.

Together with colleagues in Computer Science and Philosophy, Colin is the leader of a three-year project exploring the legal, ethical and social implications of artificial intelligence for New Zealand.

Background

Colin regularly advises on matters of technology and regulation. He is the first Chair of the NZ Police's Advisory Panel on Emergent Technologies, and a member of the Digital Council for Aotearoa, which advises the Government on digital technologies. Since 2017, he has been a member (and more recently Deputy Chair) of the Advisory Committee on Assisted Reproductive Technology. He was an expert witness in the High Court case of Seales v Attorney General, and has advised members of parliament on draft legislation.

He is a frustrated writer of science fiction, but compensates with occasional appearances on panels at SF conventions.

I appreciate the sense of humour evident in that last line.

Almost breaking news

On Wednesday, September 15, 2021, a new alliance in the Indo-Pacific region, the 'Three Eyes' (Australia, the UK, and the US, or AUKUS), was announced.

Interestingly, all three are part of the Five Eyes intelligence alliance comprised of Australia, Canada, New Zealand, the UK, and the US. Hmmm … Canada and New Zealand both border the Pacific, and, last I heard, the UK is still in Europe.

A September 17, 2021 article, “Canada caught off guard by exclusion from security pact” by Robert Fife and Steven Chase for the Globe and Mail (I’m quoting from my paper copy),

The Canadian government was surprised this week by the announcement of a new security pact among the United States, Britain and Australia, one that excluded Canada [and New Zealand too] and is aimed at confronting China’s growing military and political influence in the Indo-Pacific region, according to senior government officials.

Three officials, representing Canada’s Foreign Affairs, Intelligence and Defence departments, told the Globe and Mail that Ottawa was not consulted about the pact, and had no idea the trilateral security announcement was coming until it was made on Wednesday [September 15, 2021] by U.S. President Joe Biden, British Prime Minister Boris Johnson and Australian Prime Minister Scott Morrison.

The new trilateral alliance, dubbed AUKUS, after the initials of the three countries, will allow for greater sharing of information in areas such as artificial intelligence and cyber and underwater defence capabilities.

Fife and Chase have also written a September 17, 2021 Globe and Mail article titled, “Chinese Major-General worked with fired Winnipeg Lab scientist,”

… joint research conducted between Major-General Chen Wei and former Canadian government lab scientist Xiangguo Qiu indicates that co-operation between the Chinese military and scientists at the National Microbiology Laboratory (NML) went much higher than was previously known. The People’s Liberation Army is the military of China’s ruling Communist Party.

Given that no one overseeing the Canadian lab (a level 4 facility, which should have meant high security) seems to have known that Wei was a member of the military, and with the Cameron Ortis situation still looming, would you have included Canada in the new pact?

*ETA September 20, 2021: For anyone who’s curious about the Cameron Ortis case, there’s a Fifth Estate documentary (approximately 46 minutes): The Smartest Guy in the Room: Cameron Ortis and the RCMP Secrets Scandal.