Machine decision-making (artificial intelligence) in British Columbia’s government (Canada)

Jeremy Hainsworth’s September 19, 2023 article on the Vancouver is Awesome website was like a dash of cold water. I had no idea that plans for using AI (artificial intelligence) in municipal administration were so far advanced (although I did cover this AI development, “Predictive policing in Vancouver—the first jurisdiction in Canada to employ a machine learning system for property theft reduction” in a November 23, 2017 posting). From Hainsworth’s September 19, 2023 article, Note: A link has been removed,

Human discretion and the ability to follow decision-making must remain top of mind when employing artificial intelligence (AI) to provide public services, Union of BC Municipalities conference delegates heard Sept. 19 [2023].

And, delegates heard from Office of the Ombudsperson of B.C. representatives that decisions made by machines must be fair and transparent.

“This is the way of the future — using AI systems for delivering municipal services,” said Zoë Macmillan, office manager of investigations, health and local services.

The risk in getting it wrong on fairness and privacy issues, said Wendy Byrne, office consultation and training officer, is a loss of trust in government.

It’s an issue the office has addressed itself, due to the impacts automated decision-making could have on British Columbians in terms of the fairness they receive around public services. The issue has been covered in a June 2021 report, Getting Ahead of the Curve [emphasis mine]. The work was done jointly with B.C.’s Office of the Information and Privacy Commissioner.

And, said office representatives, there also need to be AI decision-making trails that can be audited, both for transparency in decision-making and for people appealing decisions made by machines.

She [Zoë Macmillan] said many B.C. communities are on the verge of implementing AI for providing citizens with services. In Vancouver and Kelowna, AI is already being used [emphasis mine] in some permitting systems.

The public, meanwhile, needs to be aware when an automated decision-making system is assisting them with an issue, she [Wendy Byrne] noted.

It’s not clear from Hainsworth’s article excerpts seen here, but the report, “Getting Ahead of the Curve,” was a joint Yukon and British Columbia (BC) effort. Here’s a link to the report (PDF) and an excerpt, Note: I’d call this an executive summary,

Message from the Officers

With the proliferation of instantaneous and personalized services increasingly being delivered to people in many areas in the private sector, the public is increasingly expecting the same approach when receiving government services. Artificial intelligence (AI) is touted as an effective, efficient and cost-saving solution to these growing expectations. However, ethical and legal concerns are being raised as governments in Canada and abroad are experimenting with AI technologies in decision-making under inadequate regulation and, at times, in a less than transparent manner.

As public service oversight officials upholding the privacy and fairness rights of citizens, it is our responsibility to be closely acquainted with emerging issues that threaten those rights. There is no timelier issue intersecting with our respective mandates as privacy commissioners and ombudsman than the increasing use of artificial intelligence by the governments and public bodies we oversee.

The digital era has brought swift and significant change to the delivery of public services. The benefits of providing the public with increasingly convenient and timely service have spurred a range of computer-based platforms, from digital assistants to automated systems of approval for a range of services – building permits, inmate releases, social assistance applications, and car insurance premiums [emphasis mine] to name a few. While this kind of machine-based service delivery was once narrowly applied in the public sector, the use of artificial intelligence by the public sector is gaining a stronger foothold in countries around the world, including here in Canada. As public bodies become larger and more complex, the perceived benefits of efficiency, accessibility and accuracy of algorithms to make decisions once made by humans can be initially challenging to refute.

Fairness and privacy issues resulting from the use of AI are well documented, with many commercial facial recognition systems and assessment tools demonstrating bias and augmenting the ability to use personal information in ways that infringe privacy interests. Similar privacy and fairness issues are raised by the use of AI in government. People often have no choice but to interact with government and the decisions of government can have serious, long-lasting impacts on our lives. A failure to consider how AI technologies create tension with the fairness and privacy obligations of democratic institutions poses risks for the public and undermines trust in government.

In examining examples of how these algorithms have been used in practice, this report demonstrates that there are serious legal and ethical concerns for public sector administrators. Key privacy concerns relate to the lack of transparency of closed proprietary systems that prove challenging to review, test and monitor. Current privacy laws do not contemplate the use of AI and as such lack obligations for key imperatives around the collection and use of personal information in machine-based systems. From a fairness perspective, the use of AI in the public sector challenges key pillars of administrative fairness. For example, how algorithmic decisions are made, explained, reviewed or appealed, and how bias is prevented all present challenging questions.

As the application of AI in public administration continues to gain momentum, the intent of this report is to provide both important context regarding the challenges AI presents in public sector decision-making, as well as practical recommendations that aim to set consistent parameters for transparency, accountability, legality and procedural fairness for AI’s use by public bodies. The critically important values of privacy protection and administrative fairness cannot be left behind as the field of AI continues to evolve and these principles must be more expressly articulated in legislation, policy and applicable procedural applications moving forward.

This joint report urges governments to respect and fulfill fairness and privacy principles in their adoption of AI technologies. It builds on extensive literature on public sector AI by providing concrete, technology-sensitive, implementable guidance on building fairness and privacy into public sector AI. The report also recommends capacity-building, co-operation and public engagement initiatives government should undertake to promote the public’s trust and buy-in of AI.

This report pinpoints the persistent challenges with AI that merit attention from a fairness and privacy perspective; identifies where existing regulatory measures and instruments for administrative fairness and privacy protection in the age of AI fall short and where they need to be enhanced; and sets out detailed, implementable guidance on incorporating administrative fairness and privacy principles across the various stages of the AI lifecycle, from inception and design, to testing, implementation and mainstreaming.

The final chapter contains our recommendations for the development of a framework to facilitate the responsible use of AI systems by governments. Our recommendations include:

– The need for public authorities to make a public commitment to guiding principles for the use of AI that incorporate transparency, accountability, legality, procedural fairness and the protection of privacy. These principles should apply to all existing and new programs or activities, be included in any tendering documents by public authorities for third-party contracts or AI systems delivered by service providers, and be used to assess legacy projects so they are brought into compliance within a reasonable timeframe.

– The need for public authorities to notify an individual when an AI system is used to make a decision about them and describe to the individual in a way that is understandable how that system operates.

– Government should promote capacity building, co-operation, and public engagement on AI. This should be carried out through public education initiatives, building subject-matter knowledge and expertise on AI across government ministries, developing capacity to support knowledge sharing and expertise between government and AI developers and vendors, and establishing or growing the capacity to develop open-source, high-quality data sets for training and testing Automated Decision Systems (ADS).

– Requiring all public authorities to complete and submit an Artificial Intelligence Fairness and Privacy Impact Assessment (AIFPIA) for all existing and future AI programs for review by the relevant oversight body.

– Special rules or restrictions for the use of highly sensitive information by AI.

… [pp. 1-3]

These are the contributors to the report: Alexander Agnello: Policy Analyst, B.C. Office of the Ombudsperson; Ethan Plato: Policy Analyst, B.C. Office of the Information and Privacy Commissioner; and Sebastian Paauwe: Investigator and Compliance Review Officer, Office of the Yukon Ombudsman and Information and Privacy Commissioner.

It’s a bit startling to see how pervasive “… automated systems of approval for a range of services – building permits, inmate releases, social assistance applications, and car insurance premiums …” already are. I’m not sure I’d call this 60-page report “Getting Ahead of the Curve” (PDF). It seems more like it was catching up, even in 2021.

Finally, there’s my October 27, 2023 post about the 2023 Canadian Science Policy Conference highlighting a few of the sessions. Scroll down to the second session, “901 – The new challenges of information in parliaments”, where you’ll find this,

… This panel proposes an overview … including a discussion on emerging issues impacting them, such as the integration of artificial intelligence and the risks of digital interference in democratic processes.

Interesting, eh?
