I stumbled across this June 8, 2022 AMC Networks news release in the last place I expected to see a STEM (science, technology, engineering, and mathematics) announcement, i.e., the website of a self-described global entertainment company,
AMC NETWORKS CONTENT ROOM TEAMS WITH THE AD COUNCIL TO EMPOWER GIRLS IN STEM, FEATURING “THE WALKING DEAD”
AMC Networks Content Room and the Ad Council, a non-profit and leading producer of social impact campaigns for 80 years, announced today a series of new public service advertisements (PSAs) that will highlight the power of girls in STEM (science, technology, engineering and math) against the backdrop of the global hit series “The Walking Dead.” In the spots, behind-the-scenes talent of the popular franchise, including Director Aisha Tyler, Costume Designer Vera Chow and Art Director Jasmine Garnet, showcase how STEM is used to bring the post-apocalyptic world of “The Walking Dead” to life on screen. Created by AMC Networks Content Room, the PSAs are part of the Ad Council’s national She Can STEM campaign, which encourages girls, trans youth and non-binary youth around the country to get excited about and interested in STEM.
The new creative consists of TV spots and custom videos created specifically for TikTok and Instagram. The spots also feature Gitanjali Rao, a 16-year-old scientist, inventor and activist, interviewing Tyler, Chow and Garnet discussing how they and their teams use STEM in the production of “The Walking Dead.” Using before and after visuals, each piece highlights the unique and unexpected uses of STEM in the making of the series. In addition to being part of the larger Ad Council campaign, the spots will be available on “The Walking Dead’s” social media platforms, including Facebook, Instagram, Twitter and YouTube pages, and across AMC Networks linear channels and digital platforms.
Said Kim Granito, EVP of AMC Networks Content Room: “We are thrilled to partner with the Ad Council to inspire young girls in STEM through the unexpected backdrop of ‘The Walking Dead.’ Over the last 11 years, this universe has been created by an array of insanely talented women that utilize STEM every day in their roles. This campaign will broaden perceptions of STEM beyond the stereotypes of lab coats and beakers, and hopefully inspire the next generation of talented women in STEM. Aisha Tyler, Vera Chow and Jasmine Garnet were a dream to work with and their shared enthusiasm for this mission is inspiring.”
“Careers in STEM are varied and can touch all aspects of our lives. We are proud to partner with AMC Networks Content Room on this latest work for the She Can STEM campaign. With it, we hope to inspire young girls, non-binary youth, and trans youth to recognize that their passion for STEM can impact countless industries – including the entertainment industry,” said Michelle Hillman, Chief Campaign Development Officer, Ad Council.
Women make up nearly half of the total college-educated workforce in the U.S., but they only constitute 27% of the STEM workforce, according to the U.S. Census Bureau. Research shows that many girls lose interest in STEM as early as middle school, and this path continues through high school and college, ultimately leading to an underrepresentation of women in STEM careers. She Can STEM aims to dismantle the intimidating perceived barrier of STEM fields by showing girls, non-binary youth, and trans youth how fun, messy, diverse and accessible STEM can be, encouraging them to dive in, no matter where they are in their STEM journey.
Since the launch of She Can STEM in September 2018, the campaign has been supported by a variety of corporate, non-profit and media partners. The current funder of the campaign is IF/THEN, an initiative of Lyda Hill Philanthropies. Non-profit partners include Black Girls Code, ChickTech, Girl Scouts of the USA, Girls Inc., Girls Who Code, National Center for Women & Information Technology, The New York Academy of Sciences and Society of Women Engineers.
About AMC Networks Inc.
AMC Networks (Nasdaq: AMCX) is a global entertainment company known for its popular and critically-acclaimed content. Its brands include targeted streaming services AMC+, Acorn TV, Shudder, Sundance Now, ALLBLK, and the newest addition to its targeted streaming portfolio, the anime-focused HIDIVE streaming service, in addition to AMC, BBC AMERICA (operated through a joint venture with BBC Studios), IFC, SundanceTV, WE tv and IFC Films. AMC Studios, the Company’s in-house studio, production and distribution operation, is behind some of the biggest titles and brands known to a global audience, including The Walking Dead, the Anne Rice catalog and the Agatha Christie library. The Company also operates AMC Networks International, its international programming business, and 25/7 Media, its production services business.
About Content Room
Content Room is AMC Networks’ award-winning branded entertainment studio that collaborates with advertising partners to build brand stories and create bespoke experiences across an expanding range of digital, social, and linear platforms. Content Room enables brands to fully tap into the company’s premium programming, distinct IP, deep talent roster and filmmaking roots through an array of creative partnership opportunities, from premium branded content and integrations to franchise and gaming extensions.
Content Room is also home to an award-winning digital content studio that produces dozens of original series annually, expanding popular AMC Networks scripted programming for both fans and advertising partners by leveraging the massive built-in series and talent fandoms.
About the Ad Council
The Ad Council is where creativity and causes converge. The non-profit organization brings together the most creative minds in advertising, media, technology and marketing to address many of the nation’s most important causes. The Ad Council has created many of the most iconic campaigns in advertising history: Friends Don’t Let Friends Drive Drunk. Smokey Bear. Love Has No Labels.
The Ad Council’s innovative social good campaigns raise awareness, inspire action and save lives. To learn more, visit AdCouncil.org, follow the Ad Council’s communities on Facebook and Twitter, and view the creative on YouTube.
You can find the ‘She Can STEM’ Ad Council initiative here.
Canadian women and the AI4Good Lab
A June 9, 2022 posting on the Borealis AI website describes an artificial intelligence (AI) initiative designed to encourage women to enter the field,
The AI4Good Lab is one of those programs that creates exponential opportunities. As the leading Canadian AI-training initiative for women-identified STEM students, the lab helps encourage diversity in the field of AI. Participants work together to use AI to solve a social problem, delivering untold benefits to their local communities. And they work shoulder-to-shoulder with other leaders in the field of AI, building their networks and expanding the ecosystem.
At this year’s  AI4Good Lab Industry Night, program partners – like Borealis AI, RBC [Royal Bank of Canada], DeepMind, Ivado and Google – had an opportunity to (virtually) meet the nearly 90 participants of this year’s program. Many of the program’s alumni were also in attendance. So, too, were representatives from CIFAR [Canadian Institute for Advanced Research], one of Canada’s leading global research organizations.
Industry participants – including Dr. Eirene Seiradaki, Director of Research Partnerships at Borealis AI, Carey Mende-Gibson, RBC’s Location Intelligence ambassador, and Lucy Liu, Director of Data Science at RBC – talked with attendees about their experiences in the AI industry, discussed career opportunities and explored various career paths that the participants could take in the industry. For the entire two hours, our three tables and our virtually cozy couches were filled to capacity. It was only after the end of the event that we had the chance to exchange visits to the tables of our partners from CIFAR and AMII [Alberta Machine Intelligence Institute]. Eirene did not miss the opportunity to catch up with our good friend, Warren Johnston, and hear first-hand the news from AMII’s recent AI Week 2022.
Borealis AI is funded by the Royal Bank of Canada. Somebody wrote this for the homepage (presumably tongue in cheek),
The AI4Good Lab is a 7-week program that equips women and people of marginalized genders with the skills to build their own machine learning projects. We emphasize mentorship and curiosity-driven learning to prepare our participants for a career in AI.
The program is designed to open doors for those who have historically been underrepresented in the AI industry. Together, we are building a more inclusive and diverse tech culture in Canada while inspiring the next generation of leaders to use AI as a tool for social good.
The most recent programme ran May 3 – June 21, 2022, in Montréal, Toronto, and Edmonton.
There are a number of AI for Good initiatives, including this one from the International Telecommunication Union (a United Nations agency).
For the curious, I have a May 10, 2018 post “The Royal Bank of Canada reports ‘Humans wanted’ and some thoughts on the future of work, robots, and artificial intelligence” where I ‘examine’ RBC and its AI initiatives.
The 35th Canadian Conference on Artificial Intelligence will take place virtually in Toronto, Ontario, from 30 May to 3 June, 2022. All presentations and posters will be online, with in-person social events to be scheduled in Toronto for those who are able to attend in-person. Viewing rooms and isolated presentation facilities will be available for all visitors to the University of Toronto during the event.
The event is collocated with the Computer and Robot Vision conferences. These events (AI·CRV 2022) will bring together hundreds of leaders in research, industry, and government, as well as Canada’s most accomplished students. They showcase Canada’s ingenuity, innovation and leadership in intelligent systems and advanced information and communications technology. A single registration lets you attend any session in the two conferences, which are scheduled in parallel tracks.
The conference proceedings are published on PubPub, an open-source, privacy-respecting, and open access online platform. They are submitted to be indexed and abstracted in leading indexing services such as DBLP, ACM, Google Scholar.
I can’t tell if ‘Responsible AI’ has been included as a specific topic in previous conferences but 2022 is definitely hosting a couple of sessions based on that theme, from the Responsible AI activities webpage,
Keynote speaker: Julia Stoyanovich
New York University
“Building Data Equity Systems”
Equity as a social concept — treating people differently depending on their endowments and needs to provide equality of outcome rather than equality of treatment — lends a unifying vision for ongoing work to operationalize ethical considerations across technology, law, and society. In my talk I will present a vision for designing, developing, deploying, and overseeing data-intensive systems that consider equity as an essential objective. I will discuss ongoing technical work, and will place this work into the broader context of policy, education, and public outreach.
Biography: Julia Stoyanovich is an Institute Associate Professor of Computer Science & Engineering at the Tandon School of Engineering, Associate Professor of Data Science at the Center for Data Science, and Director of the Center for Responsible AI at New York University (NYU). Her research focuses on responsible data management and analysis: on operationalizing fairness, diversity, transparency, and data protection in all stages of the data science lifecycle. She established the “Data, Responsibly” consortium and served on the New York City Automated Decision Systems Task Force, by appointment from Mayor de Blasio. Julia developed and has been teaching courses on Responsible Data Science at NYU, and is a co-creator of an award-winning comic book series on this topic. In addition to data ethics, Julia works on the management and analysis of preference and voting data, and on querying large evolving graphs. She holds M.S. and Ph.D. degrees in Computer Science from Columbia University, and a B.S. in Computer Science and in Mathematics & Statistics from the University of Massachusetts at Amherst. She is a recipient of an NSF CAREER award and a Senior Member of the ACM.
Panel on ethical implications of AI
Luke Stark, Faculty of Information and Media Studies, Western University
Luke Stark is an Assistant Professor in the Faculty of Information and Media Studies at Western University in London, ON. His work interrogating the historical, social, and ethical impacts of computing and AI technologies has appeared in journals including The Information Society, Social Studies of Science, and New Media & Society, and in popular venues like Slate, The Globe and Mail, and The Boston Globe. Luke was previously a Postdoctoral Researcher in AI ethics at Microsoft Research, and a Postdoctoral Fellow in Sociology at Dartmouth College; he holds a PhD from the Department of Media, Culture, and Communication at New York University, and a BA and MA from the University of Toronto.
Nidhi Hegde, Associate Professor in Computer Science and Amii [Alberta Machine Intelligence Institute] Fellow at the University of Alberta
Nidhi is a Fellow and Canada CIFAR [Canadian Institute for Advanced Research] AI Chair at Amii and an Associate Professor in the Department of Computing Science at the University of Alberta. Before joining UAlberta, she spent many years in industry research labs. Most recently, she was a Research team lead at Borealis AI (a research institute at Royal Bank of Canada), where her team worked on privacy-preserving methods for machine learning models and other applied problems for RBC. Prior to that, she spent many years in research labs in Europe working on a variety of interesting and impactful problems. She was a researcher at Bell Labs, Nokia, in France from January 2015 to March 2018, where she led a new team focussed on Maths and Algorithms for Machine Learning in Networks and Systems, in the Maths and Algorithms group of Bell Labs. She also spent a few years at the Technicolor Paris Research Lab working on social network analysis, smart grids, privacy, and recommendations. Nidhi is an associate editor of the IEEE/ACM Transactions on Networking, and an editor of the Elsevier Performance Evaluation Journal.
Karina Vold, Assistant Professor, Institute for the History and Philosophy of Science and Technology, University of Toronto
Dr. Karina Vold is an Assistant Professor at the Institute for the History and Philosophy of Science and Technology at the University of Toronto. She is also a Faculty Affiliate at the U of T Schwartz Reisman Institute for Technology and Society, a Faculty Associate at the U of T Centre for Ethics, and an Associate Fellow at the University of Cambridge’s Leverhulme Centre for the Future of Intelligence. Vold specialises in Philosophy of Cognitive Science and Philosophy of Artificial Intelligence, and her recent research has focused on human autonomy, cognitive enhancement, extended cognition, and the risks and ethics of AI.
Elissa Strome, Executive Director, Pan-Canadian Artificial Intelligence Strategy at CIFAR
Elissa is Executive Director, Pan-Canadian Artificial Intelligence Strategy at CIFAR, working with research leaders across the country to implement Canada’s national research strategy in AI. Elissa completed her PhD in Neuroscience from the University of British Columbia in 2006. Following a post-doc at Lund University, in Sweden, she decided to pursue a career in research strategy, policy and leadership. In 2008, she joined the University of Toronto’s Office of the Vice-President, Research and Innovation and was Director of Strategic Initiatives from 2011 to 2015. In that role, she led a small team dedicated to advancing the University’s strategic research priorities, including international institutional research partnerships, the institutional strategy for prestigious national and international research awards, and the establishment of the SOSCIP [Southern Ontario Smart Computing Innovation Platform] research consortium in 2012. From 2015 to 2017, Elissa was Executive Director of SOSCIP, leading the 17-member industry-academic consortium through a major period of growth and expansion, and establishing SOSCIP as Ontario’s leading platform for collaborative research and development in data science and advanced computing.
Tutorial on AI and the Law
Prof. Maura R. Grossman, University of Waterloo, and
Hon. Paul W. Grimm, United States District Court for the District of Maryland
AI applications are becoming more and more ubiquitous in almost every field of endeavor, and the same is true of the legal industry. This panel, consisting of an experienced lawyer and computer scientist and a U.S. federal trial court judge, will discuss how AI is currently being used in the legal profession, what adoption has been like since AI was introduced to law in about 2009, what legal and ethical issues AI applications have raised in the legal system, and how a sitting trial court judge, who is typically not an expert in AI, approaches AI evidence, in particular the determination of whether or not to admit it.
How is AI being used in the legal industry today?
What has the legal industry’s reaction been to legal AI applications?
What are some of the biggest legal and ethical issues implicated by legal and other AI applications?
How does a sitting trial court judge evaluate AI evidence when making a determination of whether to admit that AI evidence or not?
What considerations go into the trial judge’s decision?
What happens if the judge is not an expert in AI? Do they recuse?
You may recognize the name Julia Stoyanovich, as she was mentioned here in my March 23, 2022 posting titled ‘The “We are AI” series gives citizens a primer on AI,’ about a series of peer-to-peer workshops aimed at introducing the basics of AI to the public. There’s also a comic book series associated with it, and all of the materials are available for free. It’s all there in the posting.
Virtual Meet and Greet on Responsible AI across Canada
Given the many activities that are fortunately happening around the responsible and ethical aspects of AI here in Canada, we are organizing an event in conjunction with Canadian AI 2022 this year to become familiar with what everyone is doing and what activities they are engaged in.
It would be wonderful to have a unified community here in Canada around responsible AI so we can support each other and find ways to more effectively collaborate and synergize. We are aiming for a casual, discussion-oriented event rather than talks or formal presentations.
The meet and greet will be hosted by Ebrahim Bagheri, Eleni Stroulia and Graham Taylor. If you are interested in participating, please email Ebrahim Bagheri (email@example.com).
Thank you to the co-chairs for getting the word out about the Responsible AI topic at the conference,
Responsible AI Co-chairs
Ebrahim Bagheri, Professor, Electrical, Computer, and Biomedical Engineering, Ryerson University
Eleni Stroulia, Professor, Department of Computing Science; Acting Vice Dean, Faculty of Science; Director, AI4Society Signature Area, University of Alberta
The organization which hosts these conferences has an almost palindromic abbreviation: CAIAC, for Canadian Artificial Intelligence Association (CAIA) or Association Intelligence Artificielle Canadienne (AIAC). Yes, you do have to read it in both English and French, and the C at one end or the other gets knocked off depending on which language you’re using, which is why it’s only almost palindromic.
The CAIAC is almost 50 years old (under various previous names) and has its website here.
*April 22, 2022 at 1400 hours PT removed ‘the’ from this section of the headline: “… from 30 May to 3 June, 2022.” and removed period from the end.
The Canadian Broadcasting Corporation’s (CBC) science television series, The Nature of Things, which has been broadcast since November 1960, explored the world of emotional, empathic and creative artificial intelligence (AI) in a Friday, November 19, 2021 telecast titled The Machine That Feels,
The Machine That Feels explores how artificial intelligence (AI) is catching up to us in ways once thought to be uniquely human: empathy, emotional intelligence and creativity.
As AI moves closer to replicating humans, it has the potential to reshape every aspect of our world – but most of us are unaware of what looms on the horizon.
Scientists see AI technology as an opportunity to address inequities and make a better, more connected world. But it also has the capacity to do the opposite: to stoke division and inequality and disconnect us from fellow humans. The Machine That Feels, from The Nature of Things, shows viewers what they need to know about a field that is advancing at a dizzying pace, often away from the public eye.
What does it mean when AI makes art? Can AI interpret and understand human emotions? How is it possible that AI creates sophisticated neural networks that mimic the human brain? The Machine That Feels investigates these questions, and more.
In Vienna, composer Walter Werzowa has — with the help of AI — completed Beethoven’s previously unfinished 10th symphony. By feeding data about Beethoven, his music, his style and the original scribbles on the 10th symphony into an algorithm, AI has created an entirely new piece of art.
In Atlanta, Dr. Ayanna Howard and her robotics lab at Georgia Tech are teaching robots how to interpret human emotions. Where others see problems, Howard sees opportunity: how AI can help fill gaps in education and health care systems. She believes we need a fundamental shift in how we perceive robots: let’s get humans and robots to work together to help others.
At Tufts University in Boston, a new type of biological robot has been created: the xenobot. The size of a grain of sand, xenobots are grown from frog heart and skin cells, and combined with the “mind” of a computer. Programmed with a specific task, they can move together to complete it. In the future, they could be used for environmental cleanup, digesting microplastics and targeted drug delivery (like releasing chemotherapy compounds directly into tumours).
The film includes interviews with global leaders, commentators and innovators from the AI field, including Geoff Hinton, Yoshua Bengio, Ray Kurzweil and Douglas Coupland, who highlight some of the innovative and cutting-edge AI technologies that are changing our world.
The Machine That Feels focuses on one central question: in the flourishing age of artificial intelligence, what does it mean to be human?
I’ll get back to that last bit, “… what does it mean to be human?” later.
There’s a lot to appreciate in this 44 min. programme. As you’d expect, there was a significant chunk of time devoted to research being done in the US but Poland and Japan also featured and Canadian content was substantive. A number of tricky topics were covered and transitions from one topic to the next were smooth.
In the end credits, I counted over 40 source materials from Getty Images, Google Canada, Gatebox, amongst others. It would have been interesting to find out which segments were produced by CBC.
David Suzuki’s (programme host) script was well written and his narration was enjoyable, engaging, and non-intrusive. That last quality is not always true of CBC hosts who can fall into the trap of overdramatizing the text.
I have followed artificial intelligence stories in a passive way (i.e., I don’t seek them out) for many years. Even so, there was a lot of material in the programme that was new to me.
In the The Machine That Feels, a documentary from The Nature of Things, we meet Kondo Akihiko, a Tokyo resident who “married” a hologram of virtual pop singer Hatsune Miku using a certificate issued by Gatebox (the marriage isn’t recognized by the state, and Gatebox acknowledges the union goes “beyond dimensions”).
Overall, this Nature of Things episode embraces certainty, which means the question of what it means to be human is referenced rather than seriously discussed. It is an unanswerable philosophical question, and the programme is ill-equipped to address it, especially since none of the commentators are philosophers or seem inclined to philosophize.
The programme presents AI as a juggernaut. Briefly mentioned is the notion that we need to make some decisions about how our juggernaut is developed and utilized. No one discusses how we go about making changes to systems that are already making critical decisions for us. (For more about AI and decision-making, see my February 28, 2017 posting and scroll down to the ‘Algorithms and big data’ subhead for Cathy O’Neil’s description of how important decisions that affect us are being made by AI systems. She is the author of the 2016 book, ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’; still a timely read.)
In fact, the programme’s tone is mostly one of breathless excitement. A few misgivings are expressed, e.g., one woman with an artificial ‘texting friend’ (Replika, a chatbot app) noted that it can ‘get into your head’: in one chat, her ‘friend’ told her that all of a woman’s worth is based on her body. She pushed back but intimated that someone more vulnerable could find that messaging difficult to deal with.
The sequence featuring Akihiko and his hologram ‘wife’ is followed by one suggesting that people might become more isolated and emotionally stunted as they interact with artificial friends. It should be noted that Akihiko’s ‘wife’ is described as ‘perfect’. I gather perfection means that you are always understanding and have no needs of your own. She also seems to be about 18″ high.
Akihiko has obviously been asked about his ‘wife’ before as his answers are ready. They boil down to “there are many types of relationships” and there’s nothing wrong with that. It’s an intriguing thought which is not explored.
Also unexplored, these relationships could be said to resemble slavery. After all, you pay for these friends over which you have control. But perhaps that’s alright since AI friends don’t have consciousness. Or do they? In addition to not being able to answer the question, “what is it to be human?” we still can’t answer the question, “what is consciousness?”
AI and creativity
The Nature of Things team works fast. ‘Beethoven X – The AI Project’ had its first performance on October 9, 2021. (See my October 1, 2021 post ‘Finishing Beethoven’s unfinished 10th Symphony’ for the technical perspective of Ahmed Elgammal, Director of the Art & AI Lab at Rutgers University. Briefly, Beethoven died before completing his 10th symphony, and a number of computer scientists, musicologists, and musicians collaborated, with the help of AI, to finish it.)
The one listener in the hall during a performance (Felix Mayer, music professor at the Technical University of Munich) doesn’t consider the work to be a piece of music. He does have a point. Beethoven left some notes, but this ‘10th’ is at least partly mathematical guesswork: a set of probabilities, with an algorithm choosing which note comes next.
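To make that ‘probabilities’ point concrete: the simplest version of the idea is a first-order Markov chain, which counts how often each note follows another in existing material and then samples accordingly. This is only a toy sketch for illustration (the actual Beethoven X project used far more sophisticated machine learning models, and the melody below is just a rough stand-in):

```python
import random
from collections import defaultdict

def build_transitions(notes):
    """Count how often each note follows another in a melody."""
    counts = defaultdict(lambda: defaultdict(int))
    for current, nxt in zip(notes, notes[1:]):
        counts[current][nxt] += 1
    return counts

def next_note(counts, current, rng):
    """Pick the next note, weighted by how often it followed `current`."""
    candidates = list(counts[current].keys())
    weights = list(counts[current].values())
    return rng.choices(candidates, weights=weights, k=1)[0]

# A toy melody (roughly the opening motif of Beethoven's Fifth).
melody = ["G", "G", "G", "Eb", "F", "F", "F", "D", "G", "G", "G", "Eb"]
counts = build_transitions(melody)

rng = random.Random(42)
generated = ["G"]
for _ in range(7):
    generated.append(next_note(counts, generated[-1], rng))
print(generated)
```

The output sounds vaguely like the source because every transition it makes is one Beethoven actually wrote, but nothing in the procedure knows anything about music: it is, as Mayer suggests, just weighted guessing.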
Another artist was also represented in the programme. Puzzlingly, it was the still-living Douglas Coupland. In my opinion, he’s better known as a visual artist than a writer (his Wikipedia entry lists him as a novelist first) but he has succeeded greatly in both fields.
What makes his inclusion in the Nature of Things ‘The Machine That Feels’ programme puzzling, is that it’s not clear how he worked with artificial intelligence in a collaborative fashion. Here’s a description of Coupland’s ‘AI’ project from a June 29, 2021 posting by Chris Henry on the Google Outreach blog (Note: Links have been removed),
… when the opportunity presented itself to explore how artificial intelligence (AI) inspires artistic expression — with the help of internationally renowned Canadian artist Douglas Coupland — the Google Research team jumped on it. This collaboration, with the support of Google Arts & Culture, culminated in a project called Slogans for the Class of 2030, which spotlights the experiences of the first generation of young people whose lives are fully intertwined with the existence of AI.
This collaboration was brought to life by first introducing Coupland’s written work to a machine learning language model. Machine learning is a form of AI that provides computer systems the ability to automatically learn from data. In this case, Google research scientists tuned a machine learning algorithm with Coupland’s 30-year body of written work — more than a million words — so it would familiarize itself with the author’s unique style of writing. From there, curated general-public social media posts on selected topics were added to teach the algorithm how to craft short-form, topical statements. [emphases mine]
Once the algorithm was trained, the next step was to process and reassemble suggestions of text for Coupland to use as inspiration to create twenty-five Slogans for the Class of 2030. [emphasis mine]
“I would comb through ‘data dumps’ where characters from one novel were speaking with those in other novels in ways that they might actually do. It felt like I was encountering a parallel universe Doug,” Coupland says. “And from these outputs, the statements you see here in this project appeared like gems. Did I write them? Yes. No. Could they have existed without me? No.” [emphases mine]
So, the algorithms crunched through Coupland’s written work and social media texts to produce slogans, which Coupland then ‘combed through’ to pick out 25 for the ‘Slogans For The Class of 2030’ project. (Note: In the programme, he says that he started a sentence and then the AI system completed it with material gleaned from his own writings, which brings to mind Exquisite Corpse, a collaborative game for writers originated by the Surrealists, possibly as early as 1918.)
The ‘slogans’ project also reminds me of William S. Burroughs and the cut-up technique used in his work. From the William S. Burroughs Cut-up technique webpage on the Language is a Virus website (Thank you to Lake Rain Vajra for a very interesting website),
The cutup is a mechanical method of juxtaposition in which Burroughs literally cuts up passages of prose by himself and other writers and then pastes them back together at random. This literary version of the collage technique is also supplemented by literary use of other media. Burroughs transcribes taped cutups (several tapes spliced into each other), film cutups (montage), and mixed media experiments (results of combining tapes with television, movies, or actual events). Thus Burroughs’s use of cutups develops his juxtaposition technique to its logical conclusion as an experimental prose method, and he also makes use of all contemporary media, expanding his use of popular culture.
[Burroughs says] “All writing is in fact cut-ups. A collage of words read heard overheard. What else? Use of scissors renders the process explicit and subject to extension and variation. Clear classical prose can be composed entirely of rearranged cut-ups. Cutting and rearranging a page of written words introduces a new dimension into writing enabling the writer to turn images in cinematic variation. Images shift sense under the scissors smell images to sound sight to sound to kinesthetic. This is where Rimbaud was going with his color of vowels. And his “systematic derangement of the senses.” The place of mescaline hallucination: seeing colors tasting sounds smelling forms.
“The cut-ups can be applied to other fields than writing. Dr Neumann [emphasis mine] in his Theory of Games and Economic behavior introduces the cut-up method of random action into game and military strategy: assume that the worst has happened and act accordingly. … The cut-up method could be used to advantage in processing scientific data. [emphasis mine] How many discoveries have been made by accident? We cannot produce accidents to order. The cut-ups could add new dimension to films. Cut gambling scene in with a thousand gambling scenes all times and places. Cut back. Cut streets of the world. Cut and rearrange the word and image in films. There is no reason to accept a second-rate product when you can have the best. And the best is there for all. Poetry is for everyone . . .”
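The mechanics Burroughs describes, cutting passages into fragments and pasting them back together at random, are simple enough to sketch in a few lines of code. This toy version works on words rather than scissors and paper; the fragment size is an arbitrary choice, not anything Burroughs prescribed:

```python
import random

def cut_up(text, fragment_len=3, seed=None):
    """Cut a passage into fixed-size word fragments and paste them back at random."""
    words = text.split()
    fragments = [words[i:i + fragment_len] for i in range(0, len(words), fragment_len)]
    rng = random.Random(seed)
    rng.shuffle(fragments)
    return " ".join(word for fragment in fragments for word in fragment)

passage = "All writing is in fact cut-ups a collage of words read heard overheard"
print(cut_up(passage, fragment_len=3, seed=1))
```

Every word of the original survives the procedure; only the order changes, which is precisely the juxtaposition effect Burroughs was after.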
Here’s Burroughs on the history of writers and cutups (thank you to QUEDEAR for posting this clip),
You can hear Burroughs talk about the technique and how he started using it in 1959.
There is no explanation from Coupland as to how his project differs substantively from Burroughs’s cut-ups or a session of Exquisite Corpse. The use of a computer programme to crunch through data and give output doesn’t seem all that exciting. *(More about computers and chatbots at end of posting).* It’s hard to know if this was an interview situation where he wasn’t asked the question or if the editors decided against including it.
Given that Ishiguro’s 2021 book (Klara and the Sun) is focused on an artificial friend and raises the question of ‘what does it mean to be human’, as well as the related question, ‘what is the nature of consciousness’, it would have been interesting to hear from him. He spent a fair amount of time looking into research on machine learning in preparation for his book. Maybe he was too busy?
AI and emotions
The work being done by Georgia Tech’s Dr. Ayanna Howard and her robotics lab is fascinating. They are teaching robots how to interpret human emotions. The segment which features researchers teaching and interacting with robots, Pepper and Salt, also touches on AI and bias.
Watching two African American researchers talk about the ways in which AI is unable to read emotions on ‘black’ faces as accurately as ‘white’ faces is quite compelling. It also reinforces the uneasiness you might feel after the ‘Replika’ segment where an artificial friend informs a woman that her only worth is her body.
(Interestingly, Pepper and Salt are produced by SoftBank Robotics, part of SoftBank, a multinational Japanese conglomerate [see a June 28, 2021 article by Ian Carlos Campbell for The Verge], whose entire management team is male according to their About page.)
While Howard is very hopeful about the possibilities of a machine that can read emotions, she doesn’t explore (on camera) any means for pushing back against bias other than training AI by using more black faces to help them learn. Perhaps more representative management and coding teams in technology companies?
While the programme largely focused on AI as an algorithm on a computer, robots can be enabled by AI (as can be seen in the segment with Dr. Howard).
“I’ve always felt that robots shouldn’t just be modeled after humans [emphasis mine] or be copies of humans,” he [Guy Hoffman, assistant professor at Cornell University] said. “We have a lot of interesting relationships with other species. Robots could be thought of as one of those ‘other species,’ not trying to copy what we do but interacting with us with their own language, tapping into our own instincts.”
This brings the question back to, what is consciousness?
What scientists aren’t taught
Dr. Howard notes that scientists are not taught to consider the implications of their work. Her comment reminded me of a question I was asked many years ago after a presentation; it concerned whether or not science had any morality. (I said, no.)
My reply angered an audience member (a visual artist who was working with scientists at the time) as she took it personally and started defending scientists as good people who care and have morals and values. She failed to understand that the way in which we teach science conforms to a notion that somewhere there are scientific facts which are neutral and objective. Society and its values are irrelevant in the face of the larger ‘scientific truth’ and, as a consequence, you don’t need to teach or discuss how your values or morals affect that truth or what the social implications of your work might be.
Science is practiced without much if any thought to values. By contrast, there is the medical injunction, “Do no harm,” which suggests to me that someone recognized competing values. E.g., If your important and worthwhile research is harming people, you should ‘do no harm’.
The experts, the connections, and the Canadian content
It’s been a while since I’ve seen Ray Kurzweil mentioned but he seems to be getting more attention these days. (See this November 16, 2021 posting by Jonny Thomson titled, “The Singularity: When will we all become super-humans? Are we really only a moment away from “The Singularity,” a technological epoch that will usher in a new era in human evolution?” on The Big Think for more). Note: I will have a little more about evolution later in this post.
Interestingly, Kurzweil is employed by Google these days (see his Wikipedia entry, the column to the right). So is Geoffrey Hinton, another one of the experts in the programme (see Hinton’s Wikipedia entry, the column to the right, under Institutions).
I’m not sure about Yoshua Bengio’s relationship with Google but he’s a professor at the Université de Montréal, and he’s the Scientific Director for Mila (Quebec’s Artificial Intelligence research institute) and IVADO (Institut de valorisation des données). Note: IVADO is not particularly relevant to what’s being discussed in this post.
Google invests $4.5 Million in Montreal AI Research
A new grant from Google for the Montreal Institute for Learning Algorithms (MILA) will fund seven faculty across a number of Montreal institutions and will help tackle some of the biggest challenges in machine learning and AI, including applications in the realm of systems that can understand and generate natural language. In other words, better understand a fan’s enthusiasm for Les Canadien [sic].
Google is expanding its academic support of deep learning at MILA, renewing Yoshua Bengio’s Focused Research Award and offering Focused Research Awards to MILA faculty at University of Montreal and McGill University:
Google reaffirmed their commitment to Mila in 2020 with a grant worth almost $4M (from a November 13, 2020 posting on the Mila website, Note: A link has been removed),
Google Canada announced today [November 13, 2020] that it will be renewing its funding of Mila – Quebec Artificial Intelligence Institute, with a generous pledge of nearly $4M over a three-year period. Google previously invested $4.5M US in 2016, enabling Mila to grow from 25 to 519 researchers.
In a piece written for Google’s Official Canada Blog, Yoshua Bengio, Mila Scientific Director, says that this year marked a “watershed moment for the Canadian AI community,” as the COVID-19 pandemic created unprecedented challenges that demanded rapid innovation and increased interdisciplinary collaboration between researchers in Canada and around the world.
“COVID-19 has changed the world forever and many industries, from healthcare to retail, will need to adapt to thrive in our ‘new normal.’ As we look to the future and how priorities will shift, it is clear that AI is no longer an emerging technology but a useful tool that can serve to solve world problems. Google Canada recognizes not only this opportunity but the important task at hand and I’m thrilled they have reconfirmed their support of Mila with an additional $3,95 million funding grant until 22.“
– Yoshua Bengio, for Google’s Official Canada Blog
Interesting, eh? Of course, Douglas Coupland is working with Google, presumably for money, and that would connect over 50% of the Canadian content (Douglas Coupland, Yoshua Bengio, and Geoffrey Hinton; Kurzweil is an American) in the programme to Google.
My hat’s off to Google’s marketing communications and public relations teams.
Anthony Morgan of Science Everywhere also provided some Canadian content. His LinkedIn profile indicates that he’s working on a PhD in molecular science, which is described this way, “My work explores the characteristics of learning environments, that support critical thinking and the relationship between critical thinking and wisdom.”
Morgan is also the founder and creative director of Science Everywhere, from his LinkedIn profile, “An events & media company supporting knowledge mobilization, community engagement, entrepreneurship and critical thinking. We build social tools for better thinking.”
There is this from his LinkedIn profile,
I develop, create and host engaging live experiences & media to foster critical thinking.
I’ve spent my 15+ years studying and working in psychology and science communication, thinking deeply about the most common individual and societal barriers to critical thinking. As an entrepreneur, I lead a team to create, develop and deploy cultural tools designed to address those barriers. As a researcher I study what we can do to reduce polarization around science.
There’s a lot more to Morgan (do look him up; he has connections to the CBC and other media outlets). The difficulty is: why was he chosen to talk about artificial intelligence and emotions and creativity when he doesn’t seem to know much about the topic? He does mention GPT-3, an AI language model. He seems to be acting as an advocate for AI although he offers this bit of almost cautionary wisdom, “… algorithms are sets of instructions.” (You can find out more about it in my April 27, 2021 posting. There’s also this November 26, 2021 posting [The Inherent Limitations of GPT-3] by Andrey Kurenkov, a PhD student with the Stanford [University] Vision and Learning Lab.)
Most of the cautionary commentary comes from Luke Stark, assistant professor at Western [Ontario] University’s Faculty of Information and Media Studies. He’s the one who mentions stunted emotional growth.
Before moving on, there is another set of connections through the Pan-Canadian Artificial Intelligence Strategy, a Canadian government science funding initiative announced in the 2017 federal budget. The funds allocated to the strategy are administered by the Canadian Institute for Advanced Research (CIFAR). Yoshua Bengio through Mila is associated with the strategy and CIFAR, as is Geoffrey Hinton through his position as Chief Scientific Advisor for the Vector Institute.
Getting back to “The Singularity: When will we all become super-humans? Are we really only a moment away from “The Singularity,” a technological epoch that will usher in a new era in human evolution?” Xenobots point in a disconcerting (for some of us) evolutionary direction.
I featured the work, which is being done at Tufts University in the US, in my June 21, 2021 posting, which includes an embedded video,
Last year, a team of biologists and computer scientists from Tufts University and the University of Vermont (UVM) created novel, tiny self-healing biological machines from frog cells called “Xenobots” that could move around, push a payload, and even exhibit collective behavior in the presence of a swarm of other Xenobots.
Get ready for Xenobots 2.0.
Also from an excerpt in the posting, the team has “created life forms that self-assemble a body from single cells, do not require muscle cells to move, and even demonstrate the capability of recordable memory.”
Memory is key to intelligence and this work introduces the notion of ‘living’ robots which leads to questioning what constitutes life. ‘The Machine That Feels’ is already grappling with far too many questions to address this development but introducing the research here might have laid the groundwork for the next episode, The New Human, telecast on November 26, 2021,
While no one can be certain what will happen, evolutionary biologists and statisticians are observing trends that could mean our future feet only have four toes (so long, pinky toe) or our faces may have new combinations of features. The new humans might be much taller than their parents or grandparents, or have darker hair and eyes.
And while evolution takes a lot of time, we might not have to wait too long for a new version of ourselves.
Technology is redesigning the way we look and function — at a much faster pace than evolution. We are merging with technology more than ever before: our bodies may now have implanted chips, smart limbs, exoskeletons and 3D-printed organs. A revolutionary gene editing technique has given us the power to take evolution into our own hands and alter our own DNA. How long will it be before we are designing our children?
As the story about the xenobots doesn’t say, we could also take the evolution of another species into our hands.
David Suzuki, where are you?
Our programme host, David Suzuki, surprised me. I thought that as an environmentalist he’d point out that the huge amounts of computing power needed for artificial intelligence, as mentioned in the programme, constitute an environmental issue. I also would have expected a geneticist like Suzuki might have some concerns with regard to xenobots but perhaps that’s being saved for the next episode (The New Human) of the Nature of Things.
Thanks to Will Knight for introducing me to the term ‘artificial stupidity’. Knight, a senior writer, covers artificial intelligence for WIRED magazine. According to its Wikipedia entry,
Artificial stupidity is commonly used as a humorous opposite of the term artificial intelligence (AI), often as a derogatory reference to the inability of AI technology to adequately perform its tasks. However, within the field of computer science, artificial stupidity is also used to refer to a technique of “dumbing down” computer programs in order to deliberately introduce errors in their responses.
Knight was using the term in its humorous, derogatory form.
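The computer-science sense of the term, deliberately degrading a program so it makes mistakes, is straightforward to illustrate. A minimal sketch of a game opponent that occasionally blunders on purpose; the function and its blunder_rate parameter are invented for this example:

```python
import random

def pick_move(scored_moves, blunder_rate=0.3, rng=None):
    """Pick the best-scored move, but deliberately blunder some of the time.

    'Dumbing down' in the sense the Wikipedia entry describes: with
    probability blunder_rate, return a random move instead of the best
    one, so a game opponent feels beatable. The interface (a dict of
    move -> score) is an illustrative assumption.
    """
    rng = rng or random.Random()
    if rng.random() < blunder_rate:
        return rng.choice(list(scored_moves))        # deliberate error
    return max(scored_moves, key=scored_moves.get)   # optimal play

moves = {"a1": 0.9, "b2": 0.4, "c3": 0.1}
print(pick_move(moves, blunder_rate=0.0))  # always the best move: a1
```

Game developers use variations of this idea so that AI opponents are fun rather than unbeatable; here the "stupidity" is a tuning knob, not a failure.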
The episode certainly got me thinking, if not quite in the way the producers might have hoped. ‘The Machine That Feels’ is a glossy, well-researched piece of infotainment.
To be blunt, I like and have no problems with infotainment, but it can be seductive. I found it easier to remember the artificial friends, wife, xenobots, and symphony than the critiques and concerns.
Hopefully, ‘The Machine That Feels’ stimulates more interest in some very important topics. If you missed the telecast, you can catch the episode here.
For anyone curious about predictive policing, which was mentioned in the Ayanna Howard segment, see my November 23, 2017 posting about Vancouver’s plunge into AI and car theft.
*ETA December 6, 2021: One of the first ‘chatterbots’ was ELIZA, a computer programme developed from 1964 to 1966. The most famous ELIZA script was DOCTOR, in which the programme simulated a therapist. Many early users believed ELIZA understood and could respond as a human would, despite the insistence otherwise of Joseph Weizenbaum, the programme’s creator.
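DOCTOR’s effect came from simple keyword matching and canned reassembly rules rather than any understanding. A minimal sketch of that pattern-matching style; the rules below are invented for illustration and are not Weizenbaum’s original script:

```python
import re

# A few DOCTOR-style rules: (keyword pattern, response template).
# These are illustrative, not Weizenbaum's originals.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

def respond(utterance):
    """Return the first matching canned response, ELIZA-style."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when nothing matches

print(respond("I am unhappy about my job"))
```

Echoing the user’s own words back this way is what made the illusion of understanding so convincing, and it is why Weizenbaum’s disclaimers went unheeded.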
The AICan (Artificial Intelligence Canada) Bulletin is published by CIFAR (Canadian Institute For Advanced Research) and it is the official newsletter for the Pan-Canadian AI Strategy. This is a joint production from CIFAR, Amii (Alberta Machine Intelligence Institute), Mila (Quebec’s Artificial Intelligence research institute) and the Vector Institute for Artificial Intelligence (Toronto, Ontario).
For anyone curious about the Pan-Canadian Artificial Intelligence Strategy, first announced in the 2017 federal budget, I have a March 31, 2017 post which focuses heavily on the, then new, Vector Institute but it also contains information about the artificial intelligence scene in Canada at the time, which is at least in part still relevant today.
The AICan Bulletin October 2021 issue number 16 (The Energy and Environment Issue) is available for viewing here and includes these articles,
The effects of climate change significantly impact our most vulnerable populations. Canada CIFAR AI Chair David Rolnick (Mila) and Tami Vasanthakumaran (Girls Belong Here) share their insights and call to action for the AI research community.
Amii, the University of Alberta, and ISL Engineering explore how machine learning can make water treatment more environmentally friendly and cost-effective with the support of Amii Fellows and Canada CIFAR AI Chairs — Adam White, Martha White and Csaba Szepesvári.
Immerse yourself into this AI-driven virtual experience based on empathy to visualize the impacts of climate change on places you hold dear with Mila.
The bulletin also features AI stories from Canada and the US, as well as, events and job postings.
I found two different pages where you can subscribe. First, there’s this subscription page (which is at the bottom of the October 2021 bulletin) and then there’s this page, which requires more details from you.
I’ve taken a look at the CIFAR website and can’t find any of the previous bulletins on it, which would seem to make subscription the only means of access.
The Canadian Institute for Advanced Research (CIFAR) is investigating the ‘Future of Being Human’ and has instituted a global call for proposals, but there is one catch: your team has to have one person (with or without citizenship) who’s living and working in Canada. (Note: I am available.)
New program proposals should explore the long term intersection of humans, science and technology, social and cultural systems, and our environment. Our understanding of the world around us, and new insights into individual and societal behaviour, have the potential to provide enormous benefits to humanity and the planet.
We invite bold proposals from researchers at universities or research institutions that ask new questions about our complex emerging world. We are confronting challenging problems that require a diverse team incorporating multiple disciplines (potentially spanning the humanities, social sciences, arts, physical sciences, and life sciences [emphasis mine]) to engage in a sustained dialogue to develop new insights, and change the conversation on important questions facing science and humanity.
CIFAR is committed to creating a more diverse, equitable, and inclusive environment. We welcome proposals that include individuals from countries and institutions that are not yet represented in our research community.
Here’s a description, albeit a little repetitive, of what CIFAR is asking researchers to do (from the Program Guide [PDF]),
For CIFAR’s next Global Call for Ideas, we are soliciting proposals related to The Future of Being Human, exploring in the long term the intersection of humans, science and technology, social and cultural systems, and our environment. Our understanding of the natural world around us, and new insights into individual and societal behaviour, have the potential to provide enormous benefits to humanity and the planet. We invite bold proposals that ask new questions about our complex emerging world, where the issues under study are entangled and dynamic. We are confronting challenging problems that necessitate a diverse team incorporating multiple disciplines (potentially spanning the humanities, social sciences, arts, physical sciences, and life sciences) to engage in a sustained dialogue to develop new insights, and change the conversation on important questions facing science and humanity. [p. 2 print; p. 4 PDF]
There seems to be an explosion (metaphorically and only by Canadian standards) of interest in public perceptions/engagement/awareness of artificial intelligence. See my March 29, 2021 posting “Canada launches its AI dialogues,” with dialogues running until April 30, 2021, plus this April 6, 2021 posting “UNESCO’s Call for Proposals to highlight blind spots in AI Development open ’til May 2, 2021,” which was launched in cooperation with Mila-Québec Artificial Intelligence Institute.
Information and communications technologies have profoundly changed almost every aspect of life and business in the last two decades. While the digital revolution has brought about many positive changes, it has also created opportunities for criminal organizations and malicious actors to target individuals, businesses, and systems.
This assessment will examine promising practices that could help to address threats to public safety related to the use of digital technologies while respecting human rights and privacy.
The use of artificial intelligence (AI) and machine learning in science and engineering has the potential to radically transform the nature of scientific inquiry and discovery and produce a wide range of social and economic benefits for Canadians. But, the adoption of these technologies also presents a number of potential challenges and risks.
This assessment will examine the legal/regulatory, ethical, policy and social challenges related to the use of AI technologies in scientific research and discovery.
Sponsor: National Research Council Canada [NRC] (co-sponsors: CIFAR [Canadian Institute for Advanced Research], CIHR [Canadian Institutes of Health Research], NSERC [Natural Sciences and Engineering Research Council], and SSHRC [Social Sciences and Humanities Research Council])
The Council of Canadian Academies (CCA) has formed an Expert Panel to examine a broad range of factors related to the use of artificial intelligence (AI) technologies in scientific research and discovery in Canada. Teresa Scassa, SJD, Canada Research Chair in Information Law and Policy at the University of Ottawa, will serve as Chair of the Panel.
“AI and machine learning may drastically change the fields of science and engineering by accelerating research and discovery,” said Dr. Scassa. “But these technologies also present challenges and risks. A better understanding of the implications of the use of AI in scientific research will help to inform decision-making in this area and I look forward to undertaking this assessment with my colleagues.”
As Chair, Dr. Scassa will lead a multidisciplinary group with extensive expertise in law, policy, ethics, philosophy, sociology, and AI technology. The Panel will answer the following question:
What are the legal/regulatory, ethical, policy and social challenges associated with deploying AI technologies to enable scientific/engineering research design and discovery in Canada?
“We’re delighted that Dr. Scassa, with her extensive experience in AI, the law and data governance, has taken on the role of Chair,” said Eric M. Meslin, PhD, FRSC, FCAHS, President and CEO of the CCA. “I anticipate the work of this outstanding panel will inform policy decisions about the development, regulation and adoption of AI technologies in scientific research, to the benefit of Canada.”
The CCA was asked by the National Research Council of Canada (NRC), along with co-sponsors CIFAR, CIHR, NSERC, and SSHRC, to address the question. More information can be found here.
The Expert Panel on AI for Science and Engineering:
Teresa Scassa (Chair), SJD, Canada Research Chair in Information Law and Policy, University of Ottawa, Faculty of Law (Ottawa, ON)
Julien Billot, CEO, Scale AI (Montreal, QC)
Wendy Hui Kyong Chun, Canada 150 Research Chair in New Media and Professor of Communication, Simon Fraser University (Burnaby, BC)
Marc Antoine Dilhac, Professor (Philosophy), University of Montreal; Director of Ethics and Politics, Centre for Ethics (Montréal, QC)
B. Courtney Doagoo, AI and Society Fellow, Centre for Law, Technology and Society, University of Ottawa; Senior Manager, Risk Consulting Practice, KPMG Canada (Ottawa, ON)
Abhishek Gupta, Founder and Principal Researcher, Montreal AI Ethics Institute (Montréal, QC)
Richard Isnor, Associate Vice President, Research and Graduate Studies, St. Francis Xavier University (Antigonish, NS)
Ross D. King, Professor, Chalmers University of Technology (Göteborg, Sweden)
Sabina Leonelli, Professor of Philosophy and History of Science, University of Exeter (Exeter, United Kingdom)
Raymond J. Spiteri, Professor, Department of Computer Science, University of Saskatchewan (Saskatoon, SK)
Who is the expert panel?
Putting together a Canadian panel is an interesting problem, especially when you’re trying to find people with expertise who can also represent various viewpoints both professionally and regionally. Then, there are gender, racial, linguistic, urban/rural, and ethnic considerations.
Eight of the panelists could be said to be representing various regions of Canada. Five of those eight panelists are based in central Canada, specifically, Ontario (Ottawa) or Québec (Montréal). The sixth panelist is based in Atlantic Canada (Nova Scotia), the seventh panelist is based in the Prairies (Saskatchewan), and the eighth panelist is based in western Canada (Burnaby, British Columbia).
The two panelists bringing an international perspective to this project are both based in Europe, specifically, Sweden and the UK.
(sigh) It would be good to have representation from another part of the world. Asia springs to mind as researchers in that region are very advanced in their AI research and applications meaning that their experts and ethicists are likely to have valuable insights.
Four of the ten panelists are women, which is closer to equal representation than some of the other CCA panels I’ve looked at.
As for Indigenous and BIPOC representation, unless one or more of the panelists chooses to self-identify in that fashion, I cannot make any comments. It should be noted that more than one expert panelist focuses on social justice and/or bias in algorithms.
Network of relationships
As you can see, the CCA descriptions for the individual members of the expert panel are a little brief. So, I did a little digging and, in my searches, I noticed what seems to be a pattern of relationships among some of these experts. In particular, take note of the Canadian Institute for Advanced Research (CIFAR) and the AI Advisory Council of the Government of Canada.
Mr. Billot is a member of the faculty at HEC Montréal [graduate business school of the Université de Montréal] as an adjunct professor of management and the lead for the Creative Destruction Lab (CDL) and NextAI program in Montreal.
Julien Billot has been President and Chief Executive Officer of Yellow Pages Group Corporation (Y.TO) in Montreal, Quebec. Previously, he was Executive Vice President, Head of Media and Member of the Executive Committee of Solocal Group (formerly PagesJaunes Groupe), the publicly traded and incumbent local search business in France. Earlier experience includes serving as CEO of the digital and new business group of Lagardère Active, a multimedia branch of Lagardère Group and 13 years in senior management positions at France Telecom, notably as Chief Marketing Officer for Orange, the company’s mobile subsidiary.
Mr. Billot is a graduate of École Polytechnique (Paris) and from Telecom Paris Tech. He holds a postgraduate diploma (DEA) in Industrial Economics from the University of Paris-Dauphine.
Wendy Hui Kyong Chun (British Columbia) has a profile on the Simon Fraser University (SFU) website, which provided one of the more interesting (to me personally) biographies,
Wendy Hui Kyong Chun is the Canada 150 Research Chair in New Media at Simon Fraser University, and leads the Digital Democracies Institute which was launched in 2019. The Institute aims to integrate research in the humanities and data sciences to address questions of equality and social justice in order to combat the proliferation of online “echo chambers,” abusive language, discriminatory algorithms and mis/disinformation by fostering critical and creative user practices and alternative paradigms for connection. It has four distinct research streams all led by Dr. Chun: Beyond Verification which looks at authenticity and the spread of disinformation; From Hate to Agonism, focusing on fostering democratic exchange online; Desegregating Network Neighbourhoods, combatting homophily across platforms; and Discriminating Data: Neighbourhoods, Individuals and Proxies, investigating the centrality of race, gender, class and sexuality [emphasis mine] to big data and network analytics.
I’m glad to see someone who has focused on ” … the centrality of race, gender, class and sexuality to big data and network analytics.” Even more interesting to me was this from her CV (curriculum vitae),
Professor, Department of Modern Culture and Media, Brown University, July 2010-June 2018
• Affiliated Faculty, Multimedia & Electronic Music Experiments (MEME), Department of Music, 2017–
• Affiliated Faculty, History of Art and Architecture, March 2012–
• Graduate Field Faculty, Theatre Arts and Performance Studies, Sept 2008–
[all emphases mine]
And these are some of her credentials,
Ph.D., English, Princeton University, 1999.
• Certificate, School of Criticism and Theory, Dartmouth College, Summer 1995.
M.A., English, Princeton University, 1994.
B.A.Sc., Systems Design Engineering and English, University of Waterloo, Canada, 1992.
• First class honours and a Senate Commendation for Excellence for being the first student to graduate from the School of Engineering with a double major.
It’s about time the CCA started integrating some kind of arts perspective into their projects. (Although, I can’t help wondering if this was by accident rather than by design.)
Marc Antoine Dilhac is an associate professor at l’Université de Montréal; like Billot, he graduated from a French university, in his case, the Sorbonne. Here’s more from Dilhac’s profile on the Mila website,
Marc-Antoine Dilhac (Ph.D., Paris 1 Panthéon-Sorbonne) is a professor of ethics and political philosophy at the Université de Montréal and an associate member of Mila – Quebec Artificial Intelligence Institute. He currently holds a CIFAR [Canadian Institute for Advanced Research] Chair in AI ethics (2019-2024), and was previously Canada Research Chair in Public Ethics and Political Theory 2014-2019. He specialized in theories of democracy and social justice, as well as in questions of applied ethics. He published two books on the politics of toleration and inclusion (2013, 2014). His current research focuses on the ethical and social impacts of AI and issues of governance and institutional design, with a particular emphasis on how new technologies are changing public relations and political structures.
In 2017, he instigated the project of the Montreal Declaration for a Responsible Development of AI and chaired its scientific committee. In 2020, as director of Algora Lab, he led an international deliberation process as part of UNESCO’s consultation on its recommendation on the ethics of AI.
In 2019, he founded Algora Lab, an interdisciplinary laboratory advancing research on the ethics of AI and developing a deliberative approach to the governance of AI and digital technologies. He is co-director of Deliberation at the Observatory on the social impacts of AI and digital technologies (OBVIA), and contributes to the OECD Policy Observatory (OECD.AI) as a member of its expert network ONE.AI.
He sits on the AI Advisory Council of the Government of Canada and co-chairs its Working Group on Public Awareness.
Formerly known simply as Mila, the Mila – Quebec Artificial Intelligence Institute is a beneficiary of the Pan-Canadian Artificial Intelligence Strategy announced in the 2017 Canadian federal budget. That strategy named CIFAR as the hub that would distribute funds for artificial intelligence research to (mainly) three agencies: Mila in Montréal, the Vector Institute in Toronto, and the Alberta Machine Intelligence Institute (Amii) in Edmonton.
Consequently, Dilhac’s involvement with CIFAR is not unexpected but when added to his presence on the AI Advisory Council of the Government of Canada and his role as co-chair of its Working Group on Public Awareness, one of the co-sponsors for this future CCA report, you get a sense of just how small the Canadian AI ethics and public awareness community is.
Add in CIFAR’s Open Dialogue: AI in Canada series (ongoing until April 30, 2021) which is being held in partnership with the AI Advisory Council of the Government of Canada (see my March 29, 2021 posting for more details about the dialogues) amongst other familiar parties and you see a web of relations so tightly interwoven that if you could produce masks from it you’d have superior COVID-19 protection to N95 masks.
These kinds of connections are understandable and I have more to say about them in my final comments.
B. Courtney Doagoo has a profile page at the University of Ottawa, which fills in a few information gaps,
As a Fellow, Dr. Doagoo develops her research on the social, economic and cultural implications of AI with a particular focus on the role of laws, norms and policies [emphasis mine]. She also notably advises Dr. Florian Martin-Bariteau, CLTS Director, in the development of a new research initiative on those topical issues, and Dr. Jason Millar in the development of the Canadian Robotics and Artificial Intelligence Ethical Design Lab (CRAiEDL).
Dr. Doagoo completed her Ph.D. in Law at the University of Ottawa in 2017. In her interdisciplinary research, she used empirical methods to learn about and describe the use of intellectual property law and norms in creative communities. Following her doctoral research, she joined the World Intellectual Property Organization’s Coordination Office in New York as a legal intern and contributed to developing the joint initiative on gender and innovation in collaboration with UNESCO and UN Women. She later joined the International Law Research Program at the Centre for International Governance Innovation as a Post-Doctoral Fellow, where she conducted research in technology and law focusing on intellectual property law, artificial intelligence and data governance.
Dr. Doagoo completed her LL.L. at the University of Ottawa, and LL.M. in Intellectual Property Law at the Benjamin N. Cardozo School of Law [a law school at Yeshiva University in New York City]. In between her academic pursuits, Dr. Doagoo has been involved with different technology start-ups, including the one she is currently leading aimed at facilitating access to legal services. She’s also an avid lover of the arts and designed a course on Arts and Cultural Heritage Law taught during her doctoral studies at the University of Ottawa, Faculty of Law.
It’s probably because I don’t know enough, but this “the role of laws, norms and policies” seems bland to the point of meaninglessness. The rest is more informative and brings it back to the arts, connecting to Wendy Hui Kyong Chun at SFU.
Doagoo’s LinkedIn profile offers an unexpected link to this expert panel’s chairperson, Teresa Scassa (in addition to both being lawyers in related fields who are on faculty or are fellows at the University of Ottawa),
Soft-funded Research Bursary
Dr. Teresa Scassa
I’m not suggesting any conspiracies; it’s simply that this is a very small community with much of it located in central and eastern Canada and possible links into the US. For example, Wendy Hui Kyong Chun, prior to her SFU appointment in December 2018, worked and studied in the eastern US for over 25 years after starting her academic career at the University of Waterloo (Ontario).
Abhishek Gupta provided me with a challenging search. His LinkedIn profile yielded some details (I’m not convinced the man sleeps). Note: I have made some formatting changes and removed the location (‘Montréal area’) from some descriptions,
Software Engineer II – Machine Learning, Microsoft
Jul 2018 – Present – 2 years 10 months
Machine Learning – Commercial Software Engineering team
Serves on the CSE Responsible AI Board
Founder and Principal Researcher, Montreal AI Ethics Institute
May 2018 – Present – 3 years
Institute creating tangible and practical research in the ethical, safe and inclusive development of AI. For more information, please visit https://montrealethics.ai
Visiting AI Ethics Researcher, Future of Work, International Visitor Leadership Program, U.S. Department of State
Aug 2019 – Present – 1 year 9 months
Selected to represent Canada on the future of work
Responsible AI Lead, Data Advisory Council, Northwest Commission on Colleges and Universities
Jun 2020 – Present – 11 months
Faculty Associate, Frankfurt Big Data Lab, Goethe University
Mar 2020 – Present – 1 year 2 months
Advisor for the Z-inspection project
Associate Member, LF AI Foundation
May 2020 – Present – 1 year
Author, MIT Technology Review
Sep 2020 – Present – 8 months
Founding Editorial Board Member, AI and Ethics Journal, Springer Nature
Jul 2020 – Present – 10 months
Bachelor of Science (BS), Computer Science, McGill University
2012 – 2015
Exhausting, eh? He also has an eponymous website, and the Montreal AI Ethics Institute, where Gupta and his colleagues are “Democratizing AI ethics literacy,” can be found here. My hat’s off to Gupta; getting on a CCA expert panel is quite an achievement for someone without the usual academic and/or industry trappings.
Richard Isnor, based in Nova Scotia and associate vice president of research & graduate studies at St. Francis Xavier University (StFX), seems to have some connection to northern Canada (see the reference to Nunavut Research Institute below); he’s certainly well connected to various federal government agencies according to his profile page,
Prior to joining StFX, he was Manager of the Atlantic Regional Office for the Natural Sciences and Engineering Research Council of Canada (NSERC), based in Moncton, NB. Previously, he was Director of Innovation Policy and Science at the International Development Research Centre in Ottawa and also worked for three years with the National Research Council of Canada [NRC] managing Biotechnology Research Initiatives and the NRC Genomics and Health Initiative.
Richard holds a D. Phil. in Science and Technology Policy Studies from the University of Sussex, UK; a Master’s in Environmental Studies from Dalhousie University [Nova Scotia]; and a B. Sc. (Hons) in Biochemistry from Mount Allison University [New Brunswick]. His primary interest is in science policy and the public administration of research; he has worked in science and technology policy or research administrative positions for Environment Canada, Natural Resources Canada, the Privy Council Office, as well as the Nunavut Research Institute. [emphasis mine]
I don’t know what Dr. Isnor’s work is like but I’m hopeful he (along with Spiteri) will be able to provide a less ‘big city’ perspective to the proceedings.
(For those unfamiliar with Canadian cities: Montreal [three expert panelists] is the second largest city in the country; Ottawa [two expert panelists], as the capital, has an outsize view of itself; and Vancouver [one expert panelist] is the third or fourth largest city in the country, for a total of six big-city representatives out of eight Canadian expert panelists.)
Ross D. King, professor of machine intelligence at Sweden’s Chalmers University of Technology, might be best known for Adam, also known as, Robot Scientist. Here’s more about King, from his Wikipedia entry (Note: Links have been removed),
King completed a Bachelor of Science degree in Microbiology at the University of Aberdeen in 1983 and went on to study for a Master of Science degree in Computer Science at the University of Newcastle in 1985. Following this, he completed a PhD at The Turing Institute [emphasis mine] at the University of Strathclyde in 1989 for work on developing machine learning methods for protein structure prediction.
King’s research interests are in the automation of science, drug design, AI, machine learning and synthetic biology. He is probably best known for the Robot Scientist project which has created a robot that can:
hypothesize to explain observations
devise experiments to test these hypotheses
physically run the experiments using laboratory robotics
… a laboratory robot created and developed by a group of scientists including Ross King, Kenneth Whelan, Ffion Jones, Philip Reiser, Christopher Bryant, Stephen Muggleton, Douglas Kell and Steve Oliver.
… Adam became the first machine in history to have discovered new scientific knowledge independently of its human creators.
Sabina Leonelli, professor of philosophy and history of science at the University of Exeter, is the only person for whom I found a Twitter feed (@SabinaLeonelli). Here’s a bit more from her Wikipedia entry (Note: Links have been removed),
Originally from Italy, Leonelli moved to the UK for a BSc degree in History, Philosophy and Social Studies of Science at University College London and a MSc degree in History and Philosophy of Science at the London School of Economics. Her doctoral research was carried out in the Netherlands at the Vrije Universiteit Amsterdam with Henk W. de Regt and Hans Radder. Before joining the Exeter faculty, she was a research officer under Mary S. Morgan at the Department of Economic History of the London School of Economics.
Leonelli is the Co-Director of the Exeter Centre for the Study of the Life Sciences (Egenis) and a Turing Fellow at the Alan Turing Institute [emphases mine] in London. She is also Editor-in-Chief of the international journal History and Philosophy of the Life Sciences and Associate Editor for the Harvard Data Science Review. She serves as External Faculty for the Konrad Lorenz Institute for Evolution and Cognition Research.
Raymond J. Spiteri, professor and director of the Centre for High Performance Computing, Department of Computer Science at the University of Saskatchewan, has a profile page at the university the likes of which I haven’t seen in several years, perhaps due to its 2013 origins. His other university profile page can best be described as minimalist.
Raymond Spiteri is a Professor in the Department of Computer Science at the University of Saskatchewan. He performed his graduate work as a member of the Institute for Applied Mathematics at the University of British Columbia. He was a post-doctoral fellow at McGill University and held faculty positions at Acadia University and Dalhousie University before joining USask in 2004. He serves on the Executive Committee of the WestGrid High-Performance Computing Consortium with Compute/Calcul Canada. He was a MITACS Project Leader from 2004-2012 and served in the role of Mitacs Regional Scientific Director for the Prairie Provinces between 2008 and 2011.
Spiteri’s areas of research are numerical analysis, scientific computing, and high-performance computing. His area of specialization is the analysis and implementation of efficient time-stepping methods for differential equations. He actively collaborates with scientists, engineers, and medical experts of all flavours. He also has a long record of industry collaboration with companies such as IBM and Boeing.
Spiteri has been a lifetime member of CAIMS/SCMAI since 2000. He helped co-organize the 2004 Annual Meeting at Dalhousie and served on the Cecil Graham Doctoral Dissertation Award Committee from 2005 to 2009, acting as chair from 2007. He has been an active participant in CAIMS, serving several times on the Scientific Committee for the Annual Meeting, as well as frequently attending and organizing mini-symposia. Spiteri believes it is important for applied mathematics to play a major role in the efforts to meet Canada’s most pressing societal challenges, including the sustainability of our healthcare system, our natural resources, and the environment.
Another biographical note: I obtained my B.Sc. degree in Applied Mathematics from the University of Western Ontario [also known as Western University] in 1990. My advisor was Dr. M.A.H. (Paddy) Nerenberg, after whom the Nerenberg Lecture Series is named. Here is an excerpt from the description, put here in his honour, as a model for the rest of us:
The Nerenberg Lecture Series is first and foremost about people and ideas. Knowledge is the true treasure of humanity, accrued and passed down through the generations. Some of it, particularly science and its language, mathematics, is closed in practice to many because of technical barriers that can only be overcome at a high price. These technical barriers form part of the remarkable fractures that have formed in our legacy of knowledge. We are so used to those fractures that they have become almost invisible to us, but they are a source of profound confusion about what is known.
The Nerenberg Lecture is named after the late Morton (Paddy) Nerenberg, a much-loved professor and researcher born on 17 March– hence his nickname. He was a Professor at Western for more than a quarter century, and a founding member of the Department of Applied Mathematics there. A successful researcher and accomplished teacher, he believed in the unity of knowledge, that scientific and mathematical ideas belong to everyone, and that they are of human importance. He regretted that they had become inaccessible to so many, and anticipated serious consequences from it. [emphases mine] The series honors his appreciation for the democracy of ideas. He died in 1993 at the age of 57.
So, we have the expert panel.
Thoughts about the panel and the report
As I’ve noted previously here and elsewhere, assembling any panels whether they’re for a single event or for a longer term project such as producing a report is no easy task. Looking at the panel, there’s some arts representation, smaller urban centres are also represented, and some of the members have experience in more than one region in Canada. I was also much encouraged by Spiteri’s acknowledgement of his advisor’s, Morton (Paddy) Nerenberg, passionate commitment to the idea that “scientific and mathematical ideas belong to everyone.”
Kudos to the Council of Canadian Academies (CCA) organizers.
That said, this looks like an exceptionally Eurocentric panel. Unusually, there’s no representation from the US, unless you count Chun, who has spent the majority of her career in the US with only a little over two years at Simon Fraser University on Canada’s West Coast.
There’s weakness to a strategy (none of the ten or so CCA reports I’ve reviewed here deviates from this pattern) that seems to favour international participants from Europe and/or the US (also, sometimes, Australia/New Zealand). This leaves out giant chunks of the international community and brings us dangerously close to an echo chamber.
The same problem exists regionally and with various Canadian communities, which are acknowledged more in spirit than in actuality, e.g., the North, rural, indigenous, arts, etc.
Getting back to the ‘big city’ emphasis noted earlier: with two people from Ottawa and three from Montreal, half of the expert panel lives within a two-hour train ride of each other. (For those who don’t know, that’s close by Canadian standards. For comparison, a train ride from Vancouver to Seattle [US] is about four hours, a short trip when compared to a 24-hour train trip to the closest large Canadian cities.)
I appreciate that it’s not a simple problem, but my concern is that it’s never acknowledged by the CCA. Perhaps they could include a section in the report acknowledging the issues and how the expert panel attempted to address them; in other words, transparency. Coincidentally, transparency and trust, which are related, have both been identified as big issues with artificial intelligence.
As for solutions: these reports get sent to external reviewers and, as the panel readies itself, outside experts are sometimes brought in before the report is written. Those are two opportunities afforded by their current processes.
Anyway, good luck with the report and I look forward to seeing it.
UNESCO in cooperation with Mila-Quebec Artificial Intelligence Institute [?], is launching a Call for Proposals to identify blind spots in AI Policy and Programme Development. The collective work will explore creative, novel and far-reaching approaches to tackling blind spots in AI.
All contributors are invited to answer the same question: what are the blind spots on which we must shed light in order for AI to benefit all?
Issues can address 1) blind spots in the development of AI as a technology 2) blind spots in the development of AI as a sector, and 3) blind spots in the development of public policies, global governance, and regulation for AI. There are no limits to the subjects to be addressed. These blind spots could include issues ranging from science fiction and the future of AI, creative deep fakes and the future of misinformation, AI and the future of data driven humanitarian aid, indigenous knowledge and AI, and gender-based violence and sex robots. Proposals can be in creative formats, and the call for proposals is open to individuals from all academic backgrounds and sectors. Proposals from all stakeholder groups, particularly marginalized and underrepresented groups, are encouraged, as well as proposals from authors from the global south and innovative formats (artwork, cartoons, videos, etc).
Call for proposals are open until 2 May 2021.
Selected proposals will be confirmed by 25 May.
Final proposals, if in written format, should be between 5000-7000 words and should be written in a style that is accessible to non-AI specialists and received by 1 September 2021.
To ensure inclusivity and a diversity of voices, for accepted contributions outside of academia, authors may request financial support available on a needs-based basis up to 1000 usd.
I really appreciate the breadth of the call with a range of blind spots such as “science fiction and the future of AI, creative deep fakes and the future of misinformation, AI and the future of data driven humanitarian aid, indigenous knowledge and AI, and gender-based violence and sex robots” and, presumably, anything the convenors had not considered.
As well, they haven’t confined themselves to the ‘same old, same old’ contributors, “all stakeholder groups, particularly marginalized and underrepresented groups, are encouraged, as well as proposals from authors from the global south and innovative formats (artwork, cartoons, videos, etc).”
I’m glad to see a refreshing approach being taken to a call for proposals. I wish them good luck.
The Québec connection
Mila (Montreal Institute for Learning Algorithms), UNESCO’s co-host for this call, was founded in 1993 according to its About Mila page,
Founded in 1993 by Professor Yoshua Bengio of the Université de Montréal, Mila is a research institute in artificial intelligence that rallies over 500 researchers specializing in the field of machine learning. Based in Montreal, Mila’s mission is to be a global pole for scientific advances that inspire innovation and the development of AI for the benefit of all.
Since 2017, [emphasis mine] Mila is the result of a partnership between the Université de Montréal and McGill University, closely linked with Polytechnique Montréal and HEC Montréal. Today, Mila gathers in its offices a vibrant community of professors, students, industrial partners and startups working in AI, making the institute the world’s largest academic research center in machine learning.
Mila, a non-profit organization, is internationally recognized for its significant contributions to machine learning, especially in the areas of language modelling, machine translation, object recognition and generative models.
Unmentioned, the Pan-Canadian Artificial Intelligence (AI) Strategy was created and funded by the Canadian federal government in 2017. One of the beneficiaries was Mila. (Odd how 2017 was the year Mila found so many academic partners in its home province.) From the Pan-Canadian AI strategy webpage on the Invest Canada website (Note: Links have been removed),
The artificial intelligence (AI) and machine learning revolution is well underway, and Canada is at its forefront. From top-ranked educational institutions and market-leading tech companies to world-renowned researchers, Canada’s AI ecosystems are leading global AI developments.
To continue to foster this growth and maintain its leadership position, Canada launched the $125M Pan-Canadian Artificial Intelligence Strategy in 2017—making it the first country to release a national AI strategy.
The Pan-Canadian AI Strategy is founded on a partnership between the Canadian Institute for Advanced Research (CIFAR) and the three centres of excellence: the Alberta Machine Intelligence Institute (AMII) in Edmonton, the Vector Institute in Toronto, and the Montreal Institute for Learning Algorithms (Mila) [all emphases mine] in Montreal. Together, they provide the support, resources, and talent for AI innovation and investment.
I don’t know where “Mila-Quebec Artificial Intelligence Institute” comes from. It’s not on their own website and I’ve never seen Mila called that anywhere other than on this UNESCO call.
These workshops will inform recommendations to the Government of Canada on how to boost public awareness of and foster trust in AI. The conversations will be grounded in an understanding of the technology, its potential uses, and its associated risks.
Each workshop is approximately 2.5 hrs in length and free to attend. Our goal is to engage more than 1,000 people across Canada, building on the results of a national survey that was conducted in December 2020.
What to expect
Opening plenary session (15 min)
Breakout session with 6-10 participants
BREAK (10 minutes)
Recommendations (40 min)
Closing remarks (8 min)
Closing plenary session (22 min)
Oddly, there isn’t a registration link on the event page; you have to click on one of two workshop tabs (Regional or Youth) at the top of the page (this is from the Regional Workshops webpage),
Join us for a virtual workshop taking place in your region. Each workshop will include facilitated discussions based on Artificial Intelligence (AI) scenarios and provide an opportunity to share your views on AI.
To register by phone, please call Grace at 416-971-6937. If you require accommodations to participate, please contact firstname.lastname@example.org.
The regions are split into the West (Pacific and Mountain time zones), Central (Central and Ontario time zones), and East (Newfoundland, Atlantic and Quebec time zones). There are French and English sessions in each of the three regions and they have included the North on the regional maps.
Sadly, the events team at CIFAR did not answer questions (I tried twice), nor did Julian Posada, who is apparently the facilitator for the workshops,
The Government of Canada’s Advisory Council on Artificial Intelligence Public Awareness Working Group includes representatives from: AI Global | AI Network of BC | Amii | Brookfield Institute | Canadian Chamber of Commerce | CIFAR | DeepSense/Dalhousie | Glassbox | Ivado | Kids Code Jeunesse | Let’s Talk Science | Mila | Saskinteractive | Université de Montréal
The partners, represented by logos, are the Government of Canada (as in Advisory Committee?), Algora Lab, Université de Montréal, CIFAR, and for the Youth Workshops, Let’s Talk Science, Kids Code Jeunesse, and workshop materials are being provided by the Canadian Commission for UNESCO (United Nations Educational, Scientific and Cultural Organization).
By the third time, I’d reworded a few things and added one or two questions, so here’s the final list as sent to Julian Posada on Thursday, March 18, 2021,
(1) I understand it’s a joint CIFAR/Government of Canada Advisory Council on Artificial Intelligence Public Awareness Working Group workshop series called Open Dialogue: Artificial Intelligence (AI) in Canada, is that correct and the series will be held from March 30 – April 30, 2021?
(2) Are regular folks invited to join in or is this primarily for academics, business people, entrepreneurs, AI researchers, and other cognoscenti?
(3) Will a distinction be made between AI and robots?
(4) Are you facilitating all of the planned workshops? Will you also have assigned leaders for the breakout groups or will that be decided amongst the participants? If leaders are assigned, who are they?
(5) What do you have planned for your workshop(s)? E.g., will participants be presented with various scenarios for discussion in the breakout groups? Or will participants be given specific topics to discuss, such as AI in the military or AI in seniors’ facilities (e.g., social or companion robots for seniors)? Etc.
(6) Are the workshops being conducted over Zoom and is a Zoom account required for participation? Is there an alternative technology being used?
(7) Will AI be used to review and analyze the sessions and data gathered?
(8) Are there security measures in place for the session and for the data, specifically, participants’ personal data given up during registration?
(9) Will participants get a copy of the report afterwards or notified when it’s made available?
Since the workshops start on March 30, 2021, and I’m sure everyone’s busy and not able to spare time for questions, I’ve elected to publish what I can about the workshops despite a few misgivings.
I’m glad to see this initiative and to note that the North is included. It would be interesting to learn how these workshops have been publicized (I stumbled across them in a retweet of Julian Posada’s announcement on my Twitter feed). However, it’s not vital.
Priorities for the Advisory Council on Artificial Intelligence
Artificial intelligence (AI) represents a set of complex and powerful technologies that will touch or transform every sector and industry. It has the power to help us address some of our most challenging problems in areas like health and the environment, and to introduce new sources of sustainable economic growth. As a digital nation, Canada is taking steps to harness the potential of AI.
As announced by the Minister of Innovation, Science and Economic Development on May 14, 2019, the Advisory Council on Artificial Intelligence will advise the Government of Canada on building Canada’s strengths and global leadership in AI, identifying opportunities to create economic growth that benefits all Canadians, and ensuring that AI advancements reflect Canadians’ values. The Advisory Council will be a central reference point to draw on leading AI experts from Canadian industry, civil society, academia, and government.
Public Awareness Working Group
Recognizing the importance of a two-way dialogue with the Canadian public on AI, the Advisory Council launched a working group dedicated to public awareness in 2020. The Public Awareness Working Group is looking at mechanisms to boost public awareness and foster trust in AI. It also aims to ground the Canadian discussion in a measured understanding of AI technology, its potential uses, and its associated risks.
Commercialization Working Group
Recognizing that Canada has an imperative to commercialize its AI, and to capitalize on existing Canadian advantages in research and talent, the Advisory Council launched a working group dedicated to commercialization in August 2019 [emphasis mine]. The Commercialization Working Group explored ways to translate Canadian-owned artificial intelligence into economic growth that includes higher business productivity and benefits for Canadians.
The first order of business was commercialization in August 2019, and that’s to be expected given that this is ISED. The Public Awareness Working Group was launched at least four months later.
Is awareness a dialogue?
As they very nicely note on the CIFAR AI dialogue event page, these workshops are going to help the government figure out “how to boost public awareness of and foster trust in AI.” It’s very flattering to be consulted this way.
So to sum this up, the ‘dialogue’ in the regional and youth workshops will be mined for ideas on how to boost public awareness and foster trust. You’re not really just getting an opportunity “to share your views on AI,” are you?
It seems a bit narrow, but then they’ve already conducted a survey in December 2020, which has in all likelihood informed the content for these workshops. Plus, the workshop materials being provided by the Canadian Commission for UNESCO have in all likelihood been used elsewhere and repackaged for the Canadian market.
Hmmm I wouldn’t call this an ‘open dialogue’ since so much has already been done to frame it.
Many years ago I read a fascinating article about Temple Grandin and her work redesigning abattoirs (slaughterhouses) to make them more humane. I don’t remember much about it, but calming the cattle by dampening the noise while distracting them a little by making them move around, rather than directly leading them to their deaths, seemed the key elements to the redesign.
This ‘open dialogue’ reminds me of the article. The outcome is predetermined and we’re being distracted in the nicest way possible.
Mining the data?
Nine workshop sessions in total with one hour and 40 minutes (rough estimate) of discussion and recommendations for each session. That’s roughly 15 hours of material from the dialogues and recommendations to analyze.
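For what it’s worth, that rough estimate can be sketched as a quick back-of-envelope calculation. The agenda figures come from the ‘What to expect’ list above; the breakout length is my own inference from the 2.5-hour total, not a published number:

```python
# Back-of-envelope check of the workshop math (my estimates, not CIFAR's figures).
# Published agenda items for each 2.5-hour (150-minute) workshop:
agenda_minutes = {
    "opening_plenary": 15,
    "break": 10,
    "recommendations": 40,
    "closing_remarks": 8,
    "closing_plenary": 22,
}
workshop_total = 150  # 2.5 hours

# The breakout session length isn't listed, so infer it from what's left over.
breakout = workshop_total - sum(agenda_minutes.values())  # 55 minutes

# Material worth analyzing per session: breakout discussion + recommendations.
discussion_per_session = breakout + agenda_minutes["recommendations"]  # 95 minutes

sessions = 9
total_hours = sessions * discussion_per_session / 60  # about 14.25, i.e. "roughly 15 hours"
print(breakout, discussion_per_session, total_hours)
```

So nine sessions at roughly an hour and 35–40 minutes of discussion each lands just under 15 hours, consistent with the estimate above.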
Remember this question “(7) Will AI be used to review and analyze the sessions and data gathered?”
It’s hard to believe that CIFAR and its partners don’t have a system that could do the job or, at the very least, a system that could learn from the sessions.
Not necessarily evil
While I have a number of misgivings about these ‘dialogues’, I don’t expect that most of the people involved are trying to be nefarious. There are probably some good intentions (you know where those take you, yes?), but the overarching purpose here is commercialization, which is made much easier with universal acceptance (awareness + trust).
To be blunt, a dialogue with a predetermined outcome seems more like a script to me than an open conversation.
This sort of thing has been called a ‘public consultation’ but that term has gotten a bad reputation as it was used to disguise the kind of manipulation that I suspect is going on with this effort.
How they expect to foster trust in circumstances that are not conducive to it is a bit of a mystery to me. Plus, I have to wonder if these organizers or committee members have taken into account the possible aftereffects of one of the great Canadian government debacles.
The Phoenix pay system is a payroll processing system for Canadian federal government employees, provided by IBM in June 2011 using PeopleSoft software, and run by Public Services and Procurement Canada. The Public Service Pay Centre is located in Miramichi, New Brunswick. It was first introduced in 2009 as part of Prime Minister Stephen Harper’s Transformation of Pay Administration Initiative, intended to replace Canada’s 40-year old system with a new, cost-saving “automated, off-the-shelf commercial system.” By July 2018, Phoenix has caused pay problems to close to 80 percent of the federal government’s 290,000 public servants through underpayments, over-payments, and non-payments. The Standing Senate Committee on National Finance, chaired by Senator Percy Mockler, investigated the Phoenix Pay system and submitted their report, “The Phoenix Pay Problem: Working Towards a Solution” on July 31, 2018, in which they called Phoenix a failure and an “international embarrassment”. Instead of saving $70 million a year as planned, the report said that the cost to taxpayers to fix Phoenix’s problems could reach a total of $2.2 billion by 2023. [emphasis mine]
The entry leaves out a couple of details. Yes, Harper’s government nurtured this disaster but it was (1) Prime Minister Justin Trudeau and his (2) Liberal government who implemented the system in February 2016. Whoever wrote this entry is very friendly to the Liberals so I don’t think the politicians were quite as uninformed as represented in the entry.
As for the cost to taxpayers, I think $2.2 billion by 2023 is an overly modest estimate. For comparison, Australia’s Queensland Health Authority also had a pay system debacle. It was the same vendor (IBM) and, in 2013, the estimate to fix the problems was $1.2 billion Australian dollars (see this Dec.11.13 article by Robert N. Charette for the IEEE Spectrum or this Aug.7.13 article by Michael Madigan, Sarah Vogler, and Greg Stolz for The Courier Mail).
Note 1: I checked on a currency converter today (March 23, 2021) and $1 CAD = $1.04 AUS.
Note 2: For anyone unfamiliar with the organization, IEEE is the Institute of Electrical and Electronics Engineers.
I’m pretty sure $2.2 billion (which I think is an underestimate) does not include the human costs (anxiety, alcohol abuse, self-harm, suicide, etc.).
The situation was exacerbated as Catharine Tunney wrote in a February 18, 2020 article for CBC (Canadian Broadcasting Corporation) online (Note: A link has been removed),
More than 69,000 public servants caught up in the Phoenix pay system debacle are now victims of a privacy breach after their personal information was accidentally emailed to the wrong people, says Public Services and Procurement Canada.
The problem-plagued electronic payroll system has improperly paid tens of thousands of public servants since its launch in 2016. Some employees have gone months with little or no pay, while others have been overpaid, sometimes for months at a time.
Earlier this month, a report naming 69,087 public servants was accidentally emailed to the wrong federal departments.
The report included the employees’ full names, their personal record identifier numbers, home addresses and overpayment amounts.
More than 161 chief financial officers and 62 heads of HR in 62 departments received the report in error, according to a statement posted to Public Services and Procurement Canada’s website on Monday.
Public Services and Procurement Canada isn’t the only department to accidentally breach the confidentiality of workers’ personal information.
According to figures recently tabled in the House of Commons, federal departments or agencies mishandled personal information belonging to 144,000 Canadians over the past two years.
Privacy Commissioner Daniel Therrien has long called out “strong indications of systemic under-reporting” of privacy breaches across government.
Overhauling the government payroll system is not the same as introducing new artificial intelligence systems but the problem is that many of the same people in the upper echelons of Canada’s civil service (government employees) were and are instrumental in the deployment of these systems.
“Phoenix pay system an ‘incomprehensible failure,’ Auditor-General says” was the headline for a May 29, 2018 article by Michelle Zilio for the Globe and Mail. I might feel more trust if after the report, there’d been signs that things had changed. However, the government is still highly secretive and we have a ‘dialogue’ with a predetermined outcome (just like the public consultations of yesteryear).
As for M. Posada, the facilitator for one or more of the workshops, he seems relatively new to Canada (scroll down his University of Toronto profile page and click on Degrees),
M.A., Economic Sociology – School for Advanced Studies in the Social Sciences (EHESS) [École des hautes études en sciences sociales in Paris, France]
B.A., Humanities – Sorbonne University [also in Paris]
As I noted in my December 10, 2021 posting about a chapter on science communication in Canada, where two of the three authors were from other countries (Brazil and Australia), outsider perspectives can be quite valuable. (Both of those authors had spent some time in Canada. At least one of them had taught here.)
In any event, I have to wonder how well he’s been briefed.
After my experience with something called “participatory budgeting” (City of Vancouver, 2019), where citizens were asked to come together and decide how to spend $100,000 of the city budget in our neighbourhood, I can say that at the end of the process I felt used. A surprising number of city employees were involved as ‘members’ of the working groups and, of course, other employees at City Hall had veto power over what was eventually presented to the community for voting.
Amsterdam, NL – The use of algorithms in government is transforming the way bureaucrats work and make decisions in different areas, such as healthcare or criminal justice. Experts address the transparency challenges of using algorithms in decision-making procedures at the macro-, meso-, and micro-levels in this special issue of Information Polity.
Machine-learning algorithms hold huge potential to make government services fairer and more effective and have the potential of “freeing” decision-making from human subjectivity, according to recent research. Algorithms are used in many public service contexts. For example, within the legal system it has been demonstrated that algorithms can predict recidivism better than criminal court judges. At the same time, critics highlight several dangers of algorithmic decision-making, such as racial bias and lack of transparency.
Some scholars have argued that the introduction of algorithms in decision-making procedures may cause profound shifts in the way bureaucrats make decisions and that algorithms may affect broader organizational routines and structures. This special issue on algorithm transparency presents six contributions to sharpen our conceptual and empirical understanding of the use of algorithms in government.
“There has been a surge in criticism towards the ‘black box’ of algorithmic decision-making in government,” explain Guest Editors Sarah Giest (Leiden University) and Stephan Grimmelikhuijsen (Utrecht University). “In this special issue collection, we show that it is not enough to unpack the technical details of algorithms, but also look at institutional, organizational, and individual context within which these algorithms operate to truly understand how we can achieve transparent and responsible algorithms in government. For example, regulations may enable transparency mechanisms, yet organizations create new policies on how algorithms should be used, and individual public servants create new professional repertoires. All these levels interact and affect algorithmic transparency in public organizations.”
The transparency challenges for the use of algorithms transcend different levels of government – from European level to individual public bureaucrats. These challenges can also take different forms; transparency can be enabled or limited by technical tools as well as regulatory guidelines or organizational policies. Articles in this issue address transparency challenges of algorithm use at the macro-, meso-, and micro-level. The macro level describes phenomena from an institutional perspective – which national systems, regulations and cultures play a role in algorithmic decision-making. The meso-level primarily pays attention to the organizational and team level, while the micro-level focuses on individual attributes, such as beliefs, motivation, interactions, and behaviors.
“Calls to ‘keep humans in the loop’ may be moot points if we fail to understand how algorithms impact human decision-making and how algorithmic design impacts the practical possibilities for transparency and human discretion,” notes Rik Peeters, research professor of Public Administration at the Centre for Research and Teaching in Economics (CIDE) in Mexico City. In a review of recent academic literature on the micro-level dynamics of algorithmic systems, he discusses three design variables that determine the preconditions for human transparency and discretion and identifies four main sources of variation in “human-algorithm interaction.”
The article draws two major conclusions: First, human agents are rarely fully “out of the loop,” and levels of oversight and override designed into algorithms should be understood as a continuum. The second pertains to bounded rationality, satisficing behavior, automation bias, and frontline coping mechanisms that play a crucial role in the way humans use algorithms in decision-making processes.
For future research Dr. Peeters suggests taking a closer look at the behavioral mechanisms in combination with identifying relevant skills of bureaucrats in dealing with algorithms. “Without a basic understanding of the algorithms that screen- and street-level bureaucrats have to work with, it is difficult to imagine how they can properly use their discretion and critically assess algorithmic procedures and outcomes. Professionals should have sufficient training to supervise the algorithms with which they are working.”
At the macro-level, algorithms can be an important tool for enabling institutional transparency, writes Alex Ingrams, PhD, Governance and Global Affairs, Institute of Public Administration, Leiden University, Leiden, The Netherlands. This study evaluates a machine-learning approach to open public comments for policymaking to increase institutional transparency of public commenting in a law-making process in the United States. The article applies an unsupervised machine learning analysis of thousands of public comments submitted to the United States Transport Security Administration on a 2013 proposed regulation for the use of new full body imaging scanners in airports. The algorithm highlights salient topic clusters in the public comments that could help policymakers understand open public comments processes. “Algorithms should not only be subject to transparency but can also be used as tool for transparency in government decision-making,” comments Dr. Ingrams.
“Regulatory certainty in combination with organizational and managerial capacity will drive the way the technology is developed and used and what transparency mechanisms are in place for each step,” note the Guest Editors. “On its own these are larger issues to tackle in terms of developing and passing laws or providing training and guidance for public managers and bureaucrats. The fact that they are linked further complicates this process. Highlighting these linkages is a first step towards seeing the bigger picture of why transparency mechanisms are put in place in some scenarios and not in others and opens the door to comparative analyses for future research and new insights for policymakers. To advocate the responsible and transparent use of algorithms, future research should look into the interplay between micro-, meso-, and macro-level dynamics.”
“We are proud to present this special issue, the 100th issue of Information Polity. Its focus on the governance of AI demonstrates our continued desire to tackle contemporary issues in eGovernment and the importance of showcasing excellent research and the insights offered by information polity perspectives,” add Professor Albert Meijer (Utrecht University) and Professor William Webster (University of Stirling), Editors-in-Chief.
This image illustrates the interplay between the various level dynamics,
Here’s a link to, and a citation for, the special issue,
An AI governance publication from the US’s Wilson Center
Within one week of the release of a special issue of Information Polity on AI and governments, a Wilson Center (Woodrow Wilson International Center for Scholars) December 21, 2020 news release (received via email) announces a new publication,
Governing AI: Understanding the Limits, Possibilities, and Risks of AI in an Era of Intelligent Tools and Systems by John Zysman & Mark Nitzberg
In debates about artificial intelligence (AI), imaginations often run wild. Policy-makers, opinion leaders, and the public tend to believe that AI is already an immensely powerful universal technology, limitless in its possibilities. However, while machine learning (ML), the principal computer science tool underlying today’s AI breakthroughs, is indeed powerful, ML is fundamentally a form of context-dependent statistical inference and as such has its limits. Specifically, because ML relies on correlations between inputs and outputs or emergent clustering in training data, today’s AI systems can only be applied in well-specified problem domains, still lacking the context sensitivity of a typical toddler or house-pet. Consequently, instead of constructing policies to govern artificial general intelligence (AGI), decision-makers should focus on the distinctive and powerful problems posed by narrow AI, including misconceived benefits and the distribution of benefits, autonomous weapons, and bias in algorithms. AI governance, at least for now, is about managing those who create and deploy AI systems, and supporting the safe and beneficial application of AI to narrow, well-defined problem domains. Specific implications of our discussion are as follows:
AI applications are part of a suite of intelligent tools and systems and must ultimately be regulated as a set. Digital platforms, for example, generate the pools of big data on which AI tools operate and hence, the regulation of digital platforms and big data is part of the challenge of governing AI. Many of the platform offerings are, in fact, deployments of AI tools. Hence, focusing on AI alone distorts the governance problem.
Simply declaring objectives—be they assuring digital privacy and transparency, or avoiding bias—is not sufficient. We must decide what the goals actually will be in operational terms.
The issues and choices will differ by sector. For example, the consequences of bias and error will differ from a medical domain or a criminal justice domain to one of retail sales.
The application of AI tools in public policy decision making, in transportation design or waste disposal or policing among a whole variety of domains, requires great care. There is a substantial risk of focusing on efficiency when the public debate about what the goals should be in the first place is in fact required. Indeed, public values evolve as part of social and political conflict.
The economic implications of AI applications are easily exaggerated. Should public investment concentrate on advancing basic research or on diffusing the tools, user interfaces, and training needed to implement them?
As difficult as it will be to decide on goals and a strategy to implement the goals of one community, let alone regional or international communities, any agreement that goes beyond simple objective statements is very unlikely.
However, I have found a draft version of the report (Working Paper) published August 26, 2020 on the Social Science Research Network. This paper originated at the University of California at Berkeley as part of a series from the Berkeley Roundtable on the International Economy (BRIE). ‘Governing AI: Understanding the Limits, Possibility, and Risks of AI in an Era of Intelligent Tools and Systems’ is also known as the BRIE Working Paper 2020-5.
Canadian government and AI
The special issue on AI and governance and the paper published by the Wilson Center stimulated my interest in the Canadian government’s approach to governance, responsibility, transparency, and AI.
There is information out there but it’s scattered across various government initiatives and ministries and, above all, it is not easy to find. Whether that’s by design or the blindness and/or ineptitude to be found in all organizations, I leave to wiser judges. (I’ve worked in small companies and they have the same problem. In colloquial terms, ‘the right hand doesn’t know what the left hand is doing’.)
Responsible use? Maybe not after 2019
First there’s a government of Canada webpage, Responsible use of artificial intelligence (AI). Other than a note at the bottom of the page “Date modified: 2020-07-28,” all of the information dates from 2016 up to March 2019 (which you’ll find on ‘Our Timeline’). Is nothing new happening?
For anyone interested in responsible use, there are two sections, “Our guiding principles” and “Directive on Automated Decision-Making,” that answer some questions. I found the ‘Directive’ to be more informative with its definitions, objectives, and, even, consequences. Sadly, you need to keep clicking to find consequences and you’ll end up on The Framework for the Management of Compliance. Interestingly, deputy heads are assumed to be in charge of managing non-compliance. I wonder how employees deal with a non-compliant deputy head?
What about the government’s digital service?
You might think Canadian Digital Service (CDS) might also have some information about responsible use. CDS was launched in 2017, according to Luke Simon’s July 19, 2017 article on Medium,
In case you missed it, there was some exciting digital government news in Canada Tuesday. The Canadian Digital Service (CDS) launched, meaning Canada has joined other nations, including the US and the UK, that have a federal department dedicated to digital.
At the time, Simon was Director of Outreach at Code for Canada.
Presumably, CDS, from an organizational perspective, is somehow attached to the Minister of Digital Government (it’s a position with virtually no governmental infrastructure as opposed to the Minister of Innovation, Science and Economic Development who is responsible for many departments and agencies). The current minister is Joyce Murray whose government profile offers almost no information about her work on digital services. Perhaps there’s a more informative profile of the Minister of Digital Government somewhere on a government website.
Meanwhile, they are friendly folks at CDS but they don’t offer much substantive information. From the CDS homepage,
Our aim is to make services easier for government to deliver. We collaborate with people who work in government to address service delivery problems. We test with people who need government services to find design solutions that are easy to use.
At the Canadian Digital Service (CDS), we partner up with federal departments to design, test and build simple, easy to use services. Our goal is to improve the experience – for people who deliver government services and people who use those services.
How it works
We work with our partners in the open, regularly sharing progress via public platforms. This creates a culture of learning and fosters best practices. It means non-partner departments can apply our work and use our resources to develop their own services.
Together, we form a team that follows the ‘Agile software development methodology’. This means we begin with an intensive ‘Discovery’ research phase to explore user needs and possible solutions to meeting those needs. After that, we move into a prototyping ‘Alpha’ phase to find and test ways to meet user needs. Next comes the ‘Beta’ phase, where we release the solution to the public and intensively test it. Lastly, there is a ‘Live’ phase, where the service is fully released and continues to be monitored and improved upon.
Between the Beta and Live phases, our team members step back from the service, and the partner team in the department continues the maintenance and development. We can help partners recruit their service team from both internal and external sources.
Before each phase begins, CDS and the partner sign a partnership agreement which outlines the goal and outcomes for the coming phase, how we’ll get there, and a commitment to get them done.
As you can see, there’s not a lot of detail and they don’t seem to have included anything about artificial intelligence as part of their operation. (I’ll come back to the government’s implementation of artificial intelligence and information technology later.)
Does the Treasury Board of Canada have charge of responsible AI use?
I think so but there are government departments/ministries that also have some responsibilities for AI and I haven’t seen any links back to the Treasury Board documentation.
The Treasury Board of Canada represents a key entity within the federal government. As an important cabinet committee and central agency, it plays an important role in financial and personnel administration. Even though the Treasury Board plays a significant role in government decision making, the general public tends to know little about its operation and activities. [emphasis mine] The following article provides an introduction to the Treasury Board, with a focus on its history, responsibilities, organization, and key issues.
I haven’t read the entire document but the table of contents doesn’t include a heading for artificial intelligence and there wasn’t any mention of it in the opening comments.
But isn’t there a Chief Information Officer for Canada?
Herein lies a tale (I doubt I’ll ever get the real story) but the answer is a qualified ‘no’. The Chief Information Officer for Canada, Alex Benay (there is an AI aspect) stepped down in September 2019 to join a startup company according to an August 6, 2019 article by Mia Hunt for Global Government Forum,
Alex Benay has announced he will step down as Canada’s chief information officer next month to “take on new challenge” at tech start-up MindBridge.
“It is with mixed emotions that I am announcing my departure from the Government of Canada,” he said on Wednesday in a statement posted on social media, describing his time as CIO as “one heck of a ride”.
He said he is proud of the work the public service has accomplished in moving the national digital agenda forward. Among these achievements, he listed the adoption of public Cloud across government; delivering the “world’s first” ethical AI management framework; [emphasis mine] renewing decades-old policies to bring them into the digital age; and “solidifying Canada’s position as a global leader in open government”.
He also led the introduction of new digital standards in the workplace, and provided “a clear path for moving off” Canada’s failed Phoenix pay system. [emphasis mine]
Since September 2019, Mr. Benay has moved again according to a November 7, 2019 article by Meagan Simpson on the BetaKit website (Note: Links have been removed),
Alex Benay, the former CIO [Chief Information Officer] of Canada, has left his role at Ottawa-based Mindbridge after a short few months stint.
The news came Thursday, when KPMG announced that Benay was joining the accounting and professional services organization as partner of digital and government solutions. Benay originally announced that he was joining Mindbridge in August, after spending almost two and a half years as the CIO for the Government of Canada.
Benay joined the AI startup as its chief client officer and, at the time, was set to officially take on the role on September 3rd. According to Benay’s LinkedIn, he joined Mindbridge in August, but if the September 3rd start date is correct, Benay would have only been at Mindbridge for around three months. The former CIO of Canada was meant to be responsible for Mindbridge’s global growth as the company looked to prepare for an IPO in 2021.
Benay told The Globe and Mail that his decision to leave Mindbridge was not a question of fit, or that he considered the move a mistake. He attributed his decision to leave to conversations with Mindbridge customer KPMG, over a period of three weeks. Benay told The Globe that he was drawn to the KPMG opportunity to lead its digital and government solutions practice, something that was more familiar to him given his previous role.
Mindbridge has not completely lost what was touted as a star hire, though, as Benay will be staying on as an advisor to the startup. “This isn’t a cutting the cord and moving on to something else completely,” Benay told The Globe. “It’s a win-win for everybody.”
Via Mr. Benay, I’ve re-introduced artificial intelligence and introduced the Phoenix Pay system and now I’m linking them to government implementation of information technology in a specific case and speculating about implementation of artificial intelligence algorithms in government.
Phoenix Pay System Debacle (things are looking up), a harbinger for responsible use of artificial intelligence?
I’m happy to hear that the situation where government employees had no certainty about their paycheques is becoming better. After the ‘new’ Phoenix Pay System was implemented in early 2016, government employees found they might get the correct amount on their paycheque or might find significantly less than they were entitled to or might find huge increases.
The instability alone would be distressing but adding to it with the inability to get the problem fixed must have been devastating. Almost five years later, the problems are being resolved and people are getting paid appropriately, more often.
The estimated cost for fixing the problems was, as I recall, over $1B; I think that was a little optimistic. James Bagnall’s July 28, 2020 article for the Ottawa Citizen provides more detail, although not about the current cost, and is the source of my measured optimism,
Something odd has happened to the Phoenix Pay file of late. After four years of spitting out errors at a furious rate, the federal government’s new pay system has gone quiet.
And no, it’s not because of the even larger drama written by the coronavirus. In fact, there’s been very real progress at Public Services and Procurement Canada [PSPC; emphasis mine], the department in charge of pay operations.
Since January 2018, the peak of the madness, the backlog of all pay transactions requiring action has dropped by about half to 230,000 as of late June. Many of these involve basic queries for information about promotions, overtime and rules. The part of the backlog involving money — too little or too much pay, incorrect deductions, pay not received — has shrunk by two-thirds to 125,000.
These are still very large numbers but the underlying story here is one of long-delayed hope. The government is processing the pay of more than 330,000 employees every two weeks while simultaneously fixing large batches of past mistakes.
While officials with two of the largest government unions — Public Service Alliance of Canada [PSAC] and the Professional Institute of the Public Service of Canada [PIPSC] — disagree the pay system has worked out its kinks, they acknowledge it’s considerably better than it was. New pay transactions are being processed “with increased timeliness and accuracy,” the PSAC official noted.
Neither union is happy with the progress being made on historical mistakes. PIPSC president Debi Daviau told this newspaper that many of her nearly 60,000 members have been waiting for years to receive salary adjustments stemming from earlier promotions or transfers, to name two of the more prominent sources of pay errors.
Even so, the sharp improvement in Phoenix Pay’s performance will soon force the government to confront an interesting choice: Should it continue with plans to replace the system?
Treasury Board, the government’s employer, two years ago launched the process to do just that. Last March, SAP Canada — whose technology underpins the pay system still in use at Canada Revenue Agency — won a competition to run a pilot project. Government insiders believe SAP Canada is on track to build the full system starting sometime in 2023.
When Public Services set out the business case in 2009 for building Phoenix Pay, it noted the pay system would have to accommodate 150 collective agreements that contained thousands of business rules and applied to dozens of federal departments and agencies. The technical challenge has since intensified.
Under the original plan, Phoenix Pay was to save $70 million annually by eliminating 1,200 compensation advisors across government and centralizing a key part of the operation at the pay centre in Miramichi, N.B., where 550 would manage a more automated system.
Instead, the Phoenix Pay system currently employs about 2,300. This includes 1,600 at Miramichi and five regional pay offices, along with 350 each at a client contact centre (which deals with relatively minor pay issues) and client service bureau (which handles the more complex, longstanding pay errors). This has naturally driven up the average cost of managing each pay account — 55 per cent higher than the government’s former pay system according to last fall’s estimate by the Parliamentary Budget Officer.
… As the backlog shrinks, the need for regional pay offices and emergency staffing will diminish. Public Services is also working with a number of high-tech firms to develop ways of accurately automating employee pay using artificial intelligence [emphasis mine].
Given the Phoenix Pay System debacle, it might be nice to see a little information about how the government is planning to integrate more sophisticated algorithms (artificial intelligence) in their operations.
The blonde model or actress mentions that companies applying to Public Services and Procurement Canada for placement on the list must use AI responsibly. Her script does not include a definition or guidelines, which, as previously noted, are on the Treasury Board website.
Public Services and Procurement Canada (PSPC) is putting into operation the Artificial intelligence source list to facilitate the procurement of Canada’s requirements for Artificial intelligence (AI).
After research and consultation with industry, academia, and civil society, Canada identified 3 AI categories and business outcomes to inform this method of supply:
Insights and predictive modelling
PSPC is focused only on procuring AI. If there are guidelines on their website for its use, I did not find them.
I found one more government agency that might have some information about artificial intelligence and guidelines for its use, Shared Services Canada,
Shared Services Canada (SSC) delivers digital services to Government of Canada organizations. We provide modern, secure and reliable IT services so federal organizations can deliver digital programs and services that meet Canadians needs.
Since the Minister of Digital Government, Joyce Murray, is listed on the homepage, I was hopeful that I could find out more about AI and governance and whether or not the Canadian Digital Service was associated with this government ministry/agency. I was frustrated on both counts.
To sum up, I could find no information dated after March 2019 about the Canadian government’s plans for AI, especially responsible management/governance of AI, on a Canadian government website, although I did find guidelines, expectations, and consequences for non-compliance. (Should anyone know which government agency has up-to-date information on its responsible use of AI, please let me know in the Comments.)
In 2017, the Government of Canada appointed CIFAR to develop and lead a $125 million Pan-Canadian Artificial Intelligence Strategy, the world’s first national AI strategy.
CIFAR works in close collaboration with Canada’s three national AI Institutes — Amii in Edmonton, Mila in Montreal, and the Vector Institute in Toronto, as well as universities, hospitals and organizations across the country.
The objectives of the strategy are to:
Attract and retain world-class AI researchers by increasing the number of outstanding AI researchers and skilled graduates in Canada.
Foster a collaborative AI ecosystem by establishing interconnected nodes of scientific excellence in Canada’s three major centres for AI: Edmonton, Montreal, and Toronto.
Advance national AI initiatives by supporting a national research community on AI through training programs, workshops, and other collaborative opportunities.
Understand the societal implications of AI by developing global thought leadership on the economic, ethical, policy, and legal implications [emphasis mine] of advances in AI.
Responsible AI at CIFAR
You can find Responsible AI in a webspace devoted to what they have called, AI & Society. Here’s more from the homepage,
CIFAR is leading global conversations about AI’s impact on society.
The AI & Society program, one of the objectives of the CIFAR Pan-Canadian AI Strategy, develops global thought leadership on the economic, ethical, political, and legal implications of advances in AI. These dialogues deliver new ways of thinking about issues, and drive positive change in the development and deployment of responsible AI.
Canada was the first country in the world to announce a federally-funded national AI strategy, prompting many other nations to follow suit. CIFAR published two reports detailing the global landscape of AI strategies.
I skimmed through the second report and it seems more like a comparative study of various countries’ AI strategies than an overview of responsible use of AI.
Final comments about Responsible AI in Canada and the new reports
I’m glad to see there’s interest in Responsible AI but based on my adventures searching the Canadian government websites and the Pan-Canadian AI Strategy webspace, I’m left feeling hungry for more.
I didn’t find any details about how AI is being integrated into government departments and for what uses. I’d like to know and I’d like to have some say about how it’s used and how the inevitable mistakes will be dealt with.
The great unwashed
What I’ve found is high-minded but, as far as I can tell, there’s absolutely no interest in talking to the ‘great unwashed’. Those of us who are not experts are being left out of these early-stage conversations.
I’m sure we’ll be consulted at some point, but it will be long past the time when our opinions and insights could have had an impact and helped us avoid the problems that experts tend not to see. What we’ll be left with is protest and anger on our part and, finally, grudging admissions and corrections of errors on the government’s part.
Let’s take this as an example. The Phoenix Pay System was implemented in its first phase on Feb. 24, 2016. As I recall, problems developed almost immediately. The second phase of implementation started April 21, 2016. In May 2016, the government hired consultants to fix the problems. On November 29, 2016, the government minister, Judy Foote, admitted a mistake had been made. In February 2017, the government hired consultants to establish what lessons might be learned. By February 15, 2018, the backlog of pay problems amounted to 633,000. Source: James Bagnall, Feb. 23, 2018 ‘timeline‘ for the Ottawa Citizen
Do take a look at the timeline; there’s more to it than what I’ve written here, and I’m sure there’s more to the Phoenix Pay System debacle than a failure to listen to warnings from those who would be directly affected. It’s fascinating, though, how often a failure to listen presages far deeper problems with a project.
The Canadian government, under both Conservative and Liberal administrations, contributed to the Phoenix debacle, but it seems the gravest concern is with senior government bureaucrats. You might think things would have changed since this recounting of the affair in a June 14, 2018 article by Michelle Zilio for the Globe and Mail,
The three public servants blamed by the Auditor-General for the Phoenix pay system problems were not fired for mismanagement of the massive technology project that botched the pay of tens of thousands of public servants for more than two years.
Marie Lemay, deputy minister for Public Services and Procurement Canada (PSPC), said two of the three Phoenix executives were shuffled out of their senior posts in pay administration and did not receive performance bonuses for their handling of the system. Those two employees still work for the department, she said. Ms. Lemay, who refused to identify the individuals, said the third Phoenix executive retired.
In a scathing report last month, Auditor-General Michael Ferguson blamed three “executives” – senior public servants at PSPC, which is responsible for Phoenix − for the pay system’s “incomprehensible failure.” [emphasis mine] He said the executives did not tell the then-deputy minister about the known problems with Phoenix, leading the department to launch the pay system despite clear warnings it was not ready.
Speaking to a parliamentary committee on Thursday, Ms. Lemay said the individuals did not act with “ill intent,” noting that the development and implementation of the Phoenix project were flawed. She encouraged critics to look at the “bigger picture” to learn from all of Phoenix’s failures.
Mr. Ferguson, whose office spoke with the three Phoenix executives as a part of its reporting, said the officials prioritized some aspects of the pay-system rollout, such as schedule and budget, over functionality. He said they also cancelled a pilot implementation project with one department that would have helped it detect problems indicating the system was not ready.
Mr. Ferguson’s report warned the Phoenix problems are indicative of “pervasive cultural problems” [emphasis mine] in the civil service, which he said is fearful of making mistakes, taking risks and conveying “hard truths.”
Speaking to the same parliamentary committee on Tuesday, Privy Council Clerk [emphasis mine] Michael Wernick challenged Mr. Ferguson’s assertions, saying his chapter on the federal government’s cultural issues is an “opinion piece” containing “sweeping generalizations.”
The Privy Council Clerk is the top-level bureaucrat (and there is only one such clerk) in the civil/public service, and I think his quotes are quite telling of “pervasive cultural problems.” There’s a new Privy Council Clerk but, from what I can tell, he was well trained by his predecessor.
Do we really need senior government bureaucrats?
I now have an example of bureaucratic interference, specifically with the Global Public Health Intelligence Network (GPHIN), where it would seem that not much has changed, from a December 26, 2020 article by Grant Robertson for the Globe & Mail,
When Canada unplugged support for its pandemic alert system [GPHIN] last year, it was a symptom of bigger problems inside the Public Health Agency. Experienced scientists were pushed aside, expertise was eroded, and internal warnings went unheeded, which hindered the department’s response to COVID-19
As a global pandemic began to take root in February, China held a series of backchannel conversations with Canada, lobbying the federal government to keep its borders open.
With the virus already taking a deadly toll in Asia, Heng Xiaojun, the Minister Counsellor for the Chinese embassy, requested a call with senior Transport Canada officials. Over the course of the conversation, the Chinese representatives communicated Beijing’s desire that flights between the two countries not be stopped because it was unnecessary.
“The Chinese position on the continuation of flights was reiterated,” say official notes taken from the call. “Mr. Heng conveyed that China is taking comprehensive measures to combat the coronavirus.”
Canadian officials seemed to agree, since no steps were taken to restrict or prohibit travel. To the federal government, China appeared to have the situation under control and the risk to Canada was low. Before ending the call, Mr. Heng thanked Ottawa for its “science and fact-based approach.”
It was a critical moment in the looming pandemic, but the Canadian government lacked the full picture, instead relying heavily on what Beijing was choosing to disclose to the World Health Organization (WHO). Ottawa’s ability to independently know what was going on in China – on the ground and inside hospitals – had been greatly diminished in recent years.
Canada once operated a robust pandemic early warning system and employed a public-health doctor based in China who could report back on emerging problems. But it had largely abandoned those international strategies over the past five years, and was no longer as plugged-in.
By late February, Ottawa seemed to be taking the official reports from China at their word, stating often in its own internal risk assessments that the threat to Canada remained low. But inside the Public Health Agency of Canada (PHAC), rank-and-file doctors and epidemiologists were growing increasingly alarmed at how the department and the government were responding.
“The team was outraged,” one public-health scientist told a colleague in early April, in an internal e-mail obtained by The Globe and Mail, criticizing the lack of urgency shown by Canada’s response during January, February and early March. “We knew this was going to be around for a long time, and it’s serious.”
China had locked down cities and restricted travel within its borders. Staff inside the Public Health Agency believed Beijing wasn’t disclosing the whole truth about the danger of the virus and how easily it was transmitted. “The agency was just too slow to respond,” the scientist said. “A sane person would know China was lying.”
It would later be revealed that China’s infection and mortality rates were played down in official records, along with key details about how the virus was spreading.
But the Public Health Agency, which was created after the 2003 SARS crisis to bolster the country against emerging disease threats, had been stripped of much of its capacity to gather outbreak intelligence and provide advance warning by the time the pandemic hit.
The Global Public Health Intelligence Network, an early warning system known as GPHIN that was once considered a cornerstone of Canada’s preparedness strategy, had been scaled back over the past several years, with resources shifted into projects that didn’t involve outbreak surveillance.
However, a series of documents obtained by The Globe during the past four months, from inside the department and through numerous Access to Information requests, show the problems that weakened Canada’s pandemic readiness run deeper than originally thought. Pleas from the international health community for Canada to take outbreak detection and surveillance much more seriously were ignored by mid-level managers [emphasis mine] inside the department. A new federal pandemic preparedness plan – key to gauging the country’s readiness for an emergency – was never fully tested. And on the global stage, the agency stopped sending experts [emphasis mine] to international meetings on pandemic preparedness, instead choosing senior civil servants with little or no public-health background [emphasis mine] to represent Canada at high-level talks, The Globe found.
The curtailing of GPHIN and allegations that scientists had become marginalized within the Public Health Agency, detailed in a Globe investigation this past July, are now the subject of two federal probes – an examination by the Auditor-General of Canada and an independent federal review, ordered by the Minister of Health.
Those processes will undoubtedly reshape GPHIN and may well lead to an overhaul of how the agency functions in some areas. The first steps will be identifying and fixing what went wrong. With the country now topping 535,000 cases of COVID-19 and more than 14,700 dead, there will be lessons learned from the pandemic.
Prime Minister Justin Trudeau has said he is unsure what role added intelligence [emphasis mine] could have played in the government’s pandemic response, though he regrets not bolstering Canada’s critical supplies of personal protective equipment sooner. But providing the intelligence to make those decisions early is exactly what GPHIN was created to do – and did in previous outbreaks.
Epidemiologists have described in detail to The Globe how vital it is to move quickly and decisively in a pandemic. Acting sooner, even by a few days or weeks in the early going, and throughout, can have an exponential impact on an outbreak, including deaths. Countries such as South Korea, Australia and New Zealand, which have fared much better than Canada, appear to have acted faster in key tactical areas, some using early warning information they gathered. As Canada prepares itself in the wake of COVID-19 for the next major health threat, building back a better system becomes paramount.
If you have time, do take a look at Robertson’s December 26, 2020 article and the July 2020 Globe investigation. As both articles make clear, senior bureaucrats whose chief attribute seems to have been longevity took over, reallocated resources, drove out experts, and crippled the few remaining experts in the system with a series of bureaucratic demands while taking trips to attend meetings (in desirable locations) for which they had no significant or useful input.
The Phoenix and GPHIN debacles bear a resemblance in that senior bureaucrats took over and, in a state of blissful ignorance, made a series of disastrous decisions, bolstered by politicians who seem neither to understand nor care much about the outcomes.
If you think I’m being harsh, watch Canadian Broadcasting Corporation (CBC) reporter Rosemary Barton’s 2020 year-end interview with Prime Minister Trudeau (Note: there are some commercials). Pay special attention to Trudeau’s answer to the first question,
Responsible AI, eh?
Based on the massive mishandling of the Phoenix Pay System implementation, where top bureaucrats did not follow basic and well-established information services procedures, and the mismanagement of the Global Public Health Intelligence Network by top-level bureaucrats, I’m not sure I have much confidence in any Canadian government claims about a responsible approach to using artificial intelligence.
Unfortunately, my confidence may not matter, as implementation is most likely already taking place here in Canada.
Enough with the pessimism. I feel it’s necessary to end this on a mildly positive note. Hurray for the government employees who worked through the Phoenix Pay System debacle, the current and former GPHIN experts who continued to sound warnings, and all those striving to uphold ‘Peace, Order, and Good Government’, the bedrock principles of the Canadian Parliament.
A lot of mistakes have been made but we also do make a lot of good decisions.
What will Artificial Intelligence (AI) mean for society? That’s the question scholars from a variety of disciplines will explore during the inaugural Summer Institute on AI Societal Impacts, Governance, and Ethics. Summer Institute, co-hosted by the Alberta Machine Intelligence Institute (Amii) and CIFAR, with support from UCLA School of Law, takes place July 22-24, 2019 in Edmonton, Canada.
“Recent advances in AI have brought a surge of attention to the field – both excitement and concern,” says co-organizer and UCLA professor, Edward Parson. “From algorithmic bias to autonomous vehicles, personal privacy to automation replacing jobs. Summer Institute will bring together exceptional people to talk about how humanity can receive the benefits and not get the worst harms from these rapid changes.”
Summer Institute brings together experts, grad students and researchers from multiple backgrounds to explore the societal, governmental, and ethical implications of AI. A combination of lectures, panels, and participatory problem-solving, this comprehensive interdisciplinary event aims to build understanding and action around these high-stakes issues.
“Machine intelligence is opening transformative opportunities across the world,” says John Shillington, CEO of Amii, “and Amii is excited to bring together our own world-leading researchers with experts from areas such as law, philosophy and ethics for this important discussion. Interdisciplinary perspectives will be essential to the ongoing development of machine intelligence and for ensuring these opportunities have the broadest reach possible.”
Over the three-day program, 30 graduate-level students and early-career researchers will engage with leading experts and researchers including event co-organizers: Western University’s Daniel Lizotte, Amii’s Alona Fyshe and UCLA’s Edward Parson. Participants will also have a chance to shape the curriculum throughout this uniquely interactive event.
Summer Institute takes place prior to Deep Learning and Reinforcement Learning Summer School, and includes a combined event on July 24th for both Summer Institute and Summer School participants.
Two questions: why are all the summer school faculty either Canada- or US-based? And what about South American, Asian, Middle Eastern, and other thinkers?
One last thought, I wonder if this ‘AI & ethics summer institute’ has anything to do with the Pan-Canadian Artificial Intelligence Strategy, which CIFAR administers and where both the University of Alberta and Vector Institute are members.