Government of Canada launches Advisory Panel on the Federal Research Support System
Members to recommend enhancements to system to position Canadian researchers for success
October 6, 2022 – Ottawa, Ontario
Canada’s success is in large part due to our world-class researchers and their teams who are globally recognized for unleashing bold new ideas, driving technological breakthroughs and addressing complex societal challenges. The Government of Canada recognizes that for Canada to achieve its full potential, support for science and research must evolve as Canadians push beyond what is currently imaginable and continue to find Canadian-made solutions to the world’s toughest problems.
Today [October 6, 2022], the Honourable François-Philippe Champagne, Minister of Innovation, Science and Industry, and the Honourable Jean-Yves Duclos, Minister of Health, launched the Advisory Panel on the Federal Research Support System. Benefiting from the insights of leaders in the science, research and innovation ecosystem, the panel will provide independent, expert policy advice on the structure, governance and management of the federal system supporting research and talent. This will ensure that Canadian researchers are positioned for even more success now and in the future.
As the COVID-19 pandemic and climate crisis have shown, addressing the world’s most pressing challenges requires greater collaboration within the Canadian research community, government and industry, as well as with the international community. A cohesive and agile research support system will ensure Canadian researchers can quickly and effectively respond to the questions of today and tomorrow. Optimizing Canada’s research support system will equip researchers to transcend disciplines and borders, seize new opportunities and be responsive to emerging needs and interests to improve Canadians’ health, well-being and prosperity.
Quotes
“Canada is known for world-class research thanks to the enormous capabilities of our researchers. Canadian researchers transform curiosity into bold new ideas that can significantly enhance Canadians’ lives and well-being. With this advisory panel, our government will ensure our support for their research is just as cutting-edge as Canada’s science and research community.” – The Honourable François-Philippe Champagne, Minister of Innovation, Science and Industry
“Our priority is to support Canada’s world-class scientific community so it can respond effectively to the challenges of today and the future. That’s why we are leveraging the expertise and perspectives of a newly formed advisory panel to maximize the impact of research and downstream innovation, which contributes significantly to Canadians’ well-being and prosperity.” – The Honourable Jean-Yves Duclos, Minister of Health
Quick facts
The Advisory Panel on the Federal Research Support System has seven members, including the Chair. The members were selected by the Minister of Innovation, Science and Industry and the Minister of Health. The panel will consult with experts and stakeholders to draw on their diverse experiences, expertise and opinions.
Since 2016, the Government of Canada has committed more than $14 billion to support research and science across Canada.
Here’s a list of advisory panel members I’ve assembled from the Advisory Panel on the Federal Research Support System: Member biographies webpage,
Frédéric Bouchard (Chair) is Dean of the Faculty of Arts and Sciences at the Université de Montréal, where he has been a professor of philosophy of science since 2005.
Janet Rossant is a Senior Scientist Emeritus in the Developmental and Stem Cell Biology Program, the Hospital for Sick Children and a Professor Emeritus at the University of Toronto’s Department of Molecular Genetics.
[Gilles Patry] is Professor Emeritus and President Emeritus at the University of Ottawa. Following a distinguished career as a consulting engineer, researcher and university administrator, Gilles Patry is now a consultant and board director [Royal Canadian Mint].
Yolande E. Chan joined McGill University’s Desautels Faculty of Management as Dean and James McGill Professor in 2021. Her research focuses on innovation, knowledge strategy, digital strategy, digital entrepreneurship, and business-IT alignment.
Laurel Schafer is a Professor at the Department of Chemistry at the University of British Columbia. Her research focuses on developing novel organometallic catalysts to carry out difficult transformations in small molecule organic chemistry.
Vianne Timmons has been the President and Vice-Chancellor of Memorial University of Newfoundland since 2020. She is a nationally and internationally recognized researcher and advocate in the field of inclusive education.
Dr. Baljit Singh is a highly accomplished researcher, … . He began his role as Vice-President Research at the University of Saskatchewan in 2021, after serving as Dean of the University of Calgary Faculty of Veterinary Medicine (2016 – 2020), and as Associate Dean of Research at the Western College of Veterinary Medicine at the University of Saskatchewan (2010 – 2016).
Nobody from the North. Nobody who’s worked there or lived there or researched there. It’s not the first time I’ve noticed a lack of representation for the North.
Canada’s golden triangle (Montréal, Toronto, Ottawa) is well represented and, as is often the case, there’s representation for other regions: one member from the Prairies, one member from the Maritimes or Atlantic provinces, and one member from the West.
The mandate indicates they could have five to eight members. With seven spots filled, they could include one more member, one from the North.
Even if they don’t add an eighth member, I’m not ready to abandon all hope for involvement from the North when there’s this, from the mandate,
Communications and deliverables
In pursuing its mandate, and to strengthen its advice, the panel may engage with experts and stakeholders to expand access [emphasis mine] to diverse experience, expertise and opinion, and enhance members’ understanding of the topics at hand.
To allow for frank and open discussion, internal panel deliberations among members will be closed.
The panel will deliver a final confidential report by December 2022 [emphasis mine] to the Ministers including recommendations and considerations regarding the modernization of the research support system. A summary of the panel’s observations on the state of the federal research support system may be made public once its deliberations have concluded. The Ministers may also choose to seek confidential advice and/or feedback from the panel on other issues related to the research system.
The panel may also be asked to deliver an interim confidential report to the Ministers by November 2022 [emphases mine], which will provide the panel’s preliminary observations up to that point.
It seems odd there’s no mention of the Pan-Canadian Artificial Intelligence Strategy. It’s my understanding that the funding goes directly from the federal government to the Canadian Institute for Advanced Research (CIFAR), which then distributes the funds. There are other unmentioned science funding agencies, e.g., the National Research Council of Canada and Genome Canada, which (as far as I know) also receive direct funding. It seems that the panel will not be involved in a comprehensive review of Canada’s research support ecosystem.
Plus, I wonder why everything is being kept ‘confidential’. According to the government news release, the panel is tasked with finding ways of “optimizing Canada’s research support system.” Do they have security concerns or is this a temporary state of affairs while government analysts examine the panel’s report?
The Canadian Science Policy Centre (CSPC), in a September 15, 2022 announcement (received via email), described an upcoming event (Age of AI and Big Data – Impact on Justice, Human Rights and Privacy) centered on some of the latest government doings on artificial intelligence and privacy (Bill C-27),
In an increasingly connected world, we share a large amount of our data in our daily lives without our knowledge while browsing online, traveling, shopping, etc. More and more companies are collecting our data and using it to create algorithms or AI. The use of our data against us is becoming more and more common. The algorithms used may often be discriminatory against racial minorities and marginalized people.
As technology moves at a high pace, we have started to incorporate many of these technologies into our daily lives without understanding its consequences. These technologies have enormous impacts on our very own identity and collectively on civil society and democracy.
Recently, the Canadian Government introduced the Artificial Intelligence and Data Act (AIDA) and Bill C-27 [which includes three acts in total] in parliament regulating the use of AI in our society. In this panel, we will discuss how our AI and Big data is affecting us and its impact on society, and how the new regulations affect us.
For some reason, there was no information about the moderator and panelists, other than their names, titles, and affiliations. Here’s a bit more:
Moderator: Yuan Stevens (from her eponymous website’s About page), Note: Links have been removed,
Yuan (“You-anne”) Stevens (she/they) is a legal and policy expert focused on sociotechnical security and human rights.
She works towards a world where powerful actors—and the systems they build—are held accountable to the public, especially when it comes to marginalized communities.
She brings years of international experience to her role at the Leadership Lab at Toronto Metropolitan University [formerly Ryerson University], having examined the impacts of technology on vulnerable populations in Canada, the US and Germany.
Committed to publicly accessible legal and technical knowledge, Yuan has written for popular media outlets such as the Toronto Star and Ottawa Citizen and has been quoted in news stories by the New York Times, the CBC and the Globe & Mail.
Yuan is a research fellow at the Centre for Law, Technology and Society at the University of Ottawa and a research affiliate at Data & Society Research Institute. She previously worked at Harvard University’s Berkman Klein Center for Internet & Society during her studies in law at McGill University.
She has been conducting research on artificial intelligence since 2017 and is currently exploring sociotechnical security as an LL.M candidate at University of Ottawa’s Faculty of Law working under Florian Martin-Bariteau.
Brenda McPhail is the director of the Canadian Civil Liberties Association’s Privacy, Surveillance and Technology Project. Her recent work includes guiding the Canadian Civil Liberties Association’s interventions in key court cases that raise privacy issues, most recently at the Supreme Court of Canada in R v. Marakah and R v. Jones, which focused on privacy rights in sent text messages; research into surveillance of dissent, government information sharing, digital surveillance capabilities and privacy in relation to emergent technologies; and developing resources and presentations to drive public awareness about the importance of privacy as a social good.
Panelist: Nidhi Hegde,
My research has spanned many areas such as resource allocation in networking, smart grids, social information networks, machine learning. Broadly, my interest lies in gaining a fundamental understanding of a given system and the design of robust algorithms.
More recently my research focus has been in privacy in machine learning. I’m interested in understanding how robust machine learning methods are to perturbation, and privacy and fairness constraints, with the goal of designing practical algorithms that achieve privacy and fairness.
Bio
Before joining the University of Alberta, I spent many years in industry research labs. Most recently, I was a Research team lead at Borealis AI (a research institute at Royal Bank of Canada), where my team worked on privacy-preserving methods for machine learning models and other applied problems for RBC. Prior to that, I spent many years in research labs in Europe working on a variety of interesting and impactful problems. I was a researcher at Bell Labs, Nokia, in France from January 2015 to March 2018, where I led a new team focussed on Maths and Algorithms for Machine Learning in Networks and Systems, in the Maths and Algorithms group of Bell Labs. I also spent a few years at the Technicolor Paris Research Lab working on social network analysis, smart grids, and privacy in recommendations.
Benjamin Faveri is a Research and Policy Analyst at the Responsible AI Institute (RAII) [headquartered in Austin, Texas]. Currently, he is developing their Responsible AI Certification Program and leading it through Canada’s national accreditation process. Over the last several years, he has worked on numerous certification program-related research projects such as fishery economics and certification programs, police body-worn camera policy certification, and emerging AI certifications and assurance systems. Before his work at RAII, Benjamin completed a Master of Public Policy and Administration at Carleton University, where he was a Canada Graduate Scholar, Ontario Graduate Scholar, Social Innovation Fellow, and Visiting Scholar at UC Davis School of Law. He holds undergraduate degrees in criminology and psychology, finishing both with first class standing. Outside of work, Benjamin reads about how and why certification and private governance have been applied across various industries.
Panelist: Ori Freiman (from his eponymous website’s About page)
I research at the forefront of technological innovation. This website documents some of my academic activities.
My formal background is in Analytic Philosophy, Library and Information Science, and Science & Technology Studies. Until September 22′ [September 2022], I was a Post-Doctoral Fellow at the Ethics of AI Lab, at the University of Toronto’s Centre for Ethics. Before joining the Centre, I submitted my dissertation, about trust in technology, to The Graduate Program in Science, Technology and Society at Bar-Ilan University.
I have also found a number of overviews and bits of commentary about the Canadian federal government’s proposed Bill C-27, which I think of as an omnibus bill as it includes three proposed Acts.
The lawyers are excited, but I’m starting with the Responsible AI Institute’s (RAII) response, as one of the panelists (Benjamin Faveri) works for them and it offers a view from a closely neighbouring country, from a June 22, 2022 RAII news release, Note: Links have been removed,
Business Implications of Canada’s Draft AI and Data Act
On June 16 [2022], the Government of Canada introduced the Artificial Intelligence and Data Act (AIDA), as part of the broader Digital Charter Implementation Act 2022 (Bill C-27). Shortly thereafter, it also launched the second phase of the Pan-Canadian Artificial Intelligence Strategy.
Both RAII’s Certification Program, which is currently under review by the Standards Council of Canada, and the proposed AIDA legislation adopt the same approach of gauging an AI system’s risk level in context; identifying, assessing, and mitigating risks both pre-deployment and on an ongoing basis; and pursuing objectives such as safety, fairness, consumer protection, and plain-language notification and explanation.
Businesses should monitor the progress of Bill C-27 and align their AI governance processes, policies, and controls to its requirements. Businesses participating in RAII’s Certification Program will already be aware of requirements, such as internal Algorithmic Impact Assessments to gauge risk level and Responsible AI Management Plans for each AI system, which include system documentation, mitigation measures, monitoring requirements, and internal approvals.
…
The AIDA draft is focused on the impact of any “high-impact system”. Companies would need to assess whether their AI systems are high-impact; identify, assess, and mitigate potential harms and biases flowing from high-impact systems; and “publish on a publicly available website a plain-language description of the system” if making a high-impact system available for use. The government elaborated in a press briefing that it will describe in future regulations the classes of AI systems that may have high impact.
The AIDA draft also outlines clear criminal penalties for entities which, in their AI efforts, possess or use unlawfully obtained personal information or knowingly make available for use an AI system that causes serious harm or defrauds the public and causes substantial economic loss to an individual.
If enacted, AIDA would establish the Office of the AI and Data Commissioner, to support Canada’s Minister of Innovation, Science and Economic Development, with powers to monitor company compliance with the AIDA, to order independent audits of companies’ AI activities, and to register compliance orders with courts. The Commissioner would also help the Minister ensure that standards for AI systems are aligned with international standards.
…
Apart from being aligned with the approach and requirements of Canada’s proposed AIDA legislation, RAII is also playing a key role in the Standards Council of Canada’s AI accreditation pilot. The second phase of the Pan-Canadian Artificial Intelligence Strategy includes funding for the Standards Council of Canada to “advance the development and adoption of standards and a conformity assessment program related to AI.”
The AIDA’s introduction shows that while Canada is serious about governing AI systems, its approach to AI governance is flexible and designed to evolve as the landscape changes.
Charles Mandel’s June 16, 2022 article for Betakit (Canadian Startup News and Tech Innovation) provides an overview of the government’s overall approach to data privacy, AI, and more,
The federal Liberal government has taken another crack at legislating privacy with the introduction of Bill C-27 in the House of Commons.
Among the bill’s highlights are new protections for minors as well as Canada’s first law regulating the development and deployment of high-impact AI systems.
“It [Bill C-27] will address broader concerns that have been expressed since the tabling of a previous proposal, which did not become law,” a government official told a media technical briefing on the proposed legislation.
François-Philippe Champagne, the Minister of Innovation, Science and Industry, together with David Lametti, the Minister of Justice and Attorney General of Canada, introduced the Digital Charter Implementation Act, 2022. The ministers said Bill C-27 will significantly strengthen Canada’s private sector privacy law, create new rules for the responsible development and use of artificial intelligence (AI), and continue to put in place Canada’s Digital Charter.
The Digital Charter Implementation Act includes three proposed acts: the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act, and the Artificial Intelligence and Data Act (AIDA) – all of which have implications for Canadian businesses.
Bill C-27 follows an attempt by the Liberals to introduce Bill C-11 in 2020. The latter was the federal government’s attempt to reform privacy laws in Canada, but it failed to gain passage in Parliament after the then-federal privacy commissioner criticized the bill.
The proposed Artificial Intelligence and Data Act is meant to protect Canadians by ensuring high-impact AI systems are developed and deployed in a way that identifies, assesses and mitigates the risks of harm and bias.
For businesses developing or implementing AI this means that the act will outline criminal prohibitions and penalties regarding the use of data obtained unlawfully for AI development or where the reckless deployment of AI poses serious harm and where there is fraudulent intent to cause substantial economic loss through its deployment.
…
An AI and data commissioner will support the minister of innovation, science, and industry in ensuring companies comply with the act. The commissioner will be responsible for monitoring company compliance, ordering third-party audits, and sharing information with other regulators and enforcers as appropriate.
The commissioner would also be expected to outline clear criminal prohibitions and penalties regarding the use of data obtained unlawfully for AI development or where the reckless deployment of AI poses serious harm and where there is fraudulent intent to cause substantial economic loss through its deployment.
…
Canada already collaborates on AI standards to some extent with a number of countries. Canada, France, and 13 other countries launched an international AI partnership to guide policy development and “responsible adoption” in 2020.
The federal government also has the Pan-Canadian Artificial Intelligence Strategy for which it committed an additional $443.8 million over 10 years in Budget 2021. Ahead of the 2022 budget, Trudeau [Canadian Prime Minister Justin Trudeau] had laid out an extensive list of priorities for the innovation sector, including tasking Champagne with launching or expanding national strategy on AI, among other things.
Within the AI community, companies and groups have been looking at AI ethics for some time. Scotiabank donated $750,000 in funding to the University of Ottawa in 2020 to launch a new initiative to identify solutions to issues related to ethical AI and technology development. And Richard Zemel, co-founder of the Vector Institute [formed as part of the Pan-Canadian Artificial Intelligence Strategy], joined Integrate.AI as an advisor in 2018 to help the startup explore privacy and fairness in AI.
When it comes to the Consumer Privacy Protection Act, the Liberals said the proposed act responds to feedback received on the proposed legislation, and is meant to ensure that the privacy of Canadians will be protected, and that businesses can benefit from clear rules as technology continues to evolve.
“A reformed privacy law will establish special status for the information of minors so that they receive heightened protection under the new law,” a federal government spokesperson told the technical briefing.
…
The act is meant to provide greater controls over Canadians’ personal information, including how it is handled by organizations as well as giving Canadians the freedom to move their information from one organization to another in a secure manner.
The act puts the onus on organizations to develop and maintain a privacy management program that includes the policies, practices and procedures put in place to fulfill obligations under the act. That includes the protection of personal information, how requests for information and complaints are received and dealt with, and the development of materials to explain an organization’s policies and procedures.
The bill also ensures that Canadians can request that their information be deleted from organizations.
The bill provides the privacy commissioner of Canada with broad powers, including the ability to order a company to stop collecting data or using personal information. The commissioner will be able to levy significant fines for non-compliant organizations—with fines of up to five percent of global revenue or $25 million, whichever is greater, for the most serious offences.
The proposed Personal Information and Data Protection Tribunal Act will create a new tribunal to enforce the Consumer Privacy Protection Act.
Although the Liberal government said it engaged with stakeholders for Bill C-27, the Council of Canadian Innovators (CCI) expressed reservations about the process. Nick Schiavo, CCI’s director of federal affairs, said it had concerns over the last version of privacy legislation, and had hoped to present those concerns when the bill was studied at committee, but the previous bill died before that could happen.
…
Now the lawyers. Simon Hodgett, Kuljit Bhogal, and Sam Ip have written a June 27, 2022 overview, which highlights the key features from the perspective of Osler, a leading business law firm practising internationally from offices across Canada and in New York.
Maya Medeiros and Jesse Beatson authored a June 23, 2022 article for Norton Rose Fulbright, a global law firm, which notes a few ‘weak’ spots in the proposed legislation,
…
… While the AIDA is directed to “high-impact” systems and prohibits “material harm,” these and other key terms are not yet defined. Further, the quantum of administrative penalties will be fixed only upon the issuance of regulations.
Moreover, the AIDA sets out publication requirements but it is unclear if there will be a public register of high-impact AI systems and what level of technical detail about the AI systems will be available to the public. More clarity should come through Bill C-27’s second and third readings in the House of Commons, and subsequent regulations if the bill passes.
The AIDA may have extraterritorial application if components of global AI systems are used, developed, designed or managed in Canada. The European Union recently introduced its Artificial Intelligence Act, which also has some extraterritorial application. Other countries will likely follow. Multi-national companies should develop a coordinated global compliance program.
…
I have two podcasts from Michael Geist, a lawyer and Canada Research Chair in Internet and E-Commerce Law at the University of Ottawa.
June 26, 2022: The Law Bytes Podcast, Episode 132: Ryan Black on the Government’s Latest Attempt at Privacy Law Reform “The privacy reform bill that is really three bills in one: a reform of PIPEDA, a bill to create a new privacy tribunal, and an artificial intelligence regulation bill. What’s in the bill from a privacy perspective and what’s changed? Is this bill any likelier to become law than an earlier bill that failed to even advance to committee hearings? To help sort through the privacy aspects of Bill C-27, Ryan Black, a Vancouver-based partner with the law firm DLA Piper (Canada) …” (about 45 mins.)
August 15, 2022: The Law Bytes Podcast, Episode 139: Florian Martin-Bariteau on the Artificial Intelligence and Data Act “Critics argue that regulations are long overdue, but have expressed concern about how much of the substance is left for regulations that are still to be developed. Florian Martin-Bariteau is a friend and colleague at the University of Ottawa, where he holds the University Research Chair in Technology and Society and serves as director of the Centre for Law, Technology and Society. He is currently a fellow at Harvard’s Berkman Klein Center for Internet and Society …” (about 38 mins.)
This information about these events and papers comes courtesy of the Metacreation Lab for Creative AI (artificial intelligence) at Simon Fraser University and, as usual for the lab, the emphasis is on music.
Music + AI Reading Group @ Mila x Vector Institute
Philippe Pasquier, Metacreation Lab director and professor, is giving a presentation on Friday, August 12, 2022 at 11 am PST (2 pm EST). Here’s more from the August 10, 2022 Metacreation Lab announcement (received via email),
Metacreation Lab director Philippe Pasquier and PhD researcher Jeff Enns will be presenting next week [tomorrow, August 12, 2022] at the Music + AI Reading Group hosted by Mila. The presentation will be available as a Zoom meeting.
Mila is a community of more than 900 researchers specializing in machine learning and dedicated to scientific excellence and innovation. The institute is recognized for its expertise and significant contributions in areas such as modelling language, machine translation, object recognition and generative models.
Getting back to the Music + AI Reading Group @ Mila x Vector Institute, there is an invitation to join the group which meets every Friday at 2 pm EST, from the Google group page,
…
Feb 24, 2022 – 🎹🧠🚨 Online Music + AI Reading Group @ Mila x Vector Institute 🎹🧠🚨
Dear members of the ISMIR [International Society for Music Information Retrieval] Community,
Together with fellow researchers at Mila (the Québec AI Institute) in Montréal, canada [sic], we have the pleasure of inviting you to join the Music + AI Reading Group @ Mila x Vector Institute. Our reading group gathers every Friday at 2pm Eastern Time. Our purpose is to build an interdisciplinary forum of researchers, students and professors alike, across industry and academia, working at the intersection of Music and Machine Learning.
During each meeting, a speaker presents a research paper of their choice during 45’, leaving 15 minutes for questions and discussion. The purpose of the reading group is to:
– Gather a group of Music+AI/HCI [human-computer interface]/other people to share their research, build collaborations, and meet peer students. We are not constrained to any specific research directions, and all people are welcome to contribute.
– People share research ideas and brainstorm with others.
– Researchers not actively working on music-related topics but interested in the field can join and keep up with the latest research in the area, sharing their thoughts and bringing in their own backgrounds.
Our topics of interest cover (beware: the list is not exhaustive!): 🎹 Music Generation 🧠 Music Understanding 📇 Music Recommendation 🗣 Source Separation and Instrument Recognition 🎛 Acoustics 🗿 Digital Humanities … 🙌 … and more (we are waiting for you :])!
— If you wish to attend one of our upcoming meetings, simply join our Google Group: https://groups.google.com/g/music_reading_group. You will automatically subscribe to our weekly mailing list and be able to contact other members of the group. —
Bravo to the two student organizers for putting this together!
Calliope Composition Environment for music makers
From the August 10, 2022 Metacreation Lab announcement,
Calling all music makers! We’d like to share some exciting news on one of the latest music creation tools from its creators.
Calliope is an interactive environment based on MMM for symbolic music generation in computer-assisted composition. Using this environment, the user can generate or regenerate symbolic music from a “seed” MIDI file by using a practical and easy-to-use graphical user interface (GUI). Through MIDI streaming, the system can interface with your favourite DAW (Digital Audio Workstation) such as Ableton Live, allowing creators to combine the possibilities of generative composition with their preferred virtual instruments and sound design environments.
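The announcement doesn’t publish Calliope’s internals, but the MIDI-streaming hand-off it describes is a standard technique. Here’s a minimal sketch, assuming a Python environment with the mido library; the seed file path and the virtual MIDI port name are placeholders I’ve made up, not details from the Metacreation Lab.

```python
# Minimal sketch (not Calliope itself): stream notes from a "seed" MIDI file
# to a DAW such as Ableton Live over a virtual MIDI port, which is the
# hand-off described above. File path and port name are placeholders.
import mido

SEED_FILE = "seed.mid"            # hypothetical seed MIDI file
PORT_NAME = "IAC Driver Bus 1"    # example virtual port a DAW can listen on (macOS)

seed = mido.MidiFile(SEED_FILE)

with mido.open_output(PORT_NAME) as port:
    for msg in seed.play():       # play() yields the file's messages in real time
        port.send(msg)            # the DAW renders them with its own instruments
```

In a setup like this, the DAW treats the incoming stream as live MIDI input, so generated material can be routed to any virtual instrument or effects chain.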
The project has now entered an open beta-testing phase, and we’re inviting music creators to try the compositional system on their own! Head to the Metacreation website to learn more and register for the beta testing.
You can also listen to a Calliope piece “the synthrider,” an Italo-disco fantasy of a machine, by Philippe Pasquier and Renaud Bougueng Tchemeube for the 2022 AI Song Contest.
3rd Conference on AI Music Creativity (AIMC 2022)
This is an online conference and it’s free, but you do have to register. From the August 10, 2022 Metacreation Lab announcement,
Registration has opened for the 3rd Conference on AI Music Creativity (AIMC 2022), which will be held 13-15 September, 2022. The conference features 22 accepted papers, 14 music works, and 2 workshops. Registered participants will get full access to the scientific and artistic program, as well as conference workshops and virtual social events.
The conference theme is “The Sound of Future Past — Colliding AI with Music Tradition” and I noticed that a number of the organizers are based in Japan. Often, the organizers’ home country gets some extra time in the spotlight, which is what makes these international conferences so interesting and valuable.
Autolume Live
This concerns generative adversarial networks (GANs) and a paper proposing “… Autolume-Live, the first GAN-based live VJing-system for controllable video generation.”
Here’s more from the August 10, 2022 Metacreation Lab announcement,
Jonas Kraasch & Philippe Pasquier recently presented their latest work on the Autolume system at xCoAx, the 10th annual Conference on Computation, Communication, Aesthetics & X. Their paper is an in-depth exploration of the ways that creative artificial intelligence is increasingly used to generate static and animated visuals.
While there are a host of systems to generate images, videos and music videos, there is a lack of real-time video synthesisers for live music performances. To address this gap, Kraasch and Pasquier propose Autolume-Live, the first GAN-based live VJing-system for controllable video generation.
As these things go, the paper is readable even by nonexperts (assuming you have some tolerance for being out of your depth from time to time). Here’s an example of the text and an installation (in Kelowna, BC) from the paper, Autolume-Live: Turning GANs into a Live VJing tool,
Due to the 2020-2022 situation surrounding COVID-19, we were unable to use our system to accompany live performances. We have used different iterations of Autolume-Live to create two installations. We recorded some curated sessions and displayed them at the Distopya sound art festival in Istanbul 2021 (Dystopia Sound and Art Festival 2021) and Light-Up Kelowna 2022 (ARTSCO 2022) [emphasis mine]. In both iterations, we let the audio mapping automatically generate the video without using any of the additional image manipulations. These installations show that the system on its own is already able to generate interesting and responsive visuals for a musical piece.
For the installation at the Distopya sound art festival we trained a StyleGAN2 (-ada) model on abstract paintings and rendered a video using the described Latent Space Traversal mapping. For this particular piece we ran a super-resolution model on the final video as the original video output was in 512×512 and the wanted resolution was 4k. For our piece at Light-Up Kelowna [emphasis mine] we ran Autolume-Live with the Latent Space Interpolation mapping. The display included three urban screens, which allowed us to showcase three renders at the same time. We composed a video triptych using a dataset of figure drawings, a dataset of medical sketches and to tie the two videos together a model trained on a mixture of both datasets.
…
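For readers who want a concrete picture of what an audio-driven ‘latent space interpolation mapping’ can look like, here is a generic sketch, not the authors’ code: it drives a pretrained GAN generator with an audio loudness envelope. The generator object, the latent size, and the library choices (librosa, PyTorch) are all assumptions for illustration.

```python
# Generic sketch of audio-driven latent-space interpolation (the family of
# technique the excerpt describes), not Autolume-Live itself. `generator` is
# assumed to be a pretrained GAN generator mapping a latent vector to an image.
import numpy as np
import librosa
import torch

LATENT_DIM = 512   # assumed latent size (StyleGAN2 commonly uses 512)
FPS = 30           # video frame rate

def audio_envelope(path: str, fps: int = FPS) -> np.ndarray:
    """Per-video-frame loudness in [0, 1], used to drive the interpolation."""
    y, sr = librosa.load(path, sr=None, mono=True)
    hop = int(sr / fps)                                  # one RMS value per video frame
    rms = librosa.feature.rms(y=y, hop_length=hop)[0]
    return (rms - rms.min()) / (rms.max() - rms.min() + 1e-8)

def render_frames(generator: torch.nn.Module, audio_path: str):
    env = audio_envelope(audio_path)
    z0 = torch.randn(1, LATENT_DIM)                      # two anchor points in latent space
    z1 = torch.randn(1, LATENT_DIM)
    frames = []
    for t in env:                                        # louder audio pushes the frame toward z1
        z = (1.0 - float(t)) * z0 + float(t) * z1
        with torch.no_grad():
            frames.append(generator(z))                  # one image tensor per video frame
    return frames
```

Autolume-Live’s actual mappings and live controls are richer than this, but the core idea of moving through a generator’s latent space in time with the music is the same.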
I found some additional information about the installation in Kelowna (from a February 7, 2022 article in The Daily Courier),
…
The artwork is called ‘Autolume Acedia’.
“(It) is a hallucinatory meditation on the ancient emotion called acedia. Acedia describes a mixture of contemplative apathy, nervous nostalgia, and paralyzed angst,” the release states. “Greek monks first described this emotion two millennia ago, and it captures the paradoxical state of being simultaneously bored and anxious.”
Algorithms created the set-to-music artwork but a team of humans associated with Simon Fraser University, including Jonas Kraasch and Philippe Pasquier, was behind the project.
…
These are among the artistic images generated by a form of artificial intelligence now showing nightly on the exterior of the Rotary Centre for the Arts in downtown Kelowna. [downloaded from https://www.kelownadailycourier.ca/news/article_6f3cefea-886c-11ec-b239-db72e804c7d6.html]
You can find the videos used in the installation and more information on the Metacreation Lab’s Autolume Acedia webpage.
Movement and the Metacreation Lab
Here’s a walk down memory lane: Tom Calvert, a professor at Simon Fraser University (SFU) who died on September 28, 2021, laid the groundwork for SFU’s School of Interactive Arts & Technology (SIAT) and, in particular, its studies in movement. From SFU’s In memory of Tom Calvert webpage,
…
As a researcher, Tom was most interested in computer-based tools for user interaction with multimedia systems, human figure animation, software for dance, and human-computer interaction. He made significant contributions to research in these areas resulting in the Life Forms system for human figure animation and the DanceForms system for dance choreography. These are now developed and marketed by Credo Interactive Inc., a software company of which he was CEO.
…
While the Metacreation Lab is largely focused on music, other fields of creativity are also studied, from the August 10, 2022 Metacreation Lab announcement,
MITACS Accelerate award – partnership with Kinetyx
We are excited to announce that the Metacreation Lab researchers will be expanding their work on motion capture and movement data thanks to a new MITACS Accelerate research award.
The project will focus on body pose estimation using Motion Capture data acquisition through a partnership with Kinetyx, a Calgary-based innovative technology firm that develops in-shoe sensor-based solutions for a broad range of sports and performance applications.
Movement Database – MoDa
On the subject of motion data and its many uses in conjunction with machine learning and AI, we invite you to check out the extensive Movement Database (MoDa), led by transdisciplinary artist and scholar Shannon Cyukendall, and AI Researcher Omid Alemi.
Spanning a wide range of categories such as dance, affect-expressive movements, gestures, eye movements, and more, this database offers a wealth of experiments and captured data available in a variety of formats.
MITACS (originally a federal, mathematics-focused Network of Centres of Excellence) is now a funding agency for innovation (most of the funds it distributes come from the federal government).
As for the Calgary-based company (in the province of Alberta for those unfamiliar with Canadian geography), here they are in their own words (from the Kinetyx About webpage),
Kinetyx® is a diverse group of talented engineers, designers, scientists, biomechanists, communicators, and creators, along with an energy trader, and a medical doctor that all bring a unique perspective to our team. A love of movement and the science within is the norm for the team, and we’re encouraged to put our sensory insoles to good use. We work closely together to make movement mean something.
…
We’re working towards a future where movement is imperceptibly quantified and indispensably communicated with insights that inspire action. We’re developing sensory insoles that collect high-fidelity data where the foot and ground intersect. Capturing laboratory quality data, out in the real world, unlocking entirely new ways to train, study, compete, and play. The insights we provide will unlock unparalleled performance, increase athletic longevity, and provide a clear path to return from injury. We transform lives by empowering our growing community to remain moved.
…
We believe that high quality data is essential for us to have a meaningful place in the Movement Metaverse [1]. Our team of engineers, sport scientists, and developers work incredibly hard to ensure that our insoles and the insights we gather from them will meet or exceed customer expectations. The forces that are created and experienced while standing, walking, running, and jumping are inferred by many wearables, but our sensory insoles allow us to measure, in real-time, what’s happening at the foot-ground intersection. Measurements of force and power in addition to other traditional gait metrics, will provide a clear picture of a part of the Kinesome [2] that has been inaccessible for too long. Our user interface will distill enormous amounts of data into meaningful insights that will lead to positive behavioral change.
[1] The Movement Metaverse is the collection of ever-evolving immersive experiences that seamlessly span both the physical and virtual worlds with unprecedented interoperability.
[2] Kinesome is the dynamic characterization and quantification encoded in an individual’s movement and activity. Broadly; an individual’s unique and dynamic movement profile. View the kinesome nft. [Note: Was not able to successfully open link as of August 11, 2022]
“… make movement mean something … .” Really?
The reference to “… energy trader …” had me puzzled but an August 11, 2022 Google search at 11:53 am PST unearthed this,
An energy trader is a finance professional who manages the sales of valuable energy resources like gas, oil, or petroleum. An energy trader is expected to handle energy production and financial matters in such a fast-paced workplace. (May 16, 2022)
The 35th Canadian Conference on Artificial Intelligence will take place virtually in Toronto, Ontario, from 30 May to 3 June, 2022. All presentations and posters will be online, with in-person social events to be scheduled in Toronto for those who are able to attend in-person. Viewing rooms and isolated presentation facilities will be available for all visitors to the University of Toronto during the event.
The event is collocated with the Computer and Robot Vision conferences. These events (AI·CRV 2022) will bring together hundreds of leaders in research, industry, and government, as well as Canada’s most accomplished students. They showcase Canada’s ingenuity, innovation and leadership in intelligent systems and advanced information and communications technology. A single registration lets you attend any session in the two conferences, which are scheduled in parallel tracks.
The conference proceedings are published on PubPub, an open-source, privacy-respecting, and open access online platform. They are submitted to be indexed and abstracted in leading indexing services such as DBLP, ACM, Google Scholar.
I can’t tell if ‘Responsible AI’ has been included as a specific topic in previous conferences but 2022 is definitely hosting a couple of sessions based on that theme, from the Responsible AI activities webpage,
Keynote speaker: Julia Stoyanovich
New York University
“Building Data Equity Systems”
Equity as a social concept — treating people differently depending on their endowments and needs to provide equality of outcome rather than equality of treatment — lends a unifying vision for ongoing work to operationalize ethical considerations across technology, law, and society. In my talk I will present a vision for designing, developing, deploying, and overseeing data-intensive systems that consider equity as an essential objective. I will discuss ongoing technical work, and will place this work into the broader context of policy, education, and public outreach.
Biography: Julia Stoyanovich is an Institute Associate Professor of Computer Science & Engineering at the Tandon School of Engineering, Associate Professor of Data Science at the Center for Data Science, and Director of the Center for Responsible AI at New York University (NYU). Her research focuses on responsible data management and analysis: on operationalizing fairness, diversity, transparency, and data protection in all stages of the data science lifecycle. She established the “Data, Responsibly” consortium and served on the New York City Automated Decision Systems Task Force, by appointment from Mayor de Blasio. Julia developed and has been teaching courses on Responsible Data Science at NYU, and is a co-creator of an award-winning comic book series on this topic. In addition to data ethics, Julia works on the management and analysis of preference and voting data, and on querying large evolving graphs. She holds M.S. and Ph.D. degrees in Computer Science from Columbia University, and a B.S. in Computer Science and in Mathematics & Statistics from the University of Massachusetts at Amherst. She is a recipient of an NSF CAREER award and a Senior Member of the ACM.
Panel on ethical implications of AI
Panelists
Luke Stark, Faculty of Information and Media Studies, Western University
Luke Stark is an Assistant Professor in the Faculty of Information and Media Studies at Western University in London, ON. His work interrogating the historical, social, and ethical impacts of computing and AI technologies has appeared in journals including The Information Society, Social Studies of Science, and New Media & Society, and in popular venues like Slate, The Globe and Mail, and The Boston Globe. Luke was previously a Postdoctoral Researcher in AI ethics at Microsoft Research, and a Postdoctoral Fellow in Sociology at Dartmouth College; he holds a PhD from the Department of Media, Culture, and Communication at New York University, and a BA and MA from the University of Toronto.
Nidhi Hegde, Associate Professor in Computer Science and Amii [Alberta Machine Intelligence Institute] Fellow at the University of Alberta
Nidhi is a Fellow and Canada CIFAR [Canadian Institute for Advanced Research] AI Chair at Amii and an Associate Professor in the Department of Computing Science at the University of Alberta. Before joining UAlberta, she spent many years in industry research labs. Most recently, she was a Research team lead at Borealis AI (a research institute at Royal Bank of Canada), where her team worked on privacy-preserving methods for machine learning models and other applied problems for RBC. Prior to that, she spent many years in research labs in Europe working on a variety of interesting and impactful problems. She was a researcher at Bell Labs, Nokia, in France from January 2015 to March 2018, where she led a new team focussed on Maths and Algorithms for Machine Learning in Networks and Systems, in the Maths and Algorithms group of Bell Labs. She also spent a few years at the Technicolor Paris Research Lab working on social network analysis, smart grids, privacy, and recommendations. Nidhi is an associate editor of the IEEE/ACM Transactions on Networking, and an editor of the Elsevier Performance Evaluation Journal.
Karina Vold, Assistant Professor, Institute for the History and Philosophy of Science and Technology, University of Toronto
Dr. Karina Vold is an Assistant Professor at the Institute for the History and Philosophy of Science and Technology at the University of Toronto. She is also a Faculty Affiliate at the U of T Schwartz Reisman Institute for Technology and Society, a Faculty Associate at the U of T Centre for Ethics, and an Associate Fellow at the University of Cambridge’s Leverhulme Centre for the Future of Intelligence. Vold specialises in Philosophy of Cognitive Science and Philosophy of Artificial Intelligence, and her recent research has focused on human autonomy, cognitive enhancement, extended cognition, and the risks and ethics of AI.
Elissa Strome, Executive Director, Pan-Canadian Artificial Intelligence Strategy at CIFAR
Elissa is Executive Director, Pan-Canadian Artificial Intelligence Strategy at CIFAR, working with research leaders across the country to implement Canada’s national research strategy in AI. Elissa completed her PhD in Neuroscience from the University of British Columbia in 2006. Following a post-doc at Lund University, in Sweden, she decided to pursue a career in research strategy, policy and leadership. In 2008, she joined the University of Toronto’s Office of the Vice-President, Research and Innovation and was Director of Strategic Initiatives from 2011 to 2015. In that role, she led a small team dedicated to advancing the University’s strategic research priorities, including international institutional research partnerships, the institutional strategy for prestigious national and international research awards, and the establishment of the SOSCIP [Southern Ontario Smart Computing Innovation Platform] research consortium in 2012. From 2015 to 2017, Elissa was Executive Director of SOSCIP, leading the 17-member industry-academic consortium through a major period of growth and expansion, and establishing SOSCIP as Ontario’s leading platform for collaborative research and development in data science and advanced computing.
Tutorial on AI and the Law
Prof. Maura R. Grossman, University of Waterloo, and
Hon. Paul W. Grimm, United States District Court for the District of Maryland
AI applications are becoming more and more ubiquitous in almost every field of endeavor, and the same is true as to the legal industry. This panel, consisting of an experienced lawyer and computer scientist, and a U.S. federal trial court judge, will discuss how AI is currently being used in the legal profession, what adoption has been like since the introduction of AI to law in about 2009, what legal and ethical issues AI applications have raised in the legal system, and how a sitting trial court judge approaches AI evidence, in particular, the determination of whether to admit that AI evidence or not, when they are a non-expert.
How is AI being used in the legal industry today?
What has the legal industry’s reaction been to legal AI applications?
What are some of the biggest legal and ethical issues implicated by legal and other AI applications?
How does a sitting trial court judge evaluate AI evidence when making a determination of whether to admit that AI evidence or not?
What considerations go into the trial judge’s decision?
What happens if the judge is not an expert in AI? Do they recuse?
You may recognize the name, Julia Stoyanovich, as she was mentioned here in my March 23, 2022 posting titled, The “We are AI” series gives citizens a primer on AI, a series of peer-to-peer workshops aimed at introducing the basics of AI to the public. There’s also a comic book series associated with it and all of the materials are available for free. It’s all there in the posting.
Virtual Meet and Greet on Responsible AI across Canada
Given the many activities that are fortunately happening around the responsible and ethical aspects of AI here in Canada, we are organizing an event in conjunction with Canadian AI 2022 this year to become familiar with what everyone is doing and what activities they are engaged in.
It would be wonderful to have a unified community here in Canada around responsible AI so we can support each other and find ways to more effectively collaborate and synergize. We are aiming for a casual, discussion-oriented event rather than talks or formal presentations.
The meet and greet will be hosted by Ebrahim Bagheri, Eleni Stroulia and Graham Taylor. If you are interested in participating, please email Ebrahim Bagheri (bagheri@ryerson.ca).
Thank you to the co-chairs for getting the word out about the Responsible AI topic at the conference,
Responsible AI Co-chairs
Ebrahim Bagheri, Professor, Electrical, Computer, and Biomedical Engineering, Ryerson University
Eleni Stroulia, Professor, Department of Computing Science; Acting Vice Dean, Faculty of Science; Director, AI4Society Signature Area, University of Alberta
The organization which hosts these conferences has an almost palindromic abbreviation: CAIAC, read as Canadian Artificial Intelligence Association (CAIA) in English or Association Intelligence Artificiel Canadien (AIAC) in French. Yes, you do have to read it in both languages, and the C at one end or the other gets dropped depending on which language you’re using, which is why it’s only almost a palindrome.
The CAIAC is almost 50 years old (under various previous names) and has its website here.
*April 22, 2022 at 1400 hours PT removed ‘the’ from this section of the headline: “… from 30 May to 3 June, 2022.” and removed period from the end.
Here’s my more or less annual commentary on the newly announced federal budget. This year the 2022/23 Canadian federal budget was presented by Chrystia Freeland, Minister of Finance, on April 7, 2022.
Sadly, the budgets never include a section devoted to science and technology, which makes finding the information a hunting exercise. Here are a few of the science- and technology-related items listed under the budget’s ‘Key Ongoing Actions’,
$8 billion to transform and decarbonize industry and invest in clean technologies and batteries;
$4 billion for the Canada Digital Adoption Program, which launched in March 2022 to help businesses move online, boost their e-commerce presence, and digitalize their businesses;
$1.2 billion to support life sciences and bio-manufacturing in Canada, including investments in clinical trials, bio-medical research, and research infrastructure;
…
$1 billion to the Strategic Innovation Fund to support life sciences and bio-manufacturing firms in Canada and develop more resilient supply chains. This builds on investments made throughout the pandemic with manufacturers of vaccines and therapeutics like Sanofi, Medicago, and Moderna;
…
…
$1 billion for the Universal Broadband Fund (UBF), bringing the total available through the UBF to $2.75 billion, to improve high-speed Internet access and support economic development in rural and remote areas of Canada;
…
$1.2 billion to launch the National Quantum Strategy, Pan-Canadian Genomics Strategy, and the next phase of Canada’s Pan-Canadian Artificial Intelligence Strategy to capitalize on emerging technologies of the future [Please see: the ‘I am confused’ subhead for more about the ‘launches’];
…
Helping small and medium-sized businesses to invest in new technologies and capital projects by allowing for the immediate expensing of up to $1.5 million of eligible investments beginning in 2021;
…
…
While there are proposed investments in digital adoption and the Universal Broadband Fund, there’s no mention of 5G, but perhaps that’s too granular (or specific) for a national budget. I wonder if we’re catching up yet? There have been concerns about our failure to keep pace with telecommunications developments and infrastructure internationally.
Moving on from ‘Key Ongoing Actions’, there are these propositions from Chapter 2: A Strong, Growing, and Resilient Economy (Note: I have not offset the material from the budget in a ‘quote’ form as I want to retain the formatting.),
Creating a Canadian Innovation and Investment Agency
Canadians are a talented, creative, and inventive people. Our country has never been short on good ideas.
But to grow our economy, invention is not enough. Canadians and Canadian companies need to take their new ideas and new technologies and turn them into new products, services, and growing businesses.
However, Canada currently ranks last in the G7 in R&D spending by businesses. This trend has to change. [Note: We’ve been lagging for at least 10 years and we keep talking about catching up.]
Solving Canada’s main innovation challenges—a low rate of private business investment in research, development, and the uptake of new technologies—is key to growing our economy and creating good jobs.
A market-oriented innovation and investment agency—one with private sector leadership and expertise—has helped countries like Finland and Israel transform themselves into global innovation leaders. [Note: The 2021 budget also name-checked Israel.]
The Israel Innovation Authority has spurred the growth of R&D-intensive sectors, like the information and communications technology and autonomous vehicle sectors. The Finnish TEKES [Tekes – The Finnish Funding Agency for Technology and Innovation] helped transform low-technology sectors like forestry and mining into high technology, prosperous, and globally competitive industries.
In Canada, a new innovation and investment agency will proactively work with new and established Canadian industries and businesses to help them make the investments they need to innovate, grow, create jobs, and be competitive in the changing global economy.
Budget 2022 announces the government’s intention to create an operationally independent federal innovation and investment agency, and proposes $1 billion over five years, starting in 2022-23, to support its initial operations. Final details on the agency’s operating budget are to be determined following further consultation later this year.
…
Review of Tax Support to R&D and Intellectual Property
The Scientific Research and Experimental Development (SR&ED) program provides tax incentives to encourage Canadian businesses of all sizes and in all sectors to conduct R&D. The SR&ED program has been a cornerstone of Canada’s innovation strategy. The government intends to undertake a review of the program, first to ensure that it is effective in encouraging R&D that benefits Canada, and second to explore opportunities to modernize and simplify it. Specifically, the review will examine whether changes to eligibility criteria would be warranted to ensure adequacy of support and improve overall program efficiency.
As part of this review, the government will also consider whether the tax system can play a role in encouraging the development and retention of intellectual property stemming from R&D conducted in Canada. In particular, the government will consider, and seek views on, the suitability of adopting a patent box regime [emphasis mine] in order to meet these objectives.
…
I am confused
Let’s start with the 2022 budget’s $1.2 billion to launch the National Quantum Strategy, Pan-Canadian Genomics Strategy, and the next phase of Canada’s Pan-Canadian Artificial Intelligence Strategy. Here’s what I had in my May 4, 2021 posting about the 2021 budget,
Budget 2021 proposes to provide $360 million over seven years, starting in 2021-22, to launch a National Quantum Strategy [emphasis mine]. The strategy will amplify Canada’s significant strength in quantum research; grow our quantum-ready technologies, companies, and talent; and solidify Canada’s global leadership in this area. This funding will also establish a secretariat at the Department of Innovation, Science and Economic Development to coordinate this work.
Budget 2021 proposes to provide $400 million over six years, starting in 2021-22, in support of a Pan-Canadian Genomics Strategy [emphasis mine]. This funding would provide $136.7 million over five years, starting in 2022-23, for mission-driven programming delivered by Genome Canada to kick-start the new Strategy and complement the government’s existing genomics research and innovation programming.
Budget 2021 proposes to provide up to $443.8 million over ten years, starting in 2021-22, in support of the Pan-Canadian Artificial Intelligence Strategy [emphasis mine], …
How many times can you ‘launch’ a strategy?
A patent box regime
So the government is “… encouraging the development and retention of intellectual property stemming from R&D conducted in Canada” and is examining a “patent box regime” with an eye as to how that will help achieve those ends. Interesting!
Here’s how the patent box is described on Wikipedia (Note: Links have been removed),
A patent box is a special very low corporate tax regime used by several countries to incentivise research and development by taxing patent revenues differently from other commercial revenues.[1] It is also known as intellectual property box regime, innovation box or IP box. Patent boxes have also been used as base erosion and profit shifting (BEPS) tools, to avoid corporate taxes.
…
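To make the mechanics concrete, here is a minimal worked sketch of how a patent box lowers a company’s effective tax rate by taxing patent-derived income at a preferential rate. All rates and figures are hypothetical; Canada has not adopted such a regime, and the budget only promises to seek views on one.

```python
# Hypothetical illustration of a patent box: income attributed to patents is
# taxed at a preferential rate, the rest at the general corporate rate.
# All rates and dollar figures are invented for illustration only.
GENERAL_RATE = 0.26      # hypothetical combined corporate rate
PATENT_BOX_RATE = 0.10   # hypothetical preferential rate on patent income

def total_tax(total_income, patent_income):
    """Tax payable when patent-derived income gets the preferential rate."""
    ordinary_income = total_income - patent_income
    return ordinary_income * GENERAL_RATE + patent_income * PATENT_BOX_RATE

income = 10_000_000        # total taxable income
patent_income = 4_000_000  # portion attributed to Canadian-developed IP

without_box = income * GENERAL_RATE
with_box = total_tax(income, patent_income)

print(f"Tax without patent box: ${without_box:,.0f}")       # $2,600,000
print(f"Tax with patent box:    ${with_box:,.0f}")          # $1,960,000
print(f"Effective rate with box: {with_box / income:.1%}")  # 19.6%
```

The lower the rate on patent income, the stronger the incentive to develop and, crucially, keep that intellectual property in the country; the same mechanism is what makes patent boxes attractive as BEPS tools.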
Even if they can find a way to “incentivize” R&D, the government has a problem keeping research in the country. See my September 17, 2021 posting (about the Council of Canadian Academies’ [CCA] ‘Public Safety in the Digital Age’ project) and scroll down about 50% of the way to find this,
…
There appears to be at least one other major security breach; that involving Canada’s only level four laboratory, the Winnipeg-based National Microbiology Lab (NML). (See a June 10, 2021 article by Karen Pauls for Canadian Broadcasting Corporation news online for more details.)
As far as I’m aware, Ortis [very senior civilian RCMP intelligence official Cameron Ortis] is still being held with a trial date scheduled for September 2022 (see Catherine Tunney’s April 9, 2021 article for CBC news online) and, to date, there have been no charges laid in the Winnipeg lab case.
…
The “security breach” involved sending information and sample viruses to another country, without proper documentation or approvals.
While I delved into a particular aspect of public safety in my posting, the CCA’s ‘Public Safety in the Digital Age’ project was very loosely defined and no mention was made of intellectual property. (You can check the “Exactly how did the question get framed?” subheading in the September 17, 2021 posting.)
Research security
While it might be described as ‘shutting the barn door after the horse got out’, there is provision in the 2022 budget for security vis-à-vis our research, from Chapter 2: A Strong, Growing, and Resilient Economy,
Securing Canada’s Research from Foreign Threats
Canadian research and intellectual property can be an attractive target for foreign intelligence agencies looking to advance their own economic, military, or strategic interests. The National Security Guidelines for Research Partnerships, developed in collaboration with the Government of Canada–Universities Working Group in July 2021, help to protect federally funded research.
To implement these guidelines fully, Budget 2022 proposes to provide $159.6 million, starting in 2022-23, and $33.4 million ongoing, as follows:
$125 million over five years, starting in 2022-23, and $25 million ongoing, for the Research Support Fund to build capacity within post-secondary institutions to identify, assess, and mitigate potential risks to research security; and
$34.6 million over five years, starting in 2022-23, and $8.4 million ongoing, to enhance Canada’s ability to protect our research, and to establish a Research Security Centre that will provide advice and guidance directly to research institutions.
Canada’s Critical Minerals and Clean Industrial Strategies
Critical minerals are central to major global industries like clean technology, health care, aerospace, and computing. They are used in phones, computers, and in our cars. [emphases mine] They are already essential to the global economy and will continue to be in even greater demand in the years to come.
Canada has an abundance of a number of valuable critical minerals, but we need to make significant investments to make the most of these resources.
In Budget 2022, the federal government intends to make significant investments that would focus on priority critical mineral deposits, while working closely with affected Indigenous groups and through established regulatory processes. These investments will contribute to the development of a domestic zero-emissions vehicle value chain, including batteries, permanent magnets, and other electric vehicle components. They will also secure Canada’s place in important supply chains with our allies and implement a just and sustainable Critical Minerals Strategy.
In total, Budget 2022 proposes to provide up to $3.8 billion in support over eight years, on a cash basis, starting in 2022-23, to implement Canada’s first Critical Minerals Strategy. This will create thousands of good jobs, grow our economy, and make Canada a vital part of the growing global critical minerals industry.
…
I don’t recall seeing mining being singled out before and I’m glad to see it now.
A 2022 federal budget commentary from University Affairs
Hannah Liddle’s April 8, 2022 article for University Affairs is focused largely on the budget’s impact on scientific research and she picked up on a few things I missed,
Budget 2022 largely focuses on housing affordability, clean growth and defence, with few targeted investments in scientific research.
…
The government tabled $1 billion over five years for an innovation and investment agency, designed to boost private sector investments in research and development, and to correct the slow uptake of new technologies across Canadian industries. The new agency represents a “huge evolution” in federal thinking about innovation, according to Higher Education Strategy Associates. The company noted in a budget commentary that Ottawa has shifted to solving the problem of low spending on research and development by working with the private sector, rather than funding universities as an alternative. The budget also indicated that the innovation and investment agency will support the defence sector and boost defence manufacturing, but the promised Canada Advanced Research Projects Agency – which was to be modelled after the famed American DARPA program – was conspicuously missing from the budget. [emphases mine]
However, the superclusters were mentioned and have been rebranded [emphasis mine] and given a funding boost. The five networks are now called “global innovation clusters,” [emphasis mine] and will receive $750 million over six years, which is half of what they had reportedly asked for. Many universities and research institutions are members of the five clusters, which are meant to bring together government, academia, and industry to create new companies, jobs, intellectual property, and boost economic growth.
Other notable innovation-related investments include the launch of a critical minerals strategy, which will give the country’s mining sector $3.8 billion over eight years. The strategy will support the development of a domestic zero-emission vehicle value chain, including for batteries (which are produced using critical minerals). The National Research Council will receive funding through the strategy, shared with Natural Resources Canada, to support new technologies and bolster supply chains of critical minerals such as lithium and cobalt. The government has also targeted investments in the semiconductor industry ($45 million over four years), the CAN Health Network ($40 million over four years), and the Canadian High Arctic Research Station ($14.5 million over five years).
Canada’s higher education institutions did notch a win with a major investment in agriculture research. The government will provide $100 million over six years to support postsecondary research in developing new agricultural technologies and crop varieties, which could push forward net-zero emissions agriculture.
…
The Canada Excellence Research Chairs program received $38.3 million in funding over four years beginning in 2023-24, with the government stating this could create 12 to 25 new chair positions.
To support Canadian cybersecurity, which is a key priority under the government’s $8 billion defence umbrella, the budget gives $17.7 million over five years and $5.5 million thereafter until 2031-32 for a “unique research chair program to fund academics to conduct research on cutting-edge technologies” relevant to the Communications Security Establishment – the national cryptologic agency. The inaugural chairs will split their time between peer-reviewed and classified research.
…
The federal granting councils will be given $40.9 million over five years beginning in 2022-23, and $9.7 million ongoing, to support Black “student researchers,” who are among the underrepresented groups in the awarding of scholarships, grants and fellowships. Additionally, the federal government will give $1.5 million to the Jean Augustine Chair in Education, Community and Diaspora, housed at York University, to address systemic barriers and racial inequalities in the Canadian education system and to improve outcomes for Black students.
…
A pretty comprehensive listing of all the science-related funding in the 2022 budget can be found in an April 7, 2022 posting on the Evidence for Democracy (E4D) blog,
The CSPC Budget Symposium will be held on Thursday April 21 [2022] at 12:00 pm (EST), and feature numerous speakers from across the country and across different sectors, in two sessions and one keynote presentation by Dave Watters titled: “Decoding Budget 2022 for Science and Innovation”.
Don’t miss this session and all insightful discussions of the Federal Budget 2022.
By the way, David Watters gave the keynote address for the 2021 symposium too. Seeing his name twice now aroused my curiosity. Here’s a little more about David Watters (from a 2013 bio on the Council of Canadian Academies website; note that he is still president),
David Watters is President of the Global Advantage Consulting Group, a strategic management consulting firm that provides advice to corporate, association, and government clients in Canada and abroad.
Mr. Watters worked for over 30 years in the federal public service in a variety of departments, including Energy Mines and Resources, Consumer and Corporate Affairs, Industry Canada (as Assistant Deputy Minister), Treasury Board Secretariat (in charge of Crown corporations and privatization issues), the Canadian Coast Guard (as its Commissioner) and Finance Canada (as Assistant Deputy Minister for Economic Development and Corporate Finance). He then moved to the Public Policy Forum where he worked on projects dealing with the innovation agenda, particularly in areas such as innovation policy, health reform, transportation, and the telecommunications and information technology sectors. He also developed reports on the impact of the Enron scandal and other corporate and public sector governance problems for Canadian regulators.
Since starting the Global Advantage Consulting Group in 2002, Mr. Watters has assisted a variety of public and private clients. His areas of specialization and talent are in creating visual models for policy development and decision making, and business models for managing research and technology networks. He has also been an adjunct professor at the Telfer School of Management at the University of Ottawa, teaching International Negotiation.
Mr. Watters holds a Bachelor’s degree in Economics from Queen’s University as well as a Law degree in corporate, commercial and tax law from the Faculty of Law at Queen’s University.
So, an economist, lawyer, and government bureaucrat is going to analyze the budget with regard to science and R&D? If I had to guess, I’d say he’s going to focus on** ‘innovation’ which I’m decoding as a synonym for ‘business/commercialization’.
Getting back to the budget, it’s pretty medium where science is concerned with more than one ‘re-announcement’**. As the pundits have noted, the focus is on deficit reduction and propping up the economy.
ETA April 20, 2022: There’s been a keynote speaker change, from an April 20, 2022 CSPC announcement (received via email),
… keynote presentation by Omer Kaya, CEO of Global Advantage Consulting Group. Unfortunately, due to unexpected circumstances, Dave Watters will not be presenting at this session as expected before.
**Two minor changes made, ‘in’ to ‘on’ and a hyphen (-) replaced by a single quote (‘) on March 30, 2023.
The Canadian Broadcasting Corporation’s (CBC) science television series, The Nature of Things, which has been broadcast since November 1960, explored the world of emotional, empathic and creative artificial intelligence (AI) in a Friday, November 19, 2021 telecast titled, The Machine That Feels,
The Machine That Feels explores how artificial intelligence (AI) is catching up to us in ways once thought to be uniquely human: empathy, emotional intelligence and creativity.
As AI moves closer to replicating humans, it has the potential to reshape every aspect of our world – but most of us are unaware of what looms on the horizon.
Scientists see AI technology as an opportunity to address inequities and make a better, more connected world. But it also has the capacity to do the opposite: to stoke division and inequality and disconnect us from fellow humans. The Machine That Feels, from The Nature of Things, shows viewers what they need to know about a field that is advancing at a dizzying pace, often away from the public eye.
What does it mean when AI makes art? Can AI interpret and understand human emotions? How is it possible that AI creates sophisticated neural networks that mimic the human brain? The Machine That Feels investigates these questions, and more.
In Vienna, composer Walter Werzowa has — with the help of AI — completed Beethoven’s previously unfinished 10th symphony. By feeding data about Beethoven, his music, his style and the original scribbles on the 10th symphony into an algorithm, AI has created an entirely new piece of art.
In Atlanta, Dr. Ayanna Howard and her robotics lab at Georgia Tech are teaching robots how to interpret human emotions. Where others see problems, Howard sees opportunity: how AI can help fill gaps in education and health care systems. She believes we need a fundamental shift in how we perceive robots: let’s get humans and robots to work together to help others.
At Tufts University in Boston, a new type of biological robot has been created: the xenobot. The size of a grain of sand, xenobots are grown from frog heart and skin cells, and combined with the “mind” of a computer. Programmed with a specific task, they can move together to complete it. In the future, they could be used for environmental cleanup, digesting microplastics and targeted drug delivery (like releasing chemotherapy compounds directly into tumours).
The film includes interviews with global leaders, commentators and innovators from the AI field, including Geoff Hinton, Yoshua Bengio, Ray Kurzweil and Douglas Coupland, who highlight some of the innovative and cutting-edge AI technologies that are changing our world.
The Machine That Feels focuses on one central question: in the flourishing age of artificial intelligence, what does it mean to be human?
I’ll get back to that last bit, “… what does it mean to be human?” later.
There’s a lot to appreciate in this 44 min. programme. As you’d expect, there was a significant chunk of time devoted to research being done in the US but Poland and Japan also featured and Canadian content was substantive. A number of tricky topics were covered and transitions from one topic to the next were smooth.
In the end credits, I counted over 40 source materials from Getty Images, Google Canada, Gatebox, amongst others. It would have been interesting to find out which segments were produced by CBC.
David Suzuki’s (programme host) script was well written and his narration was enjoyable, engaging, and non-intrusive. That last quality is not always true of CBC hosts who can fall into the trap of overdramatizing the text.
Drilling down
I have followed artificial intelligence stories in a passive way (i.e., I don’t seek them out) for many years. Even so, there was a lot of material in the programme that was new to me.
In the The Machine That Feels, a documentary from The Nature of Things, we meet Kondo Akihiko, a Tokyo resident who “married” a hologram of virtual pop singer Hatsune Miku using a certificate issued by Gatebox (the marriage isn’t recognized by the state, and Gatebox acknowledges the union goes “beyond dimensions”).
Overall, this Nature of Things episode embraces certainty, which means the question of what it means to be human is referenced rather than seriously discussed. It is an unanswerable philosophical question, and the programme is ill-equipped to address it, especially since none of the commentators are philosophers or seem inclined to philosophize.
The programme presents AI as a juggernaut. Briefly mentioned is the notion that we need to make some decisions about how our juggernaut is developed and utilized. No one discusses how we go about making changes to systems that are already making critical decisions for us. (For more about AI and decision-making, see my February 28, 2017 posting and scroll down to the ‘Algorithms and big data’ subhead for Cathy O’Neil’s description of how important decisions that affect us are being made by AI systems. She is the author of the 2016 book, ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’; still a timely read.)
In fact, the programme’s tone is mostly one of breathless excitement. A few misgivings are expressed. For example, one woman who has an artificial ‘texting friend’ (Replika, a chatbot app) noted that it can ‘get into your head’: she described a chat in which her ‘friend’ told her that all of a woman’s worth is based on her body. She pushed back but intimated that someone more vulnerable could find that messaging difficult to deal with.
The sequence featuring Akihiko and his hologram ‘wife’ is followed by one suggesting that people might become more isolated and emotionally stunted as they interact with artificial friends. It should be noted, Akihiko’s wife is described as ‘perfect’. I gather perfection means that you are always understanding and have no needs of your own. She also seems to be about 18″ high.
Akihiko has obviously been asked about his ‘wife’ before as his answers are ready. They boil down to “there are many types of relationships” and there’s nothing wrong with that. It’s an intriguing thought which is not explored.
Also unexplored, these relationships could be said to resemble slavery. After all, you pay for these friends over which you have control. But perhaps that’s alright since AI friends don’t have consciousness. Or do they? In addition to not being able to answer the question, “what is it to be human?” we still can’t answer the question, “what is consciousness?”
AI and creativity
The Nature of Things team works fast. ‘Beethoven X – The AI Project’ had its first performance on October 9, 2021. (See my October 1, 2021 post ‘Finishing Beethoven’s unfinished 10th Symphony’ for more information from Ahmed Elgammal’s (Director of the Art & AI Lab at Rutgers University) technical perspective on the project.
Briefly, Beethoven died before completing his 10th symphony, and a number of computer scientists, musicologists, and musicians collaborated with an AI system to finish it.)
The one listener (Felix Mayer, music professor at the Technical University of Munich) in the hall during a performance doesn’t consider the work to be a piece of music. He does have a point. Beethoven left some notes, but this ‘10th’ is at least partly mathematical guesswork: a set of probabilities, with an algorithm choosing each next note based on how likely it is to follow the notes that came before.
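To make that idea concrete, here is a toy sketch of what ‘choosing the next note based on probability’ can look like. It is a first-order Markov model with invented transition probabilities, not the Beethoven X team’s actual deep-learning system, which was trained on Beethoven’s scores and sketches and is far more sophisticated.

```python
import random

# Toy illustration only: invented transition probabilities between notes.
transitions = {
    "C": {"D": 0.4, "E": 0.3, "G": 0.3},
    "D": {"E": 0.5, "C": 0.3, "F": 0.2},
    "E": {"F": 0.4, "D": 0.3, "C": 0.3},
    "F": {"G": 0.6, "E": 0.4},
    "G": {"C": 0.5, "E": 0.3, "A": 0.2},
    "A": {"G": 0.7, "F": 0.3},
}

def continue_melody(start, length=12):
    """Extend a melody by repeatedly sampling the next note from the
    probability distribution conditioned on the current note."""
    melody = [start]
    for _ in range(length):
        options = transitions[melody[-1]]
        notes, weights = zip(*options.items())
        melody.append(random.choices(notes, weights=weights, k=1)[0])
    return melody

print(continue_melody("C"))  # e.g. ['C', 'E', 'F', 'G', 'C', ...]
```

Every run produces a different, statistically plausible continuation; none of them involves anything you could call musical intent, which is Mayer’s point.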
There was another artist represented in the programme. Puzzlingly, it was the still-living Douglas Coupland. In my opinion, he’s better known as a visual artist than as a writer (his Wikipedia entry lists him as a novelist first), but he has succeeded greatly in both fields.
What makes his inclusion in the Nature of Things ‘The Machine That Feels’ programme puzzling, is that it’s not clear how he worked with artificial intelligence in a collaborative fashion. Here’s a description of Coupland’s ‘AI’ project from a June 29, 2021 posting by Chris Henry on the Google Outreach blog (Note: Links have been removed),
… when the opportunity presented itself to explore how artificial intelligence (AI) inspires artistic expression — with the help of internationally renowned Canadian artist Douglas Coupland — the Google Research team jumped on it. This collaboration, with the support of Google Arts & Culture, culminated in a project called Slogans for the Class of 2030, which spotlights the experiences of the first generation of young people whose lives are fully intertwined with the existence of AI.
This collaboration was brought to life by first introducing Coupland’s written work to a machine learning language model. Machine learning is a form of AI that provides computer systems the ability to automatically learn from data. In this case, Google research scientists tuned a machine learning algorithm with Coupland’s 30-year body of written work — more than a million words — so it would familiarize itself with the author’s unique style of writing. From there, curated general-public social media posts on selected topics were added to teach the algorithm how to craft short-form, topical statements. [emphases mine]
Once the algorithm was trained, the next step was to process and reassemble suggestions of text for Coupland to use as inspiration to create twenty-five Slogans for the Class of 2030. [emphasis mine]
“I would comb through ‘data dumps’ where characters from one novel were speaking with those in other novels in ways that they might actually do. It felt like I was encountering a parallel universe Doug,” Coupland says. “And from these outputs, the statements you see here in this project appeared like gems. Did I write them? Yes. No. Could they have existed without me? No.” [emphases mine]
So, the algorithm crunched through Coupland’s written work and social media texts to produce suggestions, which Coupland then ‘combed through’ to pick out 25 slogans for the ‘Slogans for the Class of 2030’ project. (Note: In the programme, he says that he started a sentence and then the AI system completed that sentence with material gleaned from his own writings, which brings to mind Exquisite Corpse, a collaborative game for writers originated by the Surrealists, possibly as early as 1918.)
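Google’s posting doesn’t give implementation details, but the ‘start a sentence and let the model finish it’ workflow Coupland describes maps onto standard text-generation tooling. Here is a minimal sketch using an off-the-shelf GPT-2 model from Hugging Face as a stand-in; the actual model Google tuned on Coupland’s corpus, and the social-media curation step, have not been published in this detail, and the prompt below is my own invention.

```python
# Minimal sketch of "start a sentence, let the model complete it."
# This is NOT the Google/Coupland pipeline: their tuned model and data
# are not public. GPT-2 here is just a stand-in; fine-tuning it on an
# author's corpus would be a separate training step.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The class of 2030 will"  # hypothetical slogan opener
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=20,        # keep completions slogan-length
    do_sample=True,           # sample instead of always taking the top token
    temperature=0.9,
    num_return_sequences=5,   # several candidates to 'comb through'
    pad_token_id=tokenizer.eos_token_id,
)

for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```

The human role, as Coupland describes it, is curatorial: generate many candidate completions and keep the gems.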
The ‘slogans’ project also reminds me of William S. Burroughs and the cut-up technique used in his work. From the William S. Burroughs Cut-up technique webpage on the Language is a Virus website (Thank you to Lake Rain Vajra for a very interesting website),
…
The cutup is a mechanical method of juxtaposition in which Burroughs literally cuts up passages of prose by himself and other writers and then pastes them back together at random. This literary version of the collage technique is also supplemented by literary use of other media. Burroughs transcribes taped cutups (several tapes spliced into each other), film cutups (montage), and mixed media experiments (results of combining tapes with television, movies, or actual events). Thus Burroughs’s use of cutups develops his juxtaposition technique to its logical conclusion as an experimental prose method, and he also makes use of all contemporary media, expanding his use of popular culture.
…
[Burroughs says] “All writing is in fact cut-ups. A collage of words read heard overheard. What else? Use of scissors renders the process explicit and subject to extension and variation. Clear classical prose can be composed entirely of rearranged cut-ups. Cutting and rearranging a page of written words introduces a new dimension into writing enabling the writer to turn images in cinematic variation. Images shift sense under the scissors smell images to sound sight to sound to kinesthetic. This is where Rimbaud was going with his color of vowels. And his “systematic derangement of the senses.” The place of mescaline hallucination: seeing colors tasting sounds smelling forms.
…
“The cut-ups can be applied to other fields than writing. Dr Neumann [emphasis mine] in his Theory of Games and Economic behavior introduces the cut-up method of random action into game and military strategy: assume that the worst has happened and act accordingly. … The cut-up method could be used to advantage in processing scientific data. [emphasis mine] How many discoveries have been made by accident? We cannot produce accidents to order. The cut-ups could add new dimension to films. Cut gambling scene in with a thousand gambling scenes all times and places. Cut back. Cut streets of the world. Cut and rearrange the word and image in films. There is no reason to accept a second-rate product when you can have the best. And the best is there for all. Poetry is for everyone . . .”
Here’s Burroughs on the history of writers and cutups (thank you to QUEDEAR for posting this clip),
You can hear Burroughs talk about the technique and how he started using it in 1959.
There is no explanation from Coupland as to how his project differs substantively from Burroughs’ cut-ups or a session of Exquisite Corpse. The use of a computer programme to crunch through data and give output doesn’t seem all that exciting. *(More about computers and chatbots at the end of this posting.)* It’s hard to know if this was an interview situation where he wasn’t asked the question or if the editors decided against including it.
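For contrast, here is how little code a mechanical, Burroughs-style cut-up requires. This toy sketch is mine; it is not anything Burroughs (who used scissors and tape) or Coupland actually used.

```python
import random

def cut_up(text, fragment_len=4, seed=None):
    """Crudely automated Burroughs-style cut-up: slice a passage into
    short word fragments, shuffle them, and paste them back together."""
    rng = random.Random(seed)
    words = text.split()
    fragments = [words[i:i + fragment_len]
                 for i in range(0, len(words), fragment_len)]
    rng.shuffle(fragments)
    return " ".join(word for fragment in fragments for word in fragment)

passage = ("a collage of words read heard overheard use of scissors "
           "renders the process explicit and subject to extension and variation")
print(cut_up(passage, seed=1))
```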
Kazuo Ishiguro?
Given that Ishiguro’s 2021 book (Klara and the Sun) is focused on an artificial friend and raises the question of ‘what does it mean to be human’, as well as the related question, ‘what is the nature of consciousness’, it would have been interesting to hear from him. He spent a fair amount of time looking into research on machine learning in preparation for his book. Maybe he was too busy?
AI and emotions
The work being done by Georgia Tech’s Dr. Ayanna Howard and her robotics lab is fascinating. They are teaching robots how to interpret human emotions. The segment which features researchers teaching and interacting with robots, Pepper and Salt, also touches on AI and bias.
Watching two African American researchers talk about the ways in which AI is unable to read emotions on ‘black’ faces as accurately as ‘white’ faces is quite compelling. It also reinforces the uneasiness you might feel after the ‘Replika’ segment where an artificial friend informs a woman that her only worth is her body.
(Interestingly, Pepper and Salt are produced by SoftBank Robotics, part of SoftBank, a multinational Japanese conglomerate [see a June 28, 2021 article by Ian Carlos Campbell for The Verge], whose entire management team is male according to its About page.)
While Howard is very hopeful about the possibilities of a machine that can read emotions, she doesn’t explore (on camera) any means for pushing back against bias other than training AI by using more black faces to help them learn. Perhaps more representative management and coding teams in technology companies?
While the programme largely focused on AI as an algorithm on a computer, robots can be enabled by AI (as can be seen in the segment with Dr. Howard).
My February 14, 2019 posting features research with a completely different approach to emotions and machines,
“I’ve always felt that robots shouldn’t just be modeled after humans [emphasis mine] or be copies of humans,” he [Guy Hoffman, assistant professor at Cornell University)] said. “We have a lot of interesting relationships with other species. Robots could be thought of as one of those ‘other species,’ not trying to copy what we do but interacting with us with their own language, tapping into our own instincts.”
This brings the question back to, what is consciousness?
What scientists aren’t taught
Dr. Howard notes that scientists are not taught to consider the implications of their work. Her comment reminded me of a question I was asked many years ago after a presentation: did science have any morality? (I said no.)
My reply angered an audience member (a visual artist who was working with scientists at the time) as she took it personally and started defending scientists as good people who care and have morals and values. She failed to understand that the way in which we teach science conforms to a notion that somewhere there are scientific facts which are neutral and objective. Society and its values are irrelevant in the face of the larger ‘scientific truth’ and, as a consequence, you don’t need to teach or discuss how your values or morals affect that truth or what the social implications of your work might be.
Science is practiced without much if any thought to values. By contrast, there is the medical injunction, “Do no harm,” which suggests to me that someone recognized competing values. E.g., If your important and worthwhile research is harming people, you should ‘do no harm’.
The experts, the connections, and the Canadian content
It’s been a while since I’ve seen Ray Kurzweil mentioned but he seems to be getting more attention these days. (See this November 16, 2021 posting by Jonny Thomson titled, “The Singularity: When will we all become super-humans? Are we really only a moment away from “The Singularity,” a technological epoch that will usher in a new era in human evolution?” on The Big Think for more). Note: I will have a little more about evolution later in this post.
Interestingly, Kurzweil is employed by Google these days (see his Wikipedia entry, the column to the right). So is Geoffrey Hinton, another one of the experts in the programme (see Hinton’s Wikipedia entry, the column to the right, under Institutions).
I’m not sure about Yoshua Bengio’s relationship with Google, but he’s a professor at the Université de Montréal and the Scientific Director of Mila (Quebec’s Artificial Intelligence research institute) and IVADO (Institut de valorisation des données). Note: IVADO is not particularly relevant to what’s being discussed in this post.
As for Mila, the Canada Google blog in a November 21, 2016 posting notes a $4.5M grant to the institution,
Google invests $4.5 Million in Montreal AI Research
…
A new grant from Google for the Montreal Institute for Learning Algorithms (MILA) will fund seven faculty across a number of Montreal institutions and will help tackle some of the biggest challenges in machine learning and AI, including applications in the realm of systems that can understand and generate natural language. In other words, better understand a fan’s enthusiasm for Les Canadien [sic].
Google is expanding its academic support of deep learning at MILA, renewing Yoshua Bengio’s Focused Research Award and offering Focused Research Awards to MILA faculty at University of Montreal and McGill University:
Google reaffirmed their commitment to Mila in 2020 with a grant worth almost $4M (from a November 13, 2020 posting on the Mila website, Note: A link has been removed),
Google Canada announced today [November 13, 2020] that it will be renewing its funding of Mila – Quebec Artificial Intelligence Institute, with a generous pledge of nearly $4M over a three-year period. Google previously invested $4.5M US in 2016, enabling Mila to grow from 25 to 519 researchers.
In a piece written for Google’s Official Canada Blog, Yoshua Bengio, Mila Scientific Director, says that this year marked a “watershed moment for the Canadian AI community,” as the COVID-19 pandemic created unprecedented challenges that demanded rapid innovation and increased interdisciplinary collaboration between researchers in Canada and around the world.
“COVID-19 has changed the world forever and many industries, from healthcare to retail, will need to adapt to thrive in our ‘new normal.’ As we look to the future and how priorities will shift, it is clear that AI is no longer an emerging technology but a useful tool that can serve to solve world problems. Google Canada recognizes not only this opportunity but the important task at hand and I’m thrilled they have reconfirmed their support of Mila with an additional $3,95 million funding grant until 22.“
– Yoshua Bengio, for Google’s Official Canada Blog
Interesting, eh? Of course, Douglas Coupland is working with Google, presumably for money, and that would connect over 50% of the Canadian content (Douglas Coupland, Yoshua Bengio, and Geoffrey Hinton; Kurzweil is an American) in the programme to Google.
My hat’s off to Google’s marketing communications and public relations teams.
Anthony Morgan of Science Everywhere also provided some Canadian content. His LinkedIn profile indicates that he’s working on a PhD in molecular science, which is described this way, “My work explores the characteristics of learning environments, that support critical thinking and the relationship between critical thinking and wisdom.”
Morgan is also the founder and creative director of Science Everywhere, from his LinkedIn profile, “An events & media company supporting knowledge mobilization, community engagement, entrepreneurship and critical thinking. We build social tools for better thinking.”
There is this from his LinkedIn profile,
I develop, create and host engaging live experiences & media to foster critical thinking.
I’ve spent my 15+ years studying and working in psychology and science communication, thinking deeply about the most common individual and societal barriers to critical thinking. As an entrepreneur, I lead a team to create, develop and deploy cultural tools designed to address those barriers. As a researcher I study what we can do to reduce polarization around science.
There’s a lot more to Morgan (do look him up; he has connections to the CBC and other media outlets). The difficulty is: why was he chosen to talk about artificial intelligence and emotions and creativity when he doesn’t seem to know much about the topic? He does mention GPT-3, a large language model that generates text. He seems to be acting as an advocate for AI, although he offers this bit of almost cautionary wisdom, “… algorithms are sets of instructions.” (You can find out more about GPT-3 in my April 27, 2021 posting. There’s also this November 26, 2021 posting [The Inherent Limitations of GPT-3] by Andrey Kurenkov, a PhD student with the Stanford [University] Vision and Learning Lab.)
Most of the cautionary commentary comes from Luke Stark, assistant professor at Western [Ontario] University’s Faculty of Information and Media Studies. He’s the one who mentions stunted emotional growth.
Before moving on, there is another set of connections through the Pan-Canadian Artificial Intelligence Strategy, a Canadian government science funding initiative announced in the 2017 federal budget. The funds allocated to the strategy are administered by the Canadian Institute for Advanced Research (CIFAR). Yoshua Bengio through Mila is associated with the strategy and CIFAR, as is Geoffrey Hinton through his position as Chief Scientific Advisor for the Vector Institute.
Evolution
Getting back to “The Singularity: When will we all become super-humans? Are we really only a moment away from “The Singularity,” a technological epoch that will usher in a new era in human evolution?” Xenobots point in a disconcerting (for some of us) evolutionary direction.
I featured the work, which is being done at Tufts University in the US, in my June 21, 2021 posting, which includes an embedded video,
Last year, a team of biologists and computer scientists from Tufts University and the University of Vermont (UVM) created novel, tiny self-healing biological machines from frog cells called “Xenobots” that could move around, push a payload, and even exhibit collective behavior in the presence of a swarm of other Xenobots.
Get ready for Xenobots 2.0.
Also from an excerpt in the posting, the team has “created life forms that self-assemble a body from single cells, do not require muscle cells to move, and even demonstrate the capability of recordable memory.”
Memory is key to intelligence and this work introduces the notion of ‘living’ robots which leads to questioning what constitutes life. ‘The Machine That Feels’ is already grappling with far too many questions to address this development but introducing the research here might have laid the groundwork for the next episode, The New Human, telecast on November 26, 2021,
…
While no one can be certain what will happen, evolutionary biologists and statisticians are observing trends that could mean our future feet only have four toes (so long, pinky toe) or our faces may have new combinations of features. The new humans might be much taller than their parents or grandparents, or have darker hair and eyes.
And while evolution takes a lot of time, we might not have to wait too long for a new version of ourselves.
Technology is redesigning the way we look and function — at a much faster pace than evolution. We are merging with technology more than ever before: our bodies may now have implanted chips, smart limbs, exoskeletons and 3D-printed organs. A revolutionary gene editing technique has given us the power to take evolution into our own hands and alter our own DNA. How long will it be before we are designing our children?
Although the story about the xenobots doesn’t say so, we could also take the evolution of another species into our own hands.
David Suzuki, where are you?
Our programme host, David Suzuki, surprised me. I thought that, as an environmentalist, he’d point out that the huge amounts of computing power needed for artificial intelligence, as mentioned in the programme, constitute an environmental issue. I also would have expected a geneticist like Suzuki to have some concerns with regard to xenobots, but perhaps that’s being saved for the next episode (The New Human) of The Nature of Things.
Artificial stupidity
Thanks to Will Knight for introducing me to the term ‘artificial stupidity’. Knight, a senior writer, covers artificial intelligence for WIRED magazine. According to the term’s Wikipedia entry,
Artificial stupidity is commonly used as a humorous opposite of the term artificial intelligence (AI), often as a derogatory reference to the inability of AI technology to adequately perform its tasks.[1] However, within the field of computer science, artificial stupidity is also used to refer to a technique of “dumbing down” computer programs in order to deliberately introduce errors in their responses.
Knight was using the term in its humorous, derogatory form.
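In the computer-science sense described above, ‘dumbing down’ a program often amounts to deliberately injecting error. Here is a hypothetical sketch (my own invention, not any particular game engine) of an AI player that occasionally ignores its own evaluation:

```python
import random

def best_move(moves):
    """Stand-in for a game engine's evaluation: return the highest-scoring move."""
    return max(moves, key=lambda m: m["score"])

def artificially_stupid_move(moves, blunder_rate=0.3, rng=random):
    """Deliberately 'dumb down' the engine: with probability blunder_rate,
    ignore the evaluation and play a random move instead of the best one."""
    if rng.random() < blunder_rate:
        return rng.choice(moves)
    return best_move(moves)

candidate_moves = [
    {"name": "capture queen", "score": 0.9},
    {"name": "develop knight", "score": 0.5},
    {"name": "push rook pawn", "score": 0.1},
]
print(artificially_stupid_move(candidate_moves))
```

Game developers use tricks like this so that computer opponents feel beatable and human.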
Finally
The episode certainly got me thinking, if not quite in the way the producers might have hoped. ‘The Machine That Feels’ is a glossy, pretty well researched piece of infotainment.
To be blunt, I like and have no problems with infotainment but it can be seductive. I found it easier to remember the artificial friends, wife, xenobots, and symphony than the critiques and concerns.
Hopefully, ‘The Machine That Feels’ stimulates more interest in some very important topics. If you missed the telecast, you can catch the episode here.
For anyone curious about predictive policing, which was mentioned in the Ayanna Howard segment, see my November 23, 2017 posting about Vancouver’s plunge into AI and car theft.
*ETA December 6, 2021: One of the first ‘chatterbots’ was ELIZA, a computer programme developed from 1964 to 1966. The most famous ELIZA script was DOCTOR, in which the programme simulated a therapist. Many early users believed ELIZA understood them and could respond as a human would, despite the insistence of Joseph Weizenbaum, the programme’s creator, that it did not.
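For a sense of how little machinery lay behind that illusion of understanding, here is a toy sketch in the spirit of the DOCTOR script: pattern-matching rules that reflect fragments of the user’s statement back as open-ended questions. The rules below are my own invented examples, not Weizenbaum’s original script.

```python
import random
import re

# Toy rules in the spirit of ELIZA's DOCTOR script: match a pattern, then
# reflect part of the user's statement back as an open-ended question.
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"\bmy (mother|father|family)\b", re.IGNORECASE),
     ["Tell me more about your {0}."]),
]
DEFAULT_REPLIES = ["Please go on.", "How does that make you feel?"]

def respond(statement):
    for pattern, templates in RULES:
        match = pattern.search(statement)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULT_REPLIES)

print(respond("I am worried about my job"))
# e.g. "Why do you think you are worried about my job?"
```

No model of meaning is involved; the pronoun slip in the sample output (‘my job’ reflected back unchanged) is exactly the kind of seam the real DOCTOR script smoothed over with extra substitution rules, yet users still read understanding into it.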
The AICan (Artificial Intelligence Canada) Bulletin is published by CIFAR (Canadian Institute For Advanced Research) and it is the official newsletter for the Pan-Canadian AI Strategy. This is a joint production from CIFAR, Amii (Alberta Machine Intelligence Institute), Mila (Quebec’s Artificial Intelligence research institute) and the Vector Institute for Artificial Intelligence (Toronto, Ontario).
For anyone curious about the Pan-Canadian Artificial Intelligence Strategy, first announced in the 2017 federal budget, I have a March 31, 2017 post which focuses heavily on the, then new, Vector Institute but it also contains information about the artificial intelligence scene in Canada at the time, which is at least in part still relevant today.
The AICan Bulletin October 2021 issue number 16 (The Energy and Environment Issue) is available for viewing here and includes these articles,
The effects of climate change significantly impact our most vulnerable populations. Canada CIFAR AI Chair David Rolnick (Mila) and Tami Vasanthakumaran (Girls Belong Here) share their insights and call to action for the AI research community.
Canada CIFAR AI Chair Samira Kahou (Mila) is using AI to detect and predict extreme weather events to aid in disaster management and raise awareness for the climate crisis.
Amii, the University of Alberta, and ISL Engineering explore how machine learning can make water treatment more environmentally friendly and cost-effective with the support of Amii Fellows and Canada CIFAR AI Chairs — Adam White, Martha White and Csaba Szepesvári.
Immerse yourself into this AI-driven virtual experience based on empathy to visualize the impacts of climate change on places you hold dear with Mila.
…
The bulletin also features AI stories from Canada and the US, as well as events and job postings.
I found two different pages where you can subscribe. First, there’s this subscription page (which is at the bottom of the October 2021 bulletin) and then there’s this page, which requires more details from you.
I’ve taken a look at the CIFAR website and can’t find any of the previous bulletins on it, which would seem to make subscription the only means of access.
As more than one observer has noted, this April 19, 2021 budget is the first in two years. Predictably, there has been some distress over the copious amounts of money being spent to stimulate/restart the economy whether it needs it or not. Some have described this as a pre-election budget. Overall, there seems to be more satisfaction than criticism.
Maybe a little prescient?
After mentioning some of the government’s issues with money (Phoenix Payroll System debacle and WE Charity scandal) in my April 13, 2021 posting about the then upcoming Canadian Science Policy Centre’s post-budget symposium, I had these comments (which surprise even me),
…
None of this has anything to do with science funding (as far as I know) but it does set the stage for questions about how science funding is determined and who will be getting it. There are already systems in place for science funding through various agencies but the federal budget often sets special priorities such as the 2017 Pan-Canadian Artificial Intelligence Strategy [emphasis added April 29, 2021] with its attendant $125M. As well, Prime Minister Justin Trudeau likes to use science as a means of enhancing his appeal. [emphasis mine] See my March 16, 2018 posting for a sample of this, scroll down to the “Sunny ways: a discussion between Justin Trudeau and Bill Nye” subhead.
…
Budget 2021 introduced two new strategies, the first ones since the 2017 budget: the Pan-Canadian Genomics Strategy and the National Quantum Strategy. As for whether this ploy will help enhance Trudeau’s appeal, that seems doubtful given his current plight (see an April 27, 2021 CBC online news item “PM says his office didn’t know Vance allegations were about sexual misconduct” for a description of some of Trudeau’s latest political scandal).
Science in the 2021 budget (a few highlights)
For anyone who wants to take a look at the 2021 Canadian Federal Budget, Chapters Four and Five (in Part Two) seem to contain the bulk of the science funding announcements. Here are the highlights, given my perspective, from Chapter Four (Note: I don’t chime in again until the “A full list …” subhead):
4.6 Investing in World-leading Research and Innovation
A plan for a long-term recovery must look to challenges and opportunities that lie ahead in the years and decades to come. It must be led by a growth strategy that builds on the unique competitive advantages of the Canadian economy, and make sure that Canada is well-positioned to meet the demands of the next century. This work begins with innovation.
To drive growth and create good, well-paying jobs, entrepreneurs and businesses need to be able to translate Canada’s world-class leadership in research into innovative products and services for Canadians, and for the world.
These investments will help cement Canada’s position as a world leader in research and innovation, building a global brand that will attract talent and capital for years to come.
Supporting Innovation and Industrial Transformation
Since its launch in 2017, the Strategic Innovation Fund has been helping businesses invest, grow, and innovate in Canada. Through its efforts to help businesses make the investments they need to succeed, the fund is well-placed to support growth and the creation of good jobs across the Canadian economy—both now and in the future.
Budget 2021 proposes to provide the Strategic Innovation Fund with an incremental $7.2 billion over seven years on a cash basis, starting in 2021-22, and $511.4 million ongoing. This funding will be directed as follows:
$2.2 billion over seven years, and $511.4 million ongoing to support innovative projects across the economy—including in the life sciences, automotive, aerospace, and agriculture sectors.
$5 billion over seven years to increase funding for the Strategic Innovation Fund’s Net Zero Accelerator, as detailed in Chapter 5. Through the Net Zero Accelerator the fund would scale up its support for projects that will help decarbonize heavy industry, support clean technologies and help meaningfully accelerate domestic greenhouse gas emissions reductions by 2030.
The funding proposed in Budget 2021 will build on the Strategic Innovation Fund’s existing resources, including the $3 billion over five years announced in December 2020 for the Net Zero Accelerator. With this additional support, the Strategic Innovation Fund will target investments in important areas of future growth over the coming years to advance multiple strategic objectives for the Canadian economy:
$1.75 billion in support over seven years would be targeted toward aerospace in recognition of the longer-lasting impacts to this sector following COVID-19. This is in addition to the $250 million Aerospace Regional Recovery Initiative, outlined in section 4.2, providing a combined support of $2 billion to help this innovative sector recover and grow out of the crisis.
$1 billion of support over seven years would be targeted toward growing Canada’s life sciences and bio-manufacturing sector, restoring capabilities that have been lost and supporting the innovative Canadian firms and jobs in this sector. This is an important component of Canada’s plan to build domestic resilience and improve long-term pandemic preparedness proposed in Chapter 1, providing a combined $2.2 billion over seven years.
$8 billion over seven years for the Net Zero Accelerator to support projects that will help reduce Canada’s greenhouse gas emissions by expediting decarbonization projects, scaling-up clean technology, and accelerating Canada’s industrial transformation. More details are in Chapter 5.
Renewing the Pan-Canadian Artificial Intelligence Strategy
Artificial intelligence is one of the greatest technological transformations of our age. Canada has communities of research, homegrown talent, and a diverse ecosystem of start-ups and scale-ups. But these Canadian innovators need investment in order to ensure our economy takes advantage of the enormous growth opportunities ahead in this sector. By leveraging our position of strength, we can also ensure that Canadian values are embedded across widely used, global platforms.
Budget 2021 proposes to provide up to $443.8 million over ten years, starting in 2021-22, in support of the Pan-Canadian Artificial Intelligence Strategy, including:
$185 million over five years, starting in 2021-22, to support the commercialization of artificial intelligence innovations and research in Canada.
$162.2 million over ten years, starting in 2021-22, to help retain and attract top academic talent across Canada—including in Alberta, British Columbia, Ontario, and Quebec. This programming will be delivered by the Canadian Institute for Advanced Research.
$48 million over five years, starting in 2021-22, for the Canadian Institute for Advanced Research to renew and enhance its research, training, and knowledge mobilization programs.
$40 million over five years, starting in 2022-23, to provide dedicated computing capacity for researchers at the national artificial intelligence institutes in Edmonton, Toronto, and Montréal.
$8.6 million over five years, starting in 2021-22, to advance the development and adoption of standards related to artificial intelligence.
Launching a National Quantum Strategy
Quantum technology is at the very leading edge of science and innovation today, with enormous potential for commercialization. This emerging field will transform how we develop and design everything from life-saving drugs to next generation batteries, and Canadian scientists and entrepreneurs are well-positioned to take advantage of these opportunities. But they need investments to be competitive in this fast growing global market.
Budget 2021 proposes to provide $360 million over seven years, starting in 2021-22, to launch a National Quantum Strategy. The strategy will amplify Canada’s significant strength in quantum research; grow our quantum-ready technologies, companies, and talent; and solidify Canada’s global leadership in this area. This funding will also establish a secretariat at the Department of Innovation, Science and Economic Development to coordinate this work.
The government will provide further details on the rollout of the strategy in the coming months.
Revitalizing the Canadian Photonics Fabrication Centre
Canada is a world leader in photonics, the technology of generating and harnessing the power of light. This is the science behind fibre optics, advanced semi-conductors, and other cutting-edge technologies, and there is a strong history of Canadian companies bringing this expertise to the world. The National Research Council’s Canadian Photonics Fabrication Centre supplies photonics research, testing, prototyping, and pilot-scale manufacturing services to academics and large, small and medium-sized photonics businesses in Canada. But its aging facility puts this critical research and development at risk.
Budget 2021 proposes to provide $90 million over five years on a cash basis, starting in 2021-22, to the National Research Council to retool and modernize the Canadian Photonics Fabrication Centre. This would allow the centre to continue helping Canadian researchers and companies grow and support highly skilled jobs.
Launching a Pan-Canadian Genomics Strategy
Genomics research is developing cutting-edge therapeutics and is helping Canada track and fight COVID-19. Canada was an early mover in advancing genomics science and is now a global leader in the field. A national approach to support genomics research can lead to breakthroughs that have real world applications. There is an opportunity to improve Canadians’ health and well-being while also creating good jobs and economic growth. Leveraging and commercializing this advantage will give Canadian companies, researchers, and workers a competitive edge in this growing field.
Budget 2021 proposes to provide $400 million over six years, starting in 2021-22, in support of a Pan-Canadian Genomics Strategy. This funding would provide $136.7 million over five years, starting in 2022-23, for mission-driven programming delivered by Genome Canada to kick-start the new Strategy and complement the government’s existing genomics research and innovation programming.
Further investments to grow Canada’s strengths in genomics under the Strategy will be announced in the future.
Conducting Clinical Trials
Canadian scientists are among the best in the world at conducting high-quality clinical trials. Clinical trials lead to the development of new scientifically proven treatments and cures, and improved health outcomes for Canadians. They also create good jobs in the health research sector, including the pharmaceutical sector, and support the creation of new companies, drugs, medical devices, and other health products.
Budget 2021 proposes to provide $250 million over three years, starting in 2021-22, to the Canadian Institutes of Health Research to implement a new Clinical Trials Fund.
Supporting the Innovation Superclusters Initiative
Since it was launched in 2017, the Innovation Superclusters Initiative has helped Canada build successful innovation ecosystems in important areas of the economy. Drawing on the strength and breadth of their networks, the superclusters were able to quickly pivot their operations and played an important role in Canada’s COVID-19 response. For example, the Digital Technology Supercluster allocated resources to projects that used digital technologies and artificial intelligence to help facilitate faster, more accurate diagnosis, treatment, and care of COVID-19 patients.
To help ensure those superclusters that made emergency investments to support Canada’s COVID-19 response and others can continue supporting innovative Canadian projects:
Budget 2021 proposes to provide $60 million over two years, starting in 2021-22, to the Innovation Superclusters Initiative.
Promoting Canadian Intellectual Property
As the most highly educated country in the OECD, Canada is full of innovative and entrepreneurial people with great ideas. Those ideas are valuable intellectual property that are the seeds of huge growth opportunities. Building on the National Intellectual Property Strategy announced in Budget 2018, the government proposes to further support Canadian innovators, start-ups, and technology-intensive businesses. Budget 2021 proposes:
$90 million, over two years, starting in 2022-23, to create ElevateIP, a program to help accelerators and incubators provide start-ups with access to expert intellectual property services.
$75 million over three years, starting in 2021-22, for the National Research Council’s Industrial Research Assistance Program to provide high-growth client firms with access to expert intellectual property services.
These direct investments would be complemented by a Strategic Intellectual Property Program Review that will be launched. It is intended as a broad assessment of intellectual property provisions in Canada’s innovation and science programming, from basic research to near-commercial projects. This work will make sure Canada and Canadians fully benefit from innovations and intellectual property.
Capitalizing on Space-based Earth Observation
Earth observation satellites support critical services that Canadians rely on. They provide reliable weather forecasts, support military and transport logistics, help us monitor and fight climate change, and support innovation across sectors, including energy and agriculture. They also create high-quality jobs in Canada and the government will continue to explore opportunities to support Canadian capacity, innovation, and jobs in this sector. To maintain Canada’s capacity to collect and use important data from these satellites, Budget 2021 proposes to provide:
$80.2 million over eleven years, starting in 2021-22, with $14.9 million in remaining amortization and $6.2 million per year ongoing, to Natural Resources Canada and Environment and Climate Change Canada to replace and expand critical but aging ground-based infrastructure to receive satellite data.
$9.9 million over two years, starting in 2021-22, to the Canadian Space Agency to plan for the next generation of Earth observation satellites.
Science and Technology Collaboration with Israeli Firms
Collaborating with global innovation leaders allows Canadian companies to leverage expertise to create new products and services, support good jobs, and reach new export markets.
Budget 2021 proposes to provide additional funding of $10 million over five years, starting in 2021-2022, and $2 million per year ongoing, to expand opportunities for Canadian SMEs to engage in research and development partnerships with Israeli SMEs as part of the Canadian International Innovation Program. This will be sourced from existing Global Affairs Canada resources. The government also intends to implement an enhanced delivery model for this program, including possible legislation.
4.7 Supporting a Digital Economy
More and more of our lives are happening online—from socializing, to our jobs, to commerce. Recognizing the fundamental shifts underway in our society, the government introduced a new Digital Charter in 2020 that seeks to better protect the privacy, security, and personal data of Canadians, building trust and confidence in the digital economy.
To make sure that Canadian businesses can keep pace with this digital transformation and that they are part of this growth, Budget 2021 includes measures to ensure businesses and workers in every region of the country have access to fast, reliable internet. It also has measures to make sure that the digital economy is fair and well reported on.
A digital economy that serves and protects Canadians and Canadian businesses is vital for long-term growth.
Accelerating Broadband for Everyone
The COVID-19 pandemic has shifted much of our lives online and transformed how we live, work, learn, and do business. This makes it more important than ever that Canadians, including Canadian small businesses in every corner of this country, have access to fast and reliable high-speed internet. Canadians and Canadian businesses in many rural and remote communities who still do not have access to high-speed internet face a barrier to equal participation in the economy. Building on the $6.2 billion the federal government and federal agencies have made available for universal broadband since 2015:
Budget 2021 proposes to provide an additional $1 billion over six years, starting in 2021-22, to the Universal Broadband Fund to support a more rapid rollout of broadband projects in collaboration with provinces and territories and other partners. This would mean thousands more Canadians and small businesses will have faster, more reliable internet connections.
In total, including proposed Budget 2021 funding, $2.75 billion will be made available through the Universal Broadband Fund to support Canadians in rural and remote communities. Recently, the Universal Broadband Fund provided funding to ensure Quebec could launch Operation High Speed, connecting nearly 150,000 Quebecers to high-speed internet. These continuing investments will help Canada accelerate work to reach its goal of 98 per cent of the country having high-speed broadband by 2026 and 100 per cent by 2030.
Establishing a New Data Commissioner
Digital and data-driven technologies open up new markets for products and services that allow innovative Canadians to create new business opportunities—and high-value jobs. But as the digital and data economy grows, Canadians must be able to trust that their data are protected and being used responsibly.
Budget 2021 proposes to provide $17.6 million over five years, starting in 2021-22, and $3.4 million per year ongoing, to create a Data Commissioner. The Data Commissioner would inform government and business approaches to data-driven issues to help protect people’s personal data and to encourage innovation in the digital marketplace.
Budget 2021 also proposes to provide $8.4 million over five years, starting in 2021-22, and $2.3 million ongoing, to the Standards Council of Canada to continue its work to advance industry-wide data governance standards.
…
A full list of science funding highlights from the 2021 federal budget
If you don’t have the time or patience to comb through the budget for all of the science funding announcements, you can find an excellent list in an April 19, 2021 posting on Evidence for Democracy (Note: Links have been removed; h/t Science Media Centre of Canada newsletter),
Previously, we saw a landmark budget for science in 2018, which made historic investments in fundamental research totaling more than $1.7 billion. This was followed by additional commitments in 2019 that included expanded support for research trainees and access to post-secondary education. While no federal budget was tabled in 2020, there have been ongoing investments in Canadian science throughout the pandemic.
Budget 2021 attempts to balance the pressing challenges of the pandemic with a long-term view towards recovery and growth. We are pleased to see strategic investments across the Canadian science ecosystem, including targeted research funding in artificial intelligence, quantum technologies, and bioinnovation. There is also a focus on climate action, which outlines a $17.6 billion investment towards green recovery and conservation. There are also noteworthy investments in research and development partnerships, and data capacity. Beyond research, Budget 2021 includes investments in childcare, mental health, Indigenous communities, post-secondary education, and support for gender-based and Black-led initiatives.
We note that this budget does not include significant increases to the federal granting agencies, or legislation to safeguard the Office of the Chief Science Advisor.
Below, we highlight key research-related investments in Budget 2021.
Is it magic or how does the federal budget get developed?
I believe most of the priorities are set by power players behind the scenes. We glimpsed some of those dynamics courtesy of the WE Charity scandal of 2020/21 and the SNC-Lavalin scandal in 2019.
Access to special meetings and encounters is not likely to be given to any member of the ‘great unwashed’, but we do get to see the briefs that are submitted in anticipation of a new budget. These briefs, along with transcripts of meetings with witnesses, are available on the Parliament of Canada website (see the Standing Committee on Finance (FINA) webpage for pre-budget consultations).
For the 2021 federal budget, there are 792 briefs and transcripts of meetings with 52 witnesses. Whoever designed the page decided to make looking at more than one or two briefs onerous: just click on a brief that interests you and try to get back to the list.
National Quantum Strategy
There is a search function but ‘quantum’ finds only Xanadu Quantum Technologies (more about their brief in a minute) and not D-Wave Systems, which is arguably a more important player in the field. Regardless, both companies presented briefs, although the one from Xanadu was of more interest as it seems to recommend a national strategy without actually using the term (from the Xanadu Quantum Technologies budget 2021 brief),
Recommendation 1: Quantum Advisory Board
The world is at the beginning of the second Quantum Revolution, which will result in the development and deployment of revolutionary quantum technologies, based upon the scientific discoveries of the past century. Major economies of the world, including the USA, China, Japan, EU, UK and South Korea, have all identified quantum technologies as strategically important, and have adopted national strategies or frameworks. Many of them have dedicated billions of dollars of funding to quantum technology R&D and commercialization. We urge the government to create a Quantum Advisory Board or Task Force, to ensure a coherent national strategy which involves all areas of government: research, education, industry, trade, digital government, transportation, health, defence, etc.
Recommendation 2: Continue Supporting Existing Research Centres
Canada has a long history of nurturing world-class academic research in quantum science at our universities. The CFREF [Canada First Research Excellence Fund {CFREF}] program was a welcome catalyst which solidified the international stature of the quantum research programs at UBC [University of British Columbia], Waterloo [University of Waterloo; Ontario] and Sherbrooke [University of Sherbrooke; Québec]. Many of our highly qualified team members have graduated from these programs and other Canadian universities. We urge the government to continue funding these research centers past the expiration of the CFREF program, to ensure the scientific critical mass is not dissipated, and the highly sought-after talent is not pulled away to other centers around the world.
Recommendation 3: National Quantum Computing Access Centre
Our Canadian competitor, D-Wave Systems, was started in Canada nearly 20 years ago, and has yet to make significant sales or build a strong user base within Canada. At Xanadu we also find that the most ready customers for our computers are researchers in the USA, rather than in Canada, despite the strong interest from many individual professors we speak with at a number of Canadian universities. We urge the government to create a National Quantum Computing Access Centre, through Compute Canada or another similar national organization, which can centralize and coordinate the provision of quantum computing access for the Canadian academic research community. Without access to these new machines, Canadian researchers will lose their ability to innovate new algorithms and applications of this groundbreaking technology. It will be impossible to train the future workforce of quantum programmers, without access to the machines like those of D-Wave and Xanadu.
Recommendation 4: National Quantum Technology Roundtable
Traditional, resource-based Canadian industries are not historically known for their innovative adoption of new technology, and the government has created many programs to encourage digitalization of manufacturing and resource industries, and also newer, cleaner technology adoption in the energy and other heavy industries. Quantum technologies in computing, communications and sensing have the potential to make exponential improvements in many industries, including: chemicals, materials, logistics, transportation, electricity grids, transit systems, wireless networks, financial portfolio analysis and optimization, remote sensing, exploration, border security, and improved communication security. We urge the government to convene national roundtable discussions, perhaps led by the NRC, to bring together the Canadian researchers and companies developing these new technologies, along with the traditional industries and government bodies of Canada who stand to benefit from adopting them, for mutual education and information sharing, roadmapping, benchmarking and strategic planning.
Recommendation 5: New Quantum Computing Institute in Toronto
The University of Toronto is the leading research institution in Canada, and one of the top research universities in the world. Many world-class scientists in quantum physics, chemistry, computer science, and electrical engineering are currently part of the Centre for Quantum Information and Quantum Control (CQIQC) at the university [University of Toronto]. British Columbia has recently announced the creation of a new institute dedicated to the study of Quantum Algorithms, and we encourage the government to build upon the existing strengths of the quantum research programs at the CQIQC, through the funding of a new, world-class research institute, focussed on quantum computing. Such an institute will leverage not only the existing quantum expertise, but also the world-class artificial intelligence and machine learning research communities in the city. The tech industry in Toronto is also the fastest growing in North America, hiring more than San Francisco or Boston. We request the government fund the establishment of a new quantum computing institute built on Toronto’s 3 pillars of quantum research, artificial intelligence, and a thriving tech industry, to create a center of excellence with global impact.
Recommendation 6: Dedicated BDC [Business Development Bank of Canada] Quantum Venture Fund
Although there is no major international firm developing and selling quantum-based technology from Canada, a number of the world’s most promising start-ups are based here. Xanadu and our peer firms are now actively shaping our business models; refining our products and services; undertaking research and development; and developing networks of customers. To date, Canadian firms like Xanadu have been successful at raising risk capital from primarily domestic funds like BDC, OMERS, Georgian Partners and Real Ventures, without having to leave the country. In order to ensure a strong “Quantum Startup” ecosystem in Canada, we request that the BDC be mandated to establish a specialist quantum technology venture capital fund. Such a fund will help ensure the ongoing creation of a whole cluster of Canadian startups in all areas of Quantum Technology, and help to keep the technologies and talent coming from our research universities within the country.
Christian Weedbrook, Xanadu Chief Executive Officer, has taken the time to dismiss his chief competitor and managed to ignore the University of Calgary in his Canadian quantum future. (See my September 21, 2016 posting “Teleporting photons in Calgary (Canada) is a step towards a quantum internet,” where that team set a record for distance.)
The D-Wave Systems budget 2021 brief does have some overlapping interests but is largely standalone and more focused on business initiatives and on the US. Both briefs mention the Quantum Algorithms Institute (QAI), which is being established at Simon Fraser University (SFU) with an investment from the government of British Columbia (see this Oct. 2, 2019 SFU press release).
Where Weedbrook is passionately Canadian and signed the Xanadu brief himself, the D-Wave brief is impersonal and anonymous.
• Recommendation 1: That the government invest in mission-driven research — with line-of-sight to application — to mobilize genomics to drive Canada’s recovery in key sectors.
• Recommendation 2: That the government invest in a national genomics data strategy to drive data generation, analysis, standards, tools, access and usage to derive maximum value and impact from Canada’s genomics data assets.
• Recommendation 3: That the government invest in training of the next generation of genomics researchers, innovators and entrepreneurs to support the development of a genomics-enabled Canadian bioeconomy.
• Recommendation 4: That the government invest in long-term and predictable research and research infrastructure through the federal granting agencies and the Canada Foundation for Innovation to ensure a strong and vibrant knowledge base for recovery.
…
It’s not an exciting start, but if you continue, you’ll find a well-written and compelling brief.
“The federal government announced $400 million for a new Pan-Canadian Genomics Strategy, including $136.7 million for Genome Canada to kickstart the Strategy, with further investments to be announced in the future. The budget recognized the key role genomics plays in developing cutting-edge therapeutics and in helping Canada track and fight COVID-19. It recognizes that Canada is a global leader in the field and that genomics can improve Canadians’ health and wellbeing while also creating good jobs and economic growth. Leveraging and commercializing this advantage will give Canadian companies, researchers, and workers a competitive edge in this growing field.
… Today’s announcement included excellent news for Canada’s long-term sustainable economic growth in biomanufacturing and the life sciences, with a total of $2.2 billion over seven years going toward growing life sciences, building up Canada’s talent pipeline and research systems, and supporting life sciences organizations.
Genome Canada welcomes other investments that will strengthen Canada’s research, innovation and talent ecosystem and drive economic growth in sectors of the future, including:
$500 million over four years, starting in 2021-22, for the Canada Foundation for Innovation to support the bio-science capital and infrastructure needs of post-secondary institutions and research hospitals;
$250 million over four years, starting in 2021-22, for the federal research granting councils to create a new tri-council biomedical research fund;
$250 million over three years, starting in 2021-22, to the Canadian Institutes of Health Research to implement a new Clinical Trials Fund;
$92 million over four years, starting in 2021-22, for adMare to support company creation, scale up, and training activities in the life sciences sector;
$59.2 million over three years, starting in 2021-22, for the Vaccine and Infectious Disease Organization to support the development of its vaccine candidates and expand its facility in Saskatoon;
$45 million over three years, starting in 2022-23, to the Stem Cell Network to support stem cell and regenerative medicine research; and
$708 million over five years, starting in 2021-22, to Mitacs to create at least 85,000 work-integrated learning placements that provide on-the-job learning and provide businesses with support to develop talent and grow.
The visionary support announced in Budget 2021 puts Canada on competitive footing with other G7 nations that have made major investments in research and innovation to drive high-value growth sectors, while placing bio-innovation at the heart of their COVID-19 recoveries. Genome Canada looks forward to leading the new Pan-Canadian Genomics Strategy and to working with Innovation, Science and Economic Development Canada and other partners on the strategic investments announced today.
“To solve complex global problems, such as a worldwide pandemic and climate change, we need transdisciplinary approaches. The life sciences will play significant roles within such an approach. The funding announced today will be instrumental in driving Genome Canada’s mission to be Canada’s genomics platform for future pandemic preparedness, its capacity for biomanufacturing, and its bio-economy overall.”
– Dr. Rob Annan, President and CEO, Genome Canada
Canadian business, science, and innovation—oxymoron?
Navdeep Bains was Canada’s Minister of Innovation, Science and Industry (2015-January 12, 2021) and he had a few things to say as he stepped away (from an April 16, 2021 article by Kevin Carmichael for Postmedia on the SaltWire [Atlantic Canada] website),
Navdeep Bains earlier this spring [2021] spoke to me about his tenure as industry minister, which inevitably led to questions about Canada’s eroding competitiveness. He said that he thought he’d done a pretty good job of creating the conditions for a more innovative economy. But the corporate elite? Not so much.
“The ball is back in business’s court,” Bains said. “Frankly, if businesses don’t do this, I think in the long run they will struggle. They have to start changing their behaviour significantly.”
How’s that for a parting shot?
Bains wasn’t the first Canadian policy-maker to get frustrated by Corporate Canada’s aversion to risky bets on research and cutting-edge technology [emphasis mine]. But it’s been a long time since anyone in Ottawa tried to coax them to keep up with the times by dangling big sacks of cash in their faces. All they had to do was demonstrate some ambition and be willing to complement the federal government’s contribution with an investment of their own.
…
“He [Bains] was a great cheerleader,” said Mike Wessinger, chief executive of PointClickCare Technologies Inc., a Mississauga-based developer of software that helps long-term care homes manage data. “He would always proactively reach out. It was great that he cared.”
It’s easy to dismiss the importance of cheerleading. Canada’s digitally native companies were struggling to be taken seriously in Ottawa a decade ago. Former prime minister Stephen Harper pitched in with the Obama administration to save General Motors Co. and Chrysler Group LLC in 2009, but he let Nortel Networks Corp. fail. The technology industry needed a champion, and it found one in Bains.
…
Bains argued that his programs [legacy assessment] deserve more time. Industrial policy was still derided when he took over the industry department. It’s now mainstream. For now, that’s his legacy. It’s up to his former colleagues to write the final chapter.
I haven’t seen any OECD (Organisation for Economic Co-operation and Development) figures recently, but Canada’s industrial R&D (research and development) spending has been on a downward slide for several years compared with that of many ‘developed’ countries.
A few final comments
I am intrigued by the inclusion of science and technology collaboration with Israeli firms (through the Canadian International Innovation Program) in the 2021 budget. It’s the only country to be specifically identified in this budget’s science funding announcements.
In fact, I can’t recall seeing any other budget of the last 10 years or so mention a specific country as a focus for Canadian science and technology collaboration. Perhaps Israeli companies are especially focused on industrial R&D and risk-taking, and the government hopes some of that will rub off on Canadians?
For anyone who might be curious about the name difference between the new Pan-Canadian Genomics Strategy and the National Quantum Strategy, it may be due to the maturity (age) of each research field and its business efforts.
Genome Canada (a Canadian government-funded not-for-profit agency founded in 2000) and its regional centres are the outcome of some national strategizing in the 1990s. From the Genome Canada 20th anniversary webpage,
In the 1990s, the Human Genome Project captivates the world. But Canada doesn’t have a coordinated national approach. A group of determined Canadian scientists convinces the federal government to make a bold investment in genomics to ensure Canada doesn’t miss out on the benefits of this breakthrough science. Genome Canada is established on February 8, 2000.
Industry Association will accelerate the commercialization of Canada’s quantum sector – a $142.4B opportunity for Canadians.
TORONTO, Oct. 6, 2020 /CNW/ – A consortium of Canada’s leading quantum technology companies announced today that they are launching Quantum Industry Canada (QIC), an industry association with a mission to ensure that Canadian quantum innovation and talent is translated into Canadian business success and economic prosperity.
The twenty-four founding members represent Canada’s most commercial-ready quantum technologies, covering applications in quantum computing, quantum sensing, quantum communications, and quantum-safe cryptography.
…
It’s quite possible this National Quantum Strategy will result in a national not-for-profit agency and, eventually, a pan-Canadian strategy of its own. My impression is that competition in life sciences research and business is just as intense as in quantum research and business; the difference (as suggested earlier) lies in the maturity of, and the cultural differences between, the two communities.
The inclusion of a section on intellectual property in the budget could seem peculiar. I would have thought so years ago, before I learned that governments measure the success of their science and technology efforts, and compare it with that of other governments, by the number of patents filed. There are other measures, but intellectual property is very important as far as governments are concerned. My June 28, 2012 posting, “Billions lost to patent trolls; US White House asks for comments on intellectual property (IP) enforcement; and more on IP,” points to some of the shortcomings, with which we still grapple.
To finally finish this off, the Canadian Science Policy Centre has issued a 2021 budget editorial call (600-800 words).
ETA May 19, 2021: Well … here’s one more thing. If you’re interested in how basic funding for the sciences fared, check out Jim R. Woodgett’s May 8, 2021 posting on the Piece of Mind blog.
I think the French title for this call is more informative: “L’UNESCO et Mila s’associent pour lancer un appel à publications afin de mettre en lumière les faiblesses du développement de l’IA [l’intelligence artificielle].” Here’s my translation (in advance, apologies to all who find it clumsy): ‘UNESCO (United Nations Educational, Scientific and Cultural Organization) and Mila (Montreal Institute for Learning Algorithms) are issuing a joint call for papers illuminating weaknesses in the development of artificial intelligence.’
UNESCO in cooperation with Mila-Quebec Artificial Intelligence Institute [?], is launching a Call for Proposals to identify blind spots in AI Policy and Programme Development. The collective work will explore creative, novel and far-reaching approaches to tackling blind spots in AI.
All contributors are invited to answer the same question: what are the blind spots on which we must shed light in order for AI to benefit all?
Issues can address 1) blind spots in the development of AI as a technology 2) blind spots in the development of AI as a sector, and 3) blind spots in the development of public policies, global governance, and regulation for AI. There are no limits to the subjects to be addressed. These blind spots could include issues ranging from science fiction and the future of AI, creative deep fakes and the future of misinformation, AI and the future of data driven humanitarian aid, indigenous knowledge and AI, and gender-based violence and sex robots. Proposals can be in creative formats, and the call for proposals is open to individuals from all academic backgrounds and sectors. Proposals from all stakeholder groups, particularly marginalized and underrepresented groups, are encouraged, as well as proposals from authors from the global south and innovative formats (artwork, cartoons, videos, etc).
Call for proposals are open until 2 May 2021.
Selected proposals will be confirmed by 25 May.
Final proposals, if in written format, should be between 5000-7000 words and should be written in a style that is accessible to non-AI specialists and received by 1 September 2021.
To ensure inclusivity and a diversity of voices, for accepted contributions outside of academia, authors may request financial support available on a needs-based basis up to 1000 usd.
I really appreciate the breadth of the call with a range of blind spots such as “science fiction and the future of AI, creative deep fakes and the future of misinformation, AI and the future of data driven humanitarian aid, indigenous knowledge and AI, and gender-based violence and sex robots” and, presumably, anything the convenors had not considered.
As well, they haven’t confined themselves to the ‘same old, same old’ contributors, “all stakeholder groups, particularly marginalized and underrepresented groups, are encouraged, as well as proposals from authors from the global south and innovative formats (artwork, cartoons, videos, etc).”
I’m glad to see a refreshing approach being taken to a call for proposals. I wish them good luck.
The Québec connection
Mila (Montreal Institute for Learning Algorithms), UNESCO’s co-host for this call, was founded in 1993 according to its About Mila page,
Founded in 1993 by Professor Yoshua Bengio of the Université de Montréal, Mila is a research institute in artificial intelligence that rallies over 500 researchers specializing in the field of machine learning. Based in Montreal, Mila’s mission is to be a global pole for scientific advances that inspire innovation and the development of AI for the benefit of all.
Since 2017, [emphasis mine] Mila is the result of a partnership between the Université de Montréal and McGill University, closely linked with Polytechnique Montréal and HEC Montréal. Today, Mila gathers in its offices a vibrant community of professors, students, industrial partners and startups working in AI, making the institute the world’s largest academic research center in machine learning.
Mila, a non-profit organization, is internationally recognized for its significant contributions to machine learning, especially in the areas of language modelling, machine translation, object recognition and generative models.
Unmentioned, the Pan-Canadian Artificial Intelligence (AI) Strategy was created and funded by the Canadian federal government in 2017. One of the beneficiaries was Mila. (Odd how 2017 was the year Mila found so many academic partners in its home province.) From the Pan-Canadian AI strategy webpage on the Invest Canada website (Note: Links have been removed),
The artificial intelligence (AI) and machine learning revolution is well underway, and Canada is at its forefront. From top-ranked educational institutions and market-leading tech companies to world-renowned researchers, Canada’s AI ecosystems are leading global AI developments.
To continue to foster this growth and maintain its leadership position, Canada launched the $125M Pan-Canadian Artificial Intelligence Strategy in 2017—making it the first country to release a national AI strategy.
…
The Pan-Canadian AI Strategy is founded on a partnership between the Canadian Institute for Advanced Research (CIFAR) and the three centres of excellence: the Alberta Machine Intelligence Institute (AMII) in Edmonton, the Vector Institute in Toronto, and the Montreal Institute for Learning Algorithms (Mila) [all emphases mine] in Montreal. Together, they provide the support, resources, and talent for AI innovation and investment.
I don’t know where “Mila-Quebec Artificial Intelligence Institute” comes from. It’s not on their own website and I’ve never seen Mila called that anywhere other than on this UNESCO call.
Amsterdam, NL – The use of algorithms in government is transforming the way bureaucrats work and make decisions in different areas, such as healthcare or criminal justice. Experts address the transparency challenges of using algorithms in decision-making procedures at the macro-, meso-, and micro-levels in this special issue of Information Polity.
Machine-learning algorithms hold huge potential to make government services fairer and more effective and have the potential of “freeing” decision-making from human subjectivity, according to recent research. Algorithms are used in many public service contexts. For example, within the legal system it has been demonstrated that algorithms can predict recidivism better than criminal court judges. At the same time, critics highlight several dangers of algorithmic decision-making, such as racial bias and lack of transparency.
Some scholars have argued that the introduction of algorithms in decision-making procedures may cause profound shifts in the way bureaucrats make decisions and that algorithms may affect broader organizational routines and structures. This special issue on algorithm transparency presents six contributions to sharpen our conceptual and empirical understanding of the use of algorithms in government.
“There has been a surge in criticism towards the ‘black box’ of algorithmic decision-making in government,” explain Guest Editors Sarah Giest (Leiden University) and Stephan Grimmelikhuijsen (Utrecht University). “In this special issue collection, we show that it is not enough to unpack the technical details of algorithms, but also look at institutional, organizational, and individual context within which these algorithms operate to truly understand how we can achieve transparent and responsible algorithms in government. For example, regulations may enable transparency mechanisms, yet organizations create new policies on how algorithms should be used, and individual public servants create new professional repertoires. All these levels interact and affect algorithmic transparency in public organizations.”
The transparency challenges for the use of algorithms transcend different levels of government – from European level to individual public bureaucrats. These challenges can also take different forms; transparency can be enabled or limited by technical tools as well as regulatory guidelines or organizational policies. Articles in this issue address transparency challenges of algorithm use at the macro-, meso-, and micro-level. The macro level describes phenomena from an institutional perspective – which national systems, regulations and cultures play a role in algorithmic decision-making. The meso-level primarily pays attention to the organizational and team level, while the micro-level focuses on individual attributes, such as beliefs, motivation, interactions, and behaviors.
“Calls to ‘keep humans in the loop’ may be moot points if we fail to understand how algorithms impact human decision-making and how algorithmic design impacts the practical possibilities for transparency and human discretion,” notes Rik Peeters, research professor of Public Administration at the Centre for Research and Teaching in Economics (CIDE) in Mexico City. In a review of recent academic literature on the micro-level dynamics of algorithmic systems, he discusses three design variables that determine the preconditions for human transparency and discretion and identifies four main sources of variation in “human-algorithm interaction.”
The article draws two major conclusions: First, human agents are rarely fully “out of the loop,” and levels of oversight and override designed into algorithms should be understood as a continuum. The second pertains to bounded rationality, satisficing behavior, automation bias, and frontline coping mechanisms that play a crucial role in the way humans use algorithms in decision-making processes.
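For readers who like something concrete, here is a minimal sketch (my own, not drawn from Dr. Peeters’ article) of what treating oversight as a continuum rather than an on/off switch can look like in practice: a single threshold parameter decides how many algorithmic recommendations are applied automatically and how many are routed to a human reviewer. Every name and number in it is hypothetical.

```python
# Illustrative sketch only (not from the special issue): "human in the loop"
# as a continuum. The oversight threshold decides how many algorithmic
# recommendations are auto-applied versus routed to a human reviewer.
from dataclasses import dataclass

@dataclass
class Recommendation:
    case_id: str
    decision: str      # e.g. "approve" or "deny"
    confidence: float  # model's confidence in its own recommendation, 0.0-1.0

def route(rec: Recommendation, oversight_threshold: float) -> str:
    """Return who decides: the algorithm alone or a human reviewer.

    oversight_threshold = 0.0 -> fully automated (humans effectively out of the loop)
    oversight_threshold = 1.0 -> every case is reviewed by a human
    Values in between are points on the oversight continuum.
    """
    if rec.confidence >= oversight_threshold:
        return "auto-applied by algorithm"
    return "sent to human reviewer (override possible)"

if __name__ == "__main__":
    cases = [
        Recommendation("A-001", "approve", 0.97),
        Recommendation("A-002", "deny", 0.62),
        Recommendation("A-003", "approve", 0.80),
    ]
    for threshold in (0.5, 0.9):  # two points on the continuum
        print(f"\noversight threshold = {threshold}")
        for rec in cases:
            print(f"  {rec.case_id}: {route(rec, threshold)}")
```

Raising the threshold pushes more cases to humans; lowering it automates more of them, which is exactly the kind of design choice Peeters argues shapes how much discretion front-line bureaucrats retain.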
For future research Dr. Peeters suggests taking a closer look at the behavioral mechanisms in combination with identifying relevant skills of bureaucrats in dealing with algorithms. “Without a basic understanding of the algorithms that screen- and street-level bureaucrats have to work with, it is difficult to imagine how they can properly use their discretion and critically assess algorithmic procedures and outcomes. Professionals should have sufficient training to supervise the algorithms with which they are working.”
At the macro-level, algorithms can be an important tool for enabling institutional transparency, writes Alex Ingrams, PhD, Governance and Global Affairs, Institute of Public Administration, Leiden University, Leiden, The Netherlands. This study evaluates a machine-learning approach to open public comments for policymaking to increase institutional transparency of public commenting in a law-making process in the United States. The article applies an unsupervised machine learning analysis of thousands of public comments submitted to the United States Transport Security Administration on a 2013 proposed regulation for the use of new full body imaging scanners in airports. The algorithm highlights salient topic clusters in the public comments that could help policymakers understand open public comments processes. “Algorithms should not only be subject to transparency but can also be used as tool for transparency in government decision-making,” comments Dr. Ingrams.
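The press release doesn’t specify which unsupervised method Dr. Ingrams used, so the following is only a generic sketch of the technique, topic modelling of public comments with scikit-learn’s latent Dirichlet allocation; the comment strings and the choice of three topics are invented for illustration.

```python
# Minimal illustration of unsupervised topic modelling on public comments.
# This is NOT the model from the Ingrams study; it is a generic sketch using
# scikit-learn's LDA on a few invented comment strings, just to show how
# salient topic clusters can be surfaced from an open comment process.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

comments = [
    "Body scanners invade passenger privacy and store personal images",
    "Privacy concerns outweigh the security benefit of the scanners",
    "Scanners speed up screening lines and improve airport security",
    "Security at airports is improved by the new imaging technology",
    "The health effects of scanner radiation have not been studied",
    "Radiation exposure from imaging machines worries frequent flyers",
]

# Convert comments to a document-term matrix, dropping common English stop words.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(comments)

# Fit a small LDA model; the number of topics is a modelling choice.
lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(X)

# Print the top words per topic so an analyst can label the clusters
# (e.g. privacy, security, health).
terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:4]]
    print(f"Topic {topic_idx}: {', '.join(top_terms)}")
```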
“Regulatory certainty in combination with organizational and managerial capacity will drive the way the technology is developed and used and what transparency mechanisms are in place for each step,” note the Guest Editors. “On its own these are larger issues to tackle in terms of developing and passing laws or providing training and guidance for public managers and bureaucrats. The fact that they are linked further complicates this process. Highlighting these linkages is a first step towards seeing the bigger picture of why transparency mechanisms are put in place in some scenarios and not in others and opens the door to comparative analyses for future research and new insights for policymakers. To advocate the responsible and transparent use of algorithms, future research should look into the interplay between micro-, meso-, and macro-level dynamics.”
“We are proud to present this special issue, the 100th issue of Information Polity. Its focus on the governance of AI demonstrates our continued desire to tackle contemporary issues in eGovernment and the importance of showcasing excellent research and the insights offered by information polity perspectives,” add Professor Albert Meijer (Utrecht University) and Professor William Webster (University of Stirling), Editors-in-Chief.
This image illustrates the interplay between the various level dynamics,
Caption: Studying algorithms and algorithmic transparency from multiple levels of analyses. Credit: Information Polity.
Here’s a link to, and a citation for, the special issue,
The issue is open access for three months, Dec. 14, 2020 – March 14, 2021.
Two articles from the special issue were featured in the press release,
“The agency of algorithms: Understanding human-algorithm interaction in administrative decision-making,” by Rik Peeters, PhD (https://doi.org/10.3233/IP-200253)
An AI governance publication from the US’s Wilson Center
Within one week of the release of a special issue of Information Polity on AI and governments, a Wilson Center (Woodrow Wilson International Center for Scholars) December 21, 2020 news release (received via email) announces a new publication,
Governing AI: Understanding the Limits, Possibilities, and Risks of AI in an Era of Intelligent Tools and Systems by John Zysman & Mark Nitzberg
Abstract
In debates about artificial intelligence (AI), imaginations often run wild. Policy-makers, opinion leaders, and the public tend to believe that AI is already an immensely powerful universal technology, limitless in its possibilities. However, while machine learning (ML), the principal computer science tool underlying today’s AI breakthroughs, is indeed powerful, ML is fundamentally a form of context-dependent statistical inference and as such has its limits. Specifically, because ML relies on correlations between inputs and outputs or emergent clustering in training data, today’s AI systems can only be applied in well-specified problem domains, still lacking the context sensitivity of a typical toddler or house-pet. Consequently, instead of constructing policies to govern artificial general intelligence (AGI), decision-makers should focus on the distinctive and powerful problems posed by narrow AI, including misconceived benefits and the distribution of benefits, autonomous weapons, and bias in algorithms. AI governance, at least for now, is about managing those who create and deploy AI systems, and supporting the safe and beneficial application of AI to narrow, well-defined problem domains. Specific implications of our discussion are as follows:
AI applications are part of a suite of intelligent tools and systems and must ultimately be regulated as a set. Digital platforms, for example, generate the pools of big data on which AI tools operate and hence, the regulation of digital platforms and big data is part of the challenge of governing AI. Many of the platform offerings are, in fact, deployments of AI tools. Hence, focusing on AI alone distorts the governance problem.
Simply declaring objectives—be they assuring digital privacy and transparency, or avoiding bias—is not sufficient. We must decide what the goals actually will be in operational terms.
The issues and choices will differ by sector. For example, the consequences of bias and error will differ from a medical domain or a criminal justice domain to one of retail sales.
The application of AI tools in public policy decision making, in transportation design or waste disposal or policing among a whole variety of domains, requires great care. There is a substantial risk of focusing on efficiency when the public debate about what the goals should be in the first place is in fact required. Indeed, public values evolve as part of social and political conflict.
The economic implications of AI applications are easily exaggerated. Should public investment concentrate on advancing basic research or on diffusing the tools, user interfaces, and training needed to implement them?
As difficult as it will be to decide on goals and a strategy to implement the goals of one community, let alone regional or international communities, any agreement that goes beyond simple objective statements is very unlikely.
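The abstract’s central technical claim, that machine learning is context-dependent statistical inference which holds up only in well-specified problem domains, can be illustrated with a toy experiment of my own (not from the Zysman & Nitzberg paper): a classifier trained on one data distribution loses accuracy as that distribution shifts away from what it saw in training.

```python
# Toy sketch (not from the Zysman & Nitzberg paper): machine learning as
# context-dependent statistical inference. A classifier trained in one
# narrow "problem domain" loses accuracy when the data distribution shifts,
# even though nothing about the task description has changed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_domain(n, shift=0.0):
    """Two classes of 2-D points; `shift` moves the whole domain."""
    x0 = rng.normal(loc=[0 + shift, 0 + shift], scale=1.0, size=(n, 2))
    x1 = rng.normal(loc=[3 + shift, 3 + shift], scale=1.0, size=(n, 2))
    X = np.vstack([x0, x1])
    y = np.array([0] * n + [1] * n)
    return X, y

# Train inside a well-specified domain...
X_train, y_train = make_domain(500, shift=0.0)
clf = LogisticRegression().fit(X_train, y_train)

# ...then evaluate in-domain and on progressively shifted data.
for shift in (0.0, 2.0, 4.0):
    X_test, y_test = make_domain(500, shift=shift)
    acc = clf.score(X_test, y_test)
    print(f"distribution shift = {shift:.1f} -> accuracy = {acc:.2f}")
```

The correlations the model learned still “work” on the data it was trained on, and degrade as the context changes, which is the limit the authors argue narrow-AI governance should take seriously.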
However, I have found a draft version of the report (Working Paper) published August 26, 2020 on the Social Science Research Network. This paper originated at the University of California at Berkeley as part of a series from the Berkeley Roundtable on the International Economy (BRIE). ‘Governing AI: Understanding the Limits, Possibility, and Risks of AI in an Era of Intelligent Tools and Systems’ is also known as the BRIE Working Paper 2020-5.
Canadian government and AI
The special issue on AI and governance and the paper published by the Wilson Center stimulated my interest in the Canadian government’s approach to governance, responsibility, transparency, and AI.
There is information out there but it’s scattered across various government initiatives and ministries. Above all, it is not easy to find and it is not open communication. Whether that’s by design or due to the blindness and/or ineptitude found in all organizations, I leave to wiser judges. (I’ve worked in small companies and they have the same problem. In colloquial terms, ‘the right hand doesn’t know what the left hand is doing’.)
Responsible use? Maybe not after 2019
First there’s a government of Canada webpage, Responsible use of artificial intelligence (AI). Other than a note at the bottom of the page “Date modified: 2020-07-28,” all of the information dates from 2016 up to March 2019 (which you’ll find on ‘Our Timeline’). Is nothing new happening?
For anyone interested in responsible use, there are two sections, “Our guiding principles” and “Directive on Automated Decision-Making,” that answer some questions. I found the ‘Directive’ to be more informative with its definitions, objectives, and, even, consequences. Sadly, you need to keep clicking to find the consequences and you’ll end up on The Framework for the Management of Compliance. Interestingly, deputy heads are assumed to be in charge of managing non-compliance. I wonder how employees deal with a non-compliant deputy head?
What about the government’s digital service?
You might think the Canadian Digital Service (CDS) would also have some information about responsible use. CDS was launched in 2017, according to Luke Simon’s July 19, 2017 article on Medium,
In case you missed it, there was some exciting digital government news in Canada Tuesday. The Canadian Digital Service (CDS) launched, meaning Canada has joined other nations, including the US and the UK, that have a federal department dedicated to digital.
…
At the time, Simon was Director of Outreach at Code for Canada.
Presumably, CDS, from an organizational perspective, is somehow attached to the Minister of Digital Government (it’s a position with virtually no governmental infrastructure as opposed to the Minister of Innovation, Science and Economic Development who is responsible for many departments and agencies). The current minister is Joyce Murray whose government profile offers almost no information about her work on digital services. Perhaps there’s a more informative profile of the Minister of Digital Government somewhere on a government website.
Meanwhile, they are friendly folks at CDS but they don’t offer much substantive information. From the CDS homepage,
Our aim is to make services easier for government to deliver. We collaborate with people who work in government to address service delivery problems. We test with people who need government services to find design solutions that are easy to use.
At the Canadian Digital Service (CDS), we partner up with federal departments to design, test and build simple, easy to use services. Our goal is to improve the experience – for people who deliver government services and people who use those services.
How it works
We work with our partners in the open, regularly sharing progress via public platforms. This creates a culture of learning and fosters best practices. It means non-partner departments can apply our work and use our resources to develop their own services.
Together, we form a team that follows the ‘Agile software development methodology’. This means we begin with an intensive ‘Discovery’ research phase to explore user needs and possible solutions to meeting those needs. After that, we move into a prototyping ‘Alpha’ phase to find and test ways to meet user needs. Next comes the ‘Beta’ phase, where we release the solution to the public and intensively test it. Lastly, there is a ‘Live’ phase, where the service is fully released and continues to be monitored and improved upon.
Between the Beta and Live phases, our team members step back from the service, and the partner team in the department continues the maintenance and development. We can help partners recruit their service team from both internal and external sources.
Before each phase begins, CDS and the partner sign a partnership agreement which outlines the goal and outcomes for the coming phase, how we’ll get there, and a commitment to get them done.
…
As you can see, there’s not a lot of detail and they don’t seem to have included anything about artificial intelligence as part of their operation. (I’ll come back to the government’s implementation of artificial intelligence and information technology later.)
Does the Treasury Board of Canada have charge of responsible AI use?
I think so but there are government departments/ministries that also have some responsibilities for AI and I haven’t seen any links back to the Treasury Board documentation.
The Treasury Board of Canada represent a key entity within the federal government. As an important cabinet committee and central agency, they play an important role in financial and personnel administration. Even though the Treasury Board plays a significant role in government decision making, the general public tends to know little about its operation and activities. [emphasis mine] The following article provides an introduction to the Treasury Board, with a focus on its history, responsibilities, organization, and key issues.
…
It seems the Minister of Digital Government, Joyce Murray, is part of the Treasury Board, and the Treasury Board is the source for the Digital Operations Strategic Plan: 2018-2022,
I haven’t read the entire document but the table of contents doesn’t include a heading for artificial intelligence and there wasn’t any mention of it in the opening comments.
But isn’t there a Chief Information Officer for Canada?
Herein lies a tale (I doubt I’ll ever get the real story) but the answer is a qualified ‘no’. The Chief Information Officer for Canada, Alex Benay (there is an AI aspect) stepped down in September 2019 to join a startup company according to an August 6, 2019 article by Mia Hunt for Global Government Forum,
Alex Benay has announced he will step down as Canada’s chief information officer next month to “take on new challenge” at tech start-up MindBridge.
“It is with mixed emotions that I am announcing my departure from the Government of Canada,” he said on Wednesday in a statement posted on social media, describing his time as CIO as “one heck of a ride”.
He said he is proud of the work the public service has accomplished in moving the national digital agenda forward. Among these achievements, he listed the adoption of public Cloud across government; delivering the “world’s first” ethical AI management framework; [emphasis mine] renewing decades-old policies to bring them into the digital age; and “solidifying Canada’s position as a global leader in open government”.
He also led the introduction of new digital standards in the workplace, and provided “a clear path for moving off” Canada’s failed Phoenix pay system. [emphasis mine]
Since September 2019, Mr. Benay has moved again according to a November 7, 2019 article by Meagan Simpson on the BetaKit website (Note: Links have been removed),
Alex Benay, the former CIO [Chief Information Officer] of Canada, has left his role at Ottawa-based Mindbridge after a short few months stint.
The news came Thursday, when KPMG announced that Benay was joining the accounting and professional services organization as partner of digital and government solutions. Benay originally announced that he was joining Mindbridge in August, after spending almost two and a half years as the CIO for the Government of Canada.
Benay joined the AI startup as its chief client officer and, at the time, was set to officially take on the role on September 3rd. According to Benay’s LinkedIn, he joined Mindbridge in August, but if the September 3rd start date is correct, Benay would have only been at Mindbridge for around three months. The former CIO of Canada was meant to be responsible for Mindbridge’s global growth as the company looked to prepare for an IPO in 2021.
Benay told The Globe and Mail that his decision to leave Mindbridge was not a question of fit, or that he considered the move a mistake. He attributed his decision to leave to conversations with Mindbridge customer KPMG, over a period of three weeks. Benay told The Globe that he was drawn to the KPMG opportunity to lead its digital and government solutions practice, something that was more familiar to him given his previous role.
Mindbridge has not completely lost what was touted as a star hire, though, as Benay will be staying on as an advisor to the startup. “This isn’t a cutting the cord and moving on to something else completely,” Benay told The Globe. “It’s a win-win for everybody.”
…
Via Mr. Benay, I’ve re-introduced artificial intelligence and introduced the Phoenix Pay System. Now I’m linking them to the government’s implementation of information technology in a specific case and speculating about the implementation of artificial intelligence algorithms in government.
Phoenix Pay System Debacle (things are looking up), a harbinger for responsible use of artificial intelligence?
I’m happy to hear that the situation where government employees had no certainty about their paycheques is becoming better. After the ‘new’ Phoenix Pay System was implemented in early 2016, government employees found they might get the correct amount on their paycheque or might find significantly less than they were entitled to or might find huge increases.
The instability alone would be distressing but adding to it with the inability to get the problem fixed must have been devastating. Almost five years later, the problems are being resolved and people are getting paid appropriately, more often.
The estimated cost for fixing the problems was, as I recall, over $1B; I think that was a little optimistic. James Bagnall’s July 28, 2020 article for the Ottawa Citizen provides more detail, although not about the current cost, and is the source of my measured optimism,
Something odd has happened to the Phoenix Pay file of late. After four years of spitting out errors at a furious rate, the federal government’s new pay system has gone quiet.
And no, it’s not because of the even larger drama written by the coronavirus. In fact, there’s been very real progress at Public Services and Procurement Canada [PSPC; emphasis mine], the department in charge of pay operations.
Since January 2018, the peak of the madness, the backlog of all pay transactions requiring action has dropped by about half to 230,000 as of late June. Many of these involve basic queries for information about promotions, overtime and rules. The part of the backlog involving money — too little or too much pay, incorrect deductions, pay not received — has shrunk by two-thirds to 125,000.
These are still very large numbers but the underlying story here is one of long-delayed hope. The government is processing the pay of more than 330,000 employees every two weeks while simultaneously fixing large batches of past mistakes.
…
While officials with two of the largest government unions — Public Service Alliance of Canada [PSAC] and the Professional Institute of the Public Service of Canada [PIPSC] — disagree the pay system has worked out its kinks, they acknowledge it’s considerably better than it was. New pay transactions are being processed “with increased timeliness and accuracy,” the PSAC official noted.
Neither union is happy with the progress being made on historical mistakes. PIPSC president Debi Daviau told this newspaper that many of her nearly 60,000 members have been waiting for years to receive salary adjustments stemming from earlier promotions or transfers, to name two of the more prominent sources of pay errors.
…
Even so, the sharp improvement in Phoenix Pay’s performance will soon force the government to confront an interesting choice: Should it continue with plans to replace the system?
Treasury Board, the government’s employer, two years ago launched the process to do just that. Last March, SAP Canada — whose technology underpins the pay system still in use at Canada Revenue Agency — won a competition to run a pilot project. Government insiders believe SAP Canada is on track to build the full system starting sometime in 2023.
…
When Public Services set out the business case in 2009 for building Phoenix Pay, it noted the pay system would have to accommodate 150 collective agreements that contained thousands of business rules and applied to dozens of federal departments and agencies. The technical challenge has since intensified.
…
Under the original plan, Phoenix Pay was to save $70 million annually by eliminating 1,200 compensation advisors across government and centralizing a key part of the operation at the pay centre in Miramichi, N.B., where 550 would manage a more automated system.
Instead, the Phoenix Pay system currently employs about 2,300. This includes 1,600 at Miramichi and five regional pay offices, along with 350 each at a client contact centre (which deals with relatively minor pay issues) and client service bureau (which handles the more complex, longstanding pay errors). This has naturally driven up the average cost of managing each pay account — 55 per cent higher than the government’s former pay system according to last fall’s estimate by the Parliamentary Budget Officer.
… As the backlog shrinks, the need for regional pay offices and emergency staffing will diminish. Public Services is also working with a number of high-tech firms to develop ways of accurately automating employee pay using artificial intelligence [emphasis mine].
…
Given the Phoenix Pay System debacle, it might be nice to see a little information about how the government is planning to integrate more sophisticated algorithms (artificial intelligence) in their operations.
The blonde model or actress mentions that companies applying to Public Services and Procurement Canada for placement on the list must use AI responsibly. Her script does not include a definition or guidelines, which, as previously noted, are on the Treasury Board website.
Public Services and Procurement Canada (PSPC) is putting into operation the Artificial intelligence source list to facilitate the procurement of Canada’s requirements for Artificial intelligence (AI).
After research and consultation with industry, academia, and civil society, Canada identified 3 AI categories and business outcomes to inform this method of supply:
Insights and predictive modelling
Machine interactions
Cognitive automation
…
PSPC is focused only on procuring AI. If there are guidelines on their website for its use, I did not find them.
I found one more government agency that might have some information about artificial intelligence and guidelines for its use, Shared Services Canada,
Shared Services Canada (SSC) delivers digital services to Government of Canada organizations. We provide modern, secure and reliable IT services so federal organizations can deliver digital programs and services that meet Canadians’ needs.
…
Since the Minister of Digital Government, Joyce Murray, is listed on the homepage, I was hopeful that I could find out more about AI and governance and whether or not the Canadian Digital Service was associated with this government ministry/agency. I was frustrated on both counts.
To sum up, I could not find any information dated after March 2019 about Canada, its government, and plans for AI, especially responsible management/governance of AI, on a Canadian government website, although I have found guidelines, expectations, and consequences for non-compliance. (Should anyone know which government agency has up-to-date information on its responsible use of AI, please let me know in the comments.)
Canadian Institute for Advanced Research (CIFAR)
The first mention of the Pan-Canadian Artificial Intelligence Strategy is in my analysis of the Canadian federal budget in a March 24, 2017 posting. Briefly, CIFAR received a big chunk of that money. Here’s more about the strategy from the CIFAR Pan-Canadian AI Strategy homepage,
In 2017, the Government of Canada appointed CIFAR to develop and lead a $125 million Pan-Canadian Artificial Intelligence Strategy, the world’s first national AI strategy.
CIFAR works in close collaboration with Canada’s three national AI Institutes — Amii in Edmonton, Mila in Montreal, and the Vector Institute in Toronto, as well as universities, hospitals and organizations across the country.
The objectives of the strategy are to:
Attract and retain world-class AI researchers by increasing the number of outstanding AI researchers and skilled graduates in Canada.
Foster a collaborative AI ecosystem by establishing interconnected nodes of scientific excellence in Canada’s three major centres for AI: Edmonton, Montreal, and Toronto.
Advance national AI initiatives by supporting a national research community on AI through training programs, workshops, and other collaborative opportunities.
Understand the societal implications of AI by developing global thought leadership on the economic, ethical, policy, and legal implications [emphasis mine] of advances in AI.
Responsible AI at CIFAR
You can find Responsible AI in a webspace devoted to what CIFAR calls AI & Society. Here's more from the homepage,
CIFAR is leading global conversations about AI’s impact on society.
The AI & Society program, one of the objectives of the CIFAR Pan-Canadian AI Strategy, develops global thought leadership on the economic, ethical, political, and legal implications of advances in AI. These dialogues deliver new ways of thinking about issues, and drive positive change in the development and deployment of responsible AI.
Under the category of Building an AI World, I found this (from CIFAR's AI & Society homepage),
BUILDING AN AI WORLD
Explore the landscape of global AI strategies.
Canada was the first country in the world to announce a federally-funded national AI strategy, prompting many other nations to follow suit. CIFAR published two reports detailing the global landscape of AI strategies.
…
I skimmed through the second report and it seems more like a comparative study of various countries' AI strategies than an overview of responsible use of AI.
Final comments about Responsible AI in Canada and the new reports
I’m glad to see there’s interest in Responsible AI but based on my adventures searching the Canadian government websites and the Pan-Canadian AI Strategy webspace, I’m left feeling hungry for more.
I didn't find any details about how AI is being integrated into government departments and for what uses. I'd like to know, and I'd like to have some say in how it's used and how the inevitable mistakes will be dealt with.
The great unwashed
What I've found is high-minded, but, as far as I can tell, there's absolutely no interest in talking to the 'great unwashed'. Those of us who are not experts are being left out of these early-stage conversations.
I'm sure we'll be consulted at some point, but it will be long past the time when our opinions and insights could have had an impact and helped us avoid the problems that experts tend not to see. What we'll be left with is protest and anger on our part and, finally, grudging admissions and corrections of errors on the government's part.
Let's take this as an example. The Phoenix Pay System's first phase was implemented on Feb. 24, 2016, and, as I recall, problems developed almost immediately. The second phase of implementation started on April 21, 2016. In May 2016, the government hired consultants to fix the problems. On November 29, 2016, the responsible minister, Judy Foote, admitted a mistake had been made. In February 2017, the government hired consultants to establish what lessons might be learned. By February 15, 2018, the pay problems backlog amounted to 633,000. Source: James Bagnall's Feb. 23, 2018 'timeline' for the Ottawa Citizen
Do take a look at the timeline; there's more to it than what I've written here, and I'm sure there's more to the Phoenix Pay System debacle than a failure to listen to warnings from those who would be directly affected. It's fascinating, though, how often a failure to listen presages far deeper problems with a project.
The Canadian government, under both Conservative and Liberal leadership, contributed to the Phoenix debacle, but it seems the gravest concern is with senior government bureaucrats. You might think things have changed since this recounting of the affair in a June 14, 2018 article by Michelle Zilio for the Globe and Mail,
The three public servants blamed by the Auditor-General for the Phoenix pay system problems were not fired for mismanagement of the massive technology project that botched the pay of tens of thousands of public servants for more than two years.
Marie Lemay, deputy minister for Public Services and Procurement Canada (PSPC), said two of the three Phoenix executives were shuffled out of their senior posts in pay administration and did not receive performance bonuses for their handling of the system. Those two employees still work for the department, she said. Ms. Lemay, who refused to identify the individuals, said the third Phoenix executive retired.
In a scathing report last month, Auditor-General Michael Ferguson blamed three “executives” – senior public servants at PSPC, which is responsible for Phoenix – for the pay system’s “incomprehensible failure.” [emphasis mine] He said the executives did not tell the then-deputy minister about the known problems with Phoenix, leading the department to launch the pay system despite clear warnings it was not ready.
Speaking to a parliamentary committee on Thursday, Ms. Lemay said the individuals did not act with “ill intent,” noting that the development and implementation of the Phoenix project were flawed. She encouraged critics to look at the “bigger picture” to learn from all of Phoenix’s failures.
…
Mr. Ferguson, whose office spoke with the three Phoenix executives as a part of its reporting, said the officials prioritized some aspects of the pay-system rollout, such as schedule and budget, over functionality. He said they also cancelled a pilot implementation project with one department that would have helped it detect problems indicating the system was not ready.
…
Mr. Ferguson’s report warned the Phoenix problems are indicative of “pervasive cultural problems” [emphasis mine] in the civil service, which he said is fearful of making mistakes, taking risks and conveying “hard truths.”
…
Speaking to the same parliamentary committee on Tuesday, Privy Council Clerk [emphasis mine] Michael Wernick challenged Mr. Ferguson’s assertions, saying his chapter on the federal government’s cultural issues is an “opinion piece” containing “sweeping generalizations.”
…
The Privy Council Clerk is the top-level bureaucrat (and there is only one such clerk) in the civil/public service, and I think his comments are quite telling with respect to those “pervasive cultural problems.” There's a new Privy Council Clerk, but from what I can tell, he was well trained by his predecessor.
Do we really need senior government bureaucrats?
I now have an example of bureaucratic interference, specifically with the Global Public Health Intelligence Network (GPHIN), where it would seem that not much has changed. It comes from a December 26, 2020 article by Grant Robertson for the Globe & Mail,
When Canada unplugged support for its pandemic alert system [GPHIN] last year, it was a symptom of bigger problems inside the Public Health Agency. Experienced scientists were pushed aside, expertise was eroded, and internal warnings went unheeded, which hindered the department’s response to COVID-19
As a global pandemic began to take root in February, China held a series of backchannel conversations with Canada, lobbying the federal government to keep its borders open.
With the virus already taking a deadly toll in Asia, Heng Xiaojun, the Minister Counsellor for the Chinese embassy, requested a call with senior Transport Canada officials. Over the course of the conversation, the Chinese representatives communicated Beijing’s desire that flights between the two countries not be stopped because it was unnecessary.
“The Chinese position on the continuation of flights was reiterated,” say official notes taken from the call. “Mr. Heng conveyed that China is taking comprehensive measures to combat the coronavirus.”
Canadian officials seemed to agree, since no steps were taken to restrict or prohibit travel. To the federal government, China appeared to have the situation under control and the risk to Canada was low. Before ending the call, Mr. Heng thanked Ottawa for its “science and fact-based approach.”
…
It was a critical moment in the looming pandemic, but the Canadian government lacked the full picture, instead relying heavily on what Beijing was choosing to disclose to the World Health Organization (WHO). Ottawa’s ability to independently know what was going on in China – on the ground and inside hospitals – had been greatly diminished in recent years.
Canada once operated a robust pandemic early warning system and employed a public-health doctor based in China who could report back on emerging problems. But it had largely abandoned those international strategies over the past five years, and was no longer as plugged-in.
By late February [2020], Ottawa seemed to be taking the official reports from China at their word, stating often in its own internal risk assessments that the threat to Canada remained low. But inside the Public Health Agency of Canada (PHAC), rank-and-file doctors and epidemiologists were growing increasingly alarmed at how the department and the government were responding.
“The team was outraged,” one public-health scientist told a colleague in early April, in an internal e-mail obtained by The Globe and Mail, criticizing the lack of urgency shown by Canada’s response during January, February and early March. “We knew this was going to be around for a long time, and it’s serious.”
China had locked down cities and restricted travel within its borders. Staff inside the Public Health Agency believed Beijing wasn’t disclosing the whole truth about the danger of the virus and how easily it was transmitted. “The agency was just too slow to respond,” the scientist said. “A sane person would know China was lying.”
It would later be revealed that China’s infection and mortality rates were played down in official records, along with key details about how the virus was spreading.
But the Public Health Agency, which was created after the 2003 SARS crisis to bolster the country against emerging disease threats, had been stripped of much of its capacity to gather outbreak intelligence and provide advance warning by the time the pandemic hit.
The Global Public Health Intelligence Network, an early warning system known as GPHIN that was once considered a cornerstone of Canada’s preparedness strategy, had been scaled back over the past several years, with resources shifted into projects that didn’t involve outbreak surveillance.
However, a series of documents obtained by The Globe during the past four months, from inside the department and through numerous Access to Information requests, show the problems that weakened Canada’s pandemic readiness run deeper than originally thought. Pleas from the international health community for Canada to take outbreak detection and surveillance much more seriously were ignored by mid-level managers [emphasis mine] inside the department. A new federal pandemic preparedness plan – key to gauging the country’s readiness for an emergency – was never fully tested. And on the global stage, the agency stopped sending experts [emphasis mine] to international meetings on pandemic preparedness, instead choosing senior civil servants with little or no public-health background [emphasis mine] to represent Canada at high-level talks, The Globe found.
…
The curtailing of GPHIN and allegations that scientists had become marginalized within the Public Health Agency, detailed in a Globe investigation this past July [2020], are now the subject of two federal probes – an examination by the Auditor-General of Canada and an independent federal review, ordered by the Minister of Health.
Those processes will undoubtedly reshape GPHIN and may well lead to an overhaul of how the agency functions in some areas. The first steps will be identifying and fixing what went wrong. With the country now topping 535,000 cases of COVID-19 and more than 14,700 dead, there will be lessons learned from the pandemic.
Prime Minister Justin Trudeau has said he is unsure what role added intelligence [emphasis mine] could have played in the government’s pandemic response, though he regrets not bolstering Canada’s critical supplies of personal protective equipment sooner. But providing the intelligence to make those decisions early is exactly what GPHIN was created to do – and did in previous outbreaks.
Epidemiologists have described in detail to The Globe how vital it is to move quickly and decisively in a pandemic. Acting sooner, even by a few days or weeks in the early going, and throughout, can have an exponential impact on an outbreak, including deaths. Countries such as South Korea, Australia and New Zealand, which have fared much better than Canada, appear to have acted faster in key tactical areas, some using early warning information they gathered. As Canada prepares itself in the wake of COVID-19 for the next major health threat, building back a better system becomes paramount.
…
If you have time, do take a look at Robertson's December 26, 2020 article and the July 2020 Globe investigation. As both articles make clear, senior bureaucrats whose chief attribute seems to have been longevity took over, reallocated resources, drove out experts, and crippled the few remaining experts in the system with a series of bureaucratic demands, all while taking trips to attend meetings (in desirable locations) to which they had nothing significant or useful to contribute.
The Phoenix and GPHIN debacles bear a resemblance in that senior bureaucrats took over and, in a state of blissful ignorance, made a series of disastrous decisions, bolstered by politicians who seem neither to understand nor to care much about the outcomes.
If you think I'm being harsh, watch Canadian Broadcasting Corporation (CBC) reporter Rosemary Barton's 2020 year-end interview with Prime Minister Trudeau (note: there are some commercials) and pay special attention to Trudeau's answer to the first question,
Responsible AI, eh?
Based on the massive mishandling of the Phoenix Pay System implementation, where top bureaucrats did not follow basic, well-established information services procedures, and on the mismanagement of the Global Public Health Intelligence Network by top-level bureaucrats, I'm not sure I have a lot of confidence in any Canadian government claims about a responsible approach to using artificial intelligence.
Unfortunately, it doesn't much matter, as AI implementation is most likely already taking place here in Canada.
Enough with the pessimism. I feel it's necessary to end this on a mildly positive note. Hurray for the government employees who worked through the Phoenix Pay System debacle, the current and former GPHIN experts who continued to sound warnings, and all those people striving to uphold 'Peace, Order, and Good Government', the bedrock principles of the Canadian Parliament.
A lot of mistakes have been made, but we also make a lot of good decisions.