
Future of Being Human: a call for proposals

The Canadian Institute for Advanced Research (CIFAR) is investigating the ‘Future of Being Human’ and has issued a global call for proposals, but there is one catch: your team has to include at least one person (with or without Canadian citizenship) who is living and working in Canada. (Note: I am available.)

Here’s more about the call (from the CIFAR Global Call for Ideas: The Future of Being Human webpage),

New program proposals should explore the long term intersection of humans, science and technology, social and cultural systems, and our environment. Our understanding of the world around us, and new insights into individual and societal behaviour, have the potential to provide enormous benefits to humanity and the planet. 

We invite bold proposals from researchers at universities or research institutions that ask new questions about our complex emerging world. We are confronting challenging problems that require a diverse team incorporating multiple disciplines (potentially spanning the humanities, social sciences, arts, physical sciences, and life sciences [emphasis mine]) to engage in a sustained dialogue to develop new insights, and change the conversation on important questions facing science and humanity.

CIFAR is committed to creating a more diverse, equitable, and inclusive environment. We welcome proposals that include individuals from countries and institutions that are not yet represented in our research community.

Here’s a description, albeit a little repetitive, of what CIFAR is asking researchers to do (from the Program Guide [PDF]),

For CIFAR’s next Global Call for Ideas, we are soliciting proposals related to The Future of Being Human, exploring in the long term the intersection of humans, science and technology, social and cultural systems, and our environment. Our understanding of the natural world around us, and new insights into individual and societal behaviour, have the potential to provide enormous benefits to humanity and the planet. We invite bold proposals that ask new questions about our complex emerging world, where the issues under study are entangled and dynamic. We are confronting challenging problems that necessitate a diverse team incorporating multiple disciplines (potentially spanning the humanities, social sciences, arts, physical sciences, and life sciences) to engage in a sustained dialogue to develop new insights, and change the conversation on important questions facing science and humanity. [p. 2 print; p. 4 PDF]

There is an upcoming information webinar (from the CIFAR Global Call for Ideas: The Future of Being Human webpage),

Monday, June 28, 2021 – 1:00pm – 1:45pm EDT

Webinar Sign-Up

Also from the CIFAR Global Call for Ideas: The Future of Being Human webpage, here are the various deadlines and additional sources of information,

August 17, 2021

Registration deadline

January 26, 2022

LOI [Letter of Intent] deadline

Spring 2022

LOIs invited to Full Proposal

Fall 2022

Full proposals due

March 2023

New program announcement and celebration

Resources

Program Guide [PDF]

Frequently Asked Questions

Good luck!

Council of Canadian Academies and its expert panel for the AI for Science and Engineering project

There seems to be an explosion (metaphorically and only by Canadian standards) of interest in public perceptions/engagement/awareness of artificial intelligence. See my March 29, 2021 posting “Canada launches its AI dialogues” (those dialogues run until April 30, 2021), as well as my April 6, 2021 posting “UNESCO’s Call for Proposals to highlight blind spots in AI Development open ’til May 2, 2021,” about a call launched in cooperation with Mila-Québec Artificial Intelligence Institute.

Now there’s this: four new projects were announced in a March 31, 2021 Council of Canadian Academies (CCA) news release. (Admittedly, these are not ‘public engagement’ exercises as such, but the reports are publicly available and used by policymakers.) These are the two projects of most interest to me,

Public Safety in the Digital Age

Information and communications technologies have profoundly changed almost every aspect of life and business in the last two decades. While the digital revolution has brought about many positive changes, it has also created opportunities for criminal organizations and malicious actors to target individuals, businesses, and systems.

This assessment will examine promising practices that could help to address threats to public safety related to the use of digital technologies while respecting human rights and privacy.

Sponsor: Public Safety Canada

AI for Science and Engineering

The use of artificial intelligence (AI) and machine learning in science and engineering has the potential to radically transform the nature of scientific inquiry and discovery and produce a wide range of social and economic benefits for Canadians. But, the adoption of these technologies also presents a number of potential challenges and risks.

This assessment will examine the legal/regulatory, ethical, policy and social challenges related to the use of AI technologies in scientific research and discovery.

Sponsor: National Research Council Canada [NRC] (co-sponsors: CIFAR [Canadian Institute for Advanced Research], CIHR [Canadian Institutes of Health Research], NSERC [Natural Sciences and Engineering Research Council], and SSHRC [Social Sciences and Humanities Research Council])

For today’s posting the focus will be on the AI project, specifically, the April 19, 2021 CCA news release announcing the project’s expert panel,

The Council of Canadian Academies (CCA) has formed an Expert Panel to examine a broad range of factors related to the use of artificial intelligence (AI) technologies in scientific research and discovery in Canada. Teresa Scassa, SJD, Canada Research Chair in Information Law and Policy at the University of Ottawa, will serve as Chair of the Panel.  

“AI and machine learning may drastically change the fields of science and engineering by accelerating research and discovery,” said Dr. Scassa. “But these technologies also present challenges and risks. A better understanding of the implications of the use of AI in scientific research will help to inform decision-making in this area and I look forward to undertaking this assessment with my colleagues.”

As Chair, Dr. Scassa will lead a multidisciplinary group with extensive expertise in law, policy, ethics, philosophy, sociology, and AI technology. The Panel will answer the following question:

What are the legal/regulatory, ethical, policy and social challenges associated with deploying AI technologies to enable scientific/engineering research design and discovery in Canada?

“We’re delighted that Dr. Scassa, with her extensive experience in AI, the law and data governance, has taken on the role of Chair,” said Eric M. Meslin, PhD, FRSC, FCAHS, President and CEO of the CCA. “I anticipate the work of this outstanding panel will inform policy decisions about the development, regulation and adoption of AI technologies in scientific research, to the benefit of Canada.”

The CCA was asked by the National Research Council of Canada (NRC), along with co-sponsors CIFAR, CIHR, NSERC, and SSHRC, to address the question. More information can be found here.

The Expert Panel on AI for Science and Engineering:

Teresa Scassa (Chair), SJD, Canada Research Chair in Information Law and Policy, University of Ottawa, Faculty of Law (Ottawa, ON)

Julien Billot, CEO, Scale AI (Montreal, QC)

Wendy Hui Kyong Chun, Canada 150 Research Chair in New Media and Professor of Communication, Simon Fraser University (Burnaby, BC)

Marc Antoine Dilhac, Professor (Philosophy), University of Montreal; Director of Ethics and Politics, Centre for Ethics (Montréal, QC)

B. Courtney Doagoo, AI and Society Fellow, Centre for Law, Technology and Society, University of Ottawa; Senior Manager, Risk Consulting Practice, KPMG Canada (Ottawa, ON)

Abhishek Gupta, Founder and Principal Researcher, Montreal AI Ethics Institute (Montréal, QC)

Richard Isnor, Associate Vice President, Research and Graduate Studies, St. Francis Xavier University (Antigonish, NS)

Ross D. King, Professor, Chalmers University of Technology (Göteborg, Sweden)

Sabina Leonelli, Professor of Philosophy and History of Science, University of Exeter (Exeter, United Kingdom)

Raymond J. Spiteri, Professor, Department of Computer Science, University of Saskatchewan (Saskatoon, SK)

Who is on the expert panel?

Putting together a Canadian panel is an interesting problem, especially when you’re trying to find people with expertise who can also represent various viewpoints, both professional and regional. Then, there are gender, racial, linguistic, urban/rural, and ethnic considerations.

Statistics

Eight of the panelists could be said to be representing various regions of Canada. Five of those eight panelists are based in central Canada, specifically, Ontario (Ottawa) or Québec (Montréal). The sixth panelist is based in Atlantic Canada (Nova Scotia), the seventh panelist is based in the Prairies (Saskatchewan), and the eighth panelist is based in western Canada (Vancouver, British Columbia).

The two panelists bringing an international perspective to this project are both based in Europe, specifically, Sweden and the UK.

(sigh) It would be good to have representation from another part of the world. Asia springs to mind as researchers in that region are very advanced in their AI research and applications meaning that their experts and ethicists are likely to have valuable insights.

Four of the ten panelists are women, which is closer to equal representation than some of the other CCA panels I’ve looked at.

As for Indigenous and BIPOC representation, unless one or more of the panelists chooses to self-identify in that fashion, I cannot make any comments. It should be noted that more than one expert panelist focuses on social justice and/or bias in algorithms.

Network of relationships

As you can see, the CCA descriptions of the individual members of the expert panel are a little brief. So, I did a little digging and, in my searches, noticed what seems to be a pattern of relationships among some of these experts. In particular, take note of the Canadian Institute for Advanced Research (CIFAR) and the AI Advisory Council of the Government of Canada.

Individual panelists

Teresa Scassa (Ontario), whose SJD designation signifies a research doctorate in law, chairs this panel. Offhand, of the 10 or so panels I’ve reviewed, I can recall only one or two others being chaired by women. In addition to her profile page at the University of Ottawa, she hosts her own blog featuring posts such as “How Might Bill C-11 Affect the Outcome of a Clearview AI-type Complaint?” She writes clearly (I didn’t see any jargon) for an audience that is somewhat informed on the topic.

Along with Dilhac, Teresa Scassa is a member of the AI Advisory Council of the Government of Canada. More about that group when you read Dilhac’s description.

Julien Billot (Québec) has provided a profile on LinkedIn and you can augment your view of M. Billot with this profile from the CreativeDestructionLab (CDL),

Mr. Billot is a member of the faculty at HEC Montréal [graduate business school of the Université de Montréal] as an adjunct professor of management and the lead for the CreativeDestructionLab (CDL) and NextAi program in Montreal.

Julien Billot has been President and Chief Executive Officer of Yellow Pages Group Corporation (Y.TO) in Montreal, Quebec. Previously, he was Executive Vice President, Head of Media and Member of the Executive Committee of Solocal Group (formerly PagesJaunes Groupe), the publicly traded and incumbent local search business in France. Earlier experience includes serving as CEO of the digital and new business group of Lagardère Active, a multimedia branch of Lagardère Group and 13 years in senior management positions at France Telecom, notably as Chief Marketing Officer for Orange, the company’s mobile subsidiary.

Mr. Billot is a graduate of École Polytechnique (Paris) and from Telecom Paris Tech. He holds a postgraduate diploma (DEA) in Industrial Economics from the University of Paris-Dauphine.

Wendy Hui Kyong Chun (British Columbia) has a profile on the Simon Fraser University (SFU) website, which provided one of the more interesting (to me personally) biographies,

Wendy Hui Kyong Chun is the Canada 150 Research Chair in New Media at Simon Fraser University, and leads the Digital Democracies Institute which was launched in 2019. The Institute aims to integrate research in the humanities and data sciences to address questions of equality and social justice in order to combat the proliferation of online “echo chambers,” abusive language, discriminatory algorithms and mis/disinformation by fostering critical and creative user practices and alternative paradigms for connection. It has four distinct research streams all led by Dr. Chun: Beyond Verification which looks at authenticity and the spread of disinformation; From Hate to Agonism, focusing on fostering democratic exchange online; Desegregating Network Neighbourhoods, combatting homophily across platforms; and Discriminating Data: Neighbourhoods, Individuals and Proxies, investigating the centrality of race, gender, class and sexuality [emphasis mine] to big data and network analytics.

I’m glad to see someone who has focused on ” … the centrality of race, gender, class and sexuality to big data and network analytics.” Even more interesting to me was this from her CV (curriculum vitae),

Professor, Department of Modern Culture and Media, Brown University, July 2010-June 2018

• Affiliated Faculty, Multimedia & Electronic Music Experiments (MEME), Department of Music, 2017.

• Affiliated Faculty, History of Art and Architecture, March 2012-

• Graduate Field Faculty, Theatre Arts and Performance Studies, Sept 2008- [sic]

….

[all emphases mine]

And these are some of her credentials,

Ph.D., English, Princeton University, 1999.

• Certificate, School of Criticism and Theory, Dartmouth College, Summer 1995.

M.A., English, Princeton University, 1994.

B.A.Sc., Systems Design Engineering and English, University of Waterloo, Canada, 1992.

• First class honours and a Senate Commendation for Excellence for being the first student to graduate from the School of Engineering with a double major

It’s about time the CCA started integrating some kind of arts perspective into its projects. (Although, I can’t help wondering if this happened by accident rather than by design.)

Marc Antoine Dilhac, an associate professor at l’Université de Montréal, like Billot, graduated from a French university, in his case, the Sorbonne. Here’s more from Dilhac’s profile on the Mila website,

Marc-Antoine Dilhac (Ph.D., Paris 1 Panthéon-Sorbonne) is a professor of ethics and political philosophy at the Université de Montréal and an associate member of Mila – Quebec Artificial Intelligence Institute. He currently holds a CIFAR [Canadian Institute for Advanced Research] Chair in AI ethics (2019-2024), and was previously Canada Research Chair in Public Ethics and Political Theory 2014-2019. He specialized in theories of democracy and social justice, as well as in questions of applied ethics. He published two books on the politics of toleration and inclusion (2013, 2014). His current research focuses on the ethical and social impacts of AI and issues of governance and institutional design, with a particular emphasis on how new technologies are changing public relations and political structures.

In 2017, he instigated the project of the Montreal Declaration for a Responsible Development of AI and chaired its scientific committee. In 2020, as director of Algora Lab, he led an international deliberation process as part of UNESCO’s consultation on its recommendation on the ethics of AI.

In 2019, he founded Algora Lab, an interdisciplinary laboratory advancing research on the ethics of AI and developing a deliberative approach to the governance of AI and digital technologies. He is co-director of Deliberation at the Observatory on the social impacts of AI and digital technologies (OBVIA), and contributes to the OECD Policy Observatory (OECD.AI) as a member of its expert network ONE.AI.

He sits on the AI Advisory Council of the Government of Canada and co-chair its Working Group on Public Awareness.

Formerly known simply as Mila, the Mila – Quebec Artificial Intelligence Institute is a beneficiary of the Pan-Canadian Artificial Intelligence Strategy, created in the 2017 Canadian federal budget. The strategy named CIFAR as the hub that would distribute funds for artificial intelligence research to (mainly) three agencies: Mila in Montréal, the Vector Institute in Toronto, and the Alberta Machine Intelligence Institute (AMII) in Edmonton.

Consequently, Dilhac’s involvement with CIFAR (one of the co-sponsors of this future CCA report) is not unexpected, but when you add his presence on the AI Advisory Council of the Government of Canada and his role as co-chair of its Working Group on Public Awareness, you get a sense of just how small the Canadian AI ethics and public awareness community is.

Add in CIFAR’s Open Dialogue: AI in Canada series (ongoing until April 30, 2021), held in partnership with the AI Advisory Council of the Government of Canada (see my March 29, 2021 posting for more details about the dialogues) amongst other familiar parties, and you see a web of relations so tightly interwoven that, if you could produce masks from it, you’d have better COVID-19 protection than N95 masks offer.

These kinds of connections are understandable and I have more to say about them in my final comments.

B. Courtney Doagoo has a profile page at the University of Ottawa, which fills in a few information gaps,

As a Fellow, Dr. Doagoo develops her research on the social, economic and cultural implications of AI with a particular focus on the role of laws, norms and policies [emphasis mine]. She also notably advises Dr. Florian Martin-Bariteau, CLTS Director, in the development of a new research initiative on those topical issues, and Dr. Jason Millar in the development of the Canadian Robotics and Artificial Intelligence Ethical Design Lab (CRAiEDL).

Dr. Doagoo completed her Ph.D. in Law at the University of Ottawa in 2017. In her interdisciplinary research, she used empirical methods to learn about and describe the use of intellectual property law and norms in creative communities. Following her doctoral research, she joined the World Intellectual Property Organization’s Coordination Office in New York as a legal intern and contributed to developing the joint initiative on gender and innovation in collaboration with UNESCO and UN Women. She later joined the International Law Research Program at the Centre for International Governance Innovation as a Post-Doctoral Fellow, where she conducted research in technology and law focusing on intellectual property law, artificial intelligence and data governance.

Dr. Doagoo completed her LL.L. at the University of Ottawa, and LL.M. in Intellectual Property Law at the Benjamin N. Cardozo School of Law [a law school at Yeshiva University in New York City].  In between her academic pursuits, Dr. Doagoo has been involved with different technology start-ups, including the one she is currently leading aimed at facilitating access to legal services. She’s also an avid lover of the arts and designed a course on Arts and Cultural Heritage Law taught during her doctoral studies at the University of Ottawa, Faculty of Law.

It’s probably because I don’t know enough, but this phrase, “the role of laws, norms and policies,” seems bland to the point of meaninglessness. The rest is more informative and brings things back to the arts, with Wendy Hui Kyong Chun at SFU.

Doagoo’s LinkedIn profile offers an unexpected link to this expert panel’s chairperson, Teresa Scassa (in addition to both being lawyers with specialties in related fields who are on faculty, or fellows, at the University of Ottawa),

Soft-funded Research Bursary

Dr. Teresa Scassa

2014

I’m not suggesting any conspiracies; it’s simply that this is a very small community with much of it located in central and eastern Canada and possible links into the US. For example, Wendy Hui Kyong Chun, prior to her SFU appointment in December 2018, worked and studied in the eastern US for over 25 years after starting her academic career at the University of Waterloo (Ontario).

Abhishek Gupta provided me with a challenging search. His LinkedIn profile yielded some details (I’m not convinced the man sleeps). Note: I have made some formatting changes and removed the location, ‘Montréal area,’ from some descriptions,

Experience

Software Engineer II – Machine Learning
Microsoft

Jul 2018 – Present – 2 years 10 months

Machine Learning – Commercial Software Engineering team

Serves on the CSE Responsible AI Board

Founder and Principal Researcher
Montreal AI Ethics Institute

May 2018 – Present – 3 years

Institute creating tangible and practical research in the ethical, safe and inclusive development of AI. For more information, please visit https://montrealethics.ai

Visiting AI Ethics Researcher, Future of Work, International Visitor Leadership Program
U.S. Department of State

Aug 2019 – Present – 1 year 9 months

Selected to represent Canada on the future of work

Responsible AI Lead, Data Advisory Council
Northwest Commission on Colleges and Universities

Jun 2020 – Present – 11 months

Faculty Associate, Frankfurt Big Data Lab
Goethe University

Mar 2020 – Present – 1 year 2 months

Advisor for the Z-inspection project

Associate Member
LF AI Foundation

May 2020 – Present – 1 year

Author
MIT Technology Review

Sep 2020 – Present – 8 months

Founding Editorial Board Member, AI and Ethics Journal
Springer Nature

Jul 2020 – Present – 10 months

Education

McGill University, Bachelor of Science (BS), Computer Science

2012 – 2015

Exhausting, eh? He also has an eponymous website, and the Montreal AI Ethics Institute can be found here, where Gupta and his colleagues are “Democratizing AI ethics literacy.” My hat’s off to Gupta; getting on a CCA expert panel is quite an achievement for someone without the usual academic and/or industry trappings.

Richard Isnor, based in Nova Scotia and associate vice president of research & graduate studies at St. Francis Xavier University (StFX), seems to have some connection to northern Canada (see the reference to Nunavut Research Institute below); he’s certainly well connected to various federal government agencies according to his profile page,

Prior to joining StFX, he was Manager of the Atlantic Regional Office for the Natural Sciences and Engineering Research Council of Canada (NSERC), based in Moncton, NB.  Previously, he was Director of Innovation Policy and Science at the International Development Research Centre in Ottawa and also worked for three years with the National Research Council of Canada [NRC] managing Biotechnology Research Initiatives and the NRC Genomics and Health Initiative.

Richard holds a D. Phil. in Science and Technology Policy Studies from the University of Sussex, UK; a Master’s in Environmental Studies from Dalhousie University [Nova Scotia]; and a B. Sc. (Hons) in Biochemistry from Mount Allison University [New Brunswick].  His primary interest is in science policy and the public administration of research; he has worked in science and technology policy or research administrative positions for Environment Canada, Natural Resources Canada, the Privy Council Office, as well as the Nunavut Research Institute. [emphasis mine]

I don’t know what Dr. Isnor’s work is like but I’m hopeful he (along with Spiteri) will be able to provide a less ‘big city’ perspective to the proceedings.

(For those unfamiliar with Canadian cities: Montreal [three expert panelists] is the second largest city in the country, Ottawa [two expert panelists] as the capital has an outsize view of itself, and Vancouver [one expert panelist] is the third or fourth largest city in the country, for a total of six big-city representatives out of eight Canadian expert panelists.)

Ross D. King, professor of machine intelligence at Sweden’s Chalmers University of Technology, might be best known for Adam, also known as, Robot Scientist. Here’s more about King, from his Wikipedia entry (Note: Links have been removed),

King completed a Bachelor of Science degree in Microbiology at the University of Aberdeen in 1983 and went on to study for a Master of Science degree in Computer Science at the University of Newcastle in 1985. Following this, he completed a PhD at The Turing Institute [emphasis mine] at the University of Strathclyde in 1989[3] for work on developing machine learning methods for protein structure prediction.[7]

King’s research interests are in the automation of science, drug design, AI, machine learning and synthetic biology.[8][9] He is probably best known for the Robot Scientist[4][10][11][12][13][14][15][16][17] project which has created a robot that can:

hypothesize to explain observations

devise experiments to test these hypotheses

physically run the experiments using laboratory robotics

interpret the results from the experiments

repeat the cycle as required
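The five-step loop above is essentially a closed feedback loop, and it can be sketched in a few lines of code. The sketch below is entirely my own toy illustration (the function names and the stand-in for ‘nature’ are made up, not taken from King’s actual Robot Scientist):

```python
def robot_scientist(hypotheses, run_experiment, inputs):
    """Toy closed-loop discovery: test candidate hypotheses against
    observations until only the consistent ones survive."""
    surviving = dict(hypotheses)
    for x in inputs:
        if len(surviving) <= 1:
            break  # one hypothesis explains all observations so far
        observation = run_experiment(x)  # "physically" run the experiment
        # Interpret the result: keep only hypotheses consistent with it
        surviving = {name: h for name, h in surviving.items()
                     if h(x) == observation}
    return surviving

# Example: 'nature' doubles its input; three candidate hypotheses compete.
candidates = {
    "x + 2": lambda x: x + 2,
    "2x": lambda x: 2 * x,
    "x squared": lambda x: x * x,
}
result = robot_scientist(candidates, run_experiment=lambda x: 2 * x,
                         inputs=range(1, 6))
print(list(result))  # the hypotheses matching every observation so far
```

The real system, of course, devises informative experiments rather than sweeping a fixed input list, and runs them with laboratory robotics; this sketch captures only the hypothesize-test-prune cycle.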

The Robot Scientist Wikipedia entry has this to add,

… a laboratory robot created and developed by a group of scientists including Ross King, Kenneth Whelan, Ffion Jones, Philip Reiser, Christopher Bryant, Stephen Muggleton, Douglas Kell and Steve Oliver.[2][6][7][8][9][10]

… Adam became the first machine in history to have discovered new scientific knowledge independently of its human creators.[5][17][18]

Sabina Leonelli, professor of philosophy and history of science at the University of Exeter, is the only panelist for whom I found a Twitter feed (@SabinaLeonelli). Here’s a bit more from her Wikipedia entry (Note: Links have been removed),

Originally from Italy, Leonelli moved to the UK for a BSc degree in History, Philosophy and Social Studies of Science at University College London and a MSc degree in History and Philosophy of Science at the London School of Economics. Her doctoral research was carried out in the Netherlands at the Vrije Universiteit Amsterdam with Henk W. de Regt and Hans Radder. Before joining the Exeter faculty, she was a research officer under Mary S. Morgan at the Department of Economic History of the London School of Economics.

Leonelli is the Co-Director of the Exeter Centre for the Study of the Life Sciences (Egenis)[3] and a Turing Fellow at the Alan Turing Institute [emphases mine] in London.[4] She is also Editor-in-Chief of the international journal History and Philosophy of the Life Sciences[5] and Associate Editor for the Harvard Data Science Review.[6] She serves as External Faculty for the Konrad Lorenz Institute for Evolution and Cognition Research.[7]

Notice that Ross King and Sabina Leonelli both have links to The Alan Turing Institute (“We believe data science and artificial intelligence will change the world”), although the institute’s link to the University of Strathclyde (Scotland) where King studied seems a bit tenuous.

Do check out Leonelli’s profile at the University of Exeter as it’s comprehensive.

Raymond J. Spiteri, professor and director of the Centre for High Performance Computing, Department of Computer Science at the University of Saskatchewan, has a profile page at the university the likes of which I haven’t seen in several years, perhaps due to its 2013 origins. His other university profile page can best be described as minimalist.

His Canadian Applied and Industrial Mathematics Society (CAIMS) biography page could be described as less charming (to me) than the 2013 profile, but it is easier to read,

Raymond Spiteri is a Professor in the Department of Computer Science at the University of Saskatchewan. He performed his graduate work as a member of the Institute for Applied Mathematics at the University of British Columbia. He was a post-doctoral fellow at McGill University and held faculty positions at Acadia University and Dalhousie University before joining USask in 2004. He serves on the Executive Committee of the WestGrid High-Performance Computing Consortium with Compute/Calcul Canada. He was a MITACS Project Leader from 2004-2012 and served in the role of Mitacs Regional Scientific Director for the Prairie Provinces between 2008 and 2011.

Spiteri’s areas of research are numerical analysis, scientific computing, and high-performance computing. His area of specialization is the analysis and implementation of efficient time-stepping methods for differential equations. He actively collaborates with scientists, engineers, and medical experts of all flavours. He also has a long record of industry collaboration with companies such as IBM and Boeing.

Spiteri has been lifetime member of CAIMS/SCMAI since 2000. He helped co-organize the 2004 Annual Meeting at Dalhousie and served on the Cecil Graham Doctoral Dissertation Award Committee from 2005 to 2009, acting as chair from 2007. He has been an active participant in CAIMS, serving several times on the Scientific Committee for the Annual Meeting, as well as frequently attending and organizing mini-symposia. Spiteri believes it is important for applied mathematics to play a major role in the efforts to meet Canada’s most pressing societal challenges, including the sustainability of our healthcare system, our natural resources, and the environment.

A last look at Spiteri’s 2013 profile gave me this (Note: Links have been removed),

Another biographical note: I obtained my B.Sc. degree in Applied Mathematics from the University of Western Ontario [also known as Western University] in 1990. My advisor was Dr. M.A.H. (Paddy) Nerenberg, after whom the Nerenberg Lecture Series is named. Here is an excerpt from the description, put here in his honour, as a model for the rest of us:

The Nerenberg Lecture Series is first and foremost about people and ideas. Knowledge is the true treasure of humanity, accrued and passed down through the generations. Some of it, particularly science and its language, mathematics, is closed in practice to many because of technical barriers that can only be overcome at a high price. These technical barriers form part of the remarkable fractures that have formed in our legacy of knowledge. We are so used to those fractures that they have become almost invisible to us, but they are a source of profound confusion about what is known.

The Nerenberg Lecture is named after the late Morton (Paddy) Nerenberg, a much-loved professor and researcher born on 17 March– hence his nickname. He was a Professor at Western for more than a quarter century, and a founding member of the Department of Applied Mathematics there. A successful researcher and accomplished teacher, he believed in the unity of knowledge, that scientific and mathematical ideas belong to everyone, and that they are of human importance. He regretted that they had become inaccessible to so many, and anticipated serious consequences from it. [emphases mine] The series honors his appreciation for the democracy of ideas. He died in 1993 at the age of 57.

So, we have the expert panel.

Thoughts about the panel and the report

As I’ve noted previously here and elsewhere, assembling any panel, whether for a single event or for a longer-term project such as producing a report, is no easy task. Looking at this panel, there’s some arts representation, smaller urban centres are also represented, and some of the members have experience in more than one region of Canada. I was also much encouraged by Spiteri’s acknowledgement of his advisor Morton (Paddy) Nerenberg’s passionate commitment to the idea that “scientific and mathematical ideas belong to everyone.”

Kudos to the Council of Canadian Academies (CCA) organizers.

That said, this looks like an exceptionally Eurocentric panel. Unusually, there’s no representation from the US, unless you count Chun, who has spent the majority of her career in the US with only a little over two years at Simon Fraser University on Canada’s West Coast.

There’s a weakness to a strategy (none of the ten or so CCA reports I’ve reviewed here deviates from this pattern) that seems to favour international participants from Europe and/or the US (and, sometimes, Australia/New Zealand). This leaves out giant chunks of the international community and brings us dangerously close to an echo chamber.

The same problem exists regionally and with various Canadian communities, which are acknowledged more in spirit than in actuality, e.g., the North, rural, Indigenous, arts, etc.

Getting back to the ‘big city’ emphasis noted earlier: two people are from Ottawa and three from Montreal, so half of the expert panel lives within a two-hour train ride of each other. (For those who don’t know, that’s close by Canadian standards. For comparison, a train ride from Vancouver to Seattle [US] is about four hours, a short trip when compared to a 24-hour train trip to the closest large Canadian cities.)

I appreciate that it’s not a simple problem, but my concern is that it’s never acknowledged by the CCA. Perhaps they could include a section in the report acknowledging the issues and how the expert panel attempted to address them; in other words, transparency. Coincidentally, transparency and trust, which are related, have both been identified as big issues with artificial intelligence.

As for solutions: these reports are sent to external reviewers and, before a report is finalized, outside experts are sometimes brought in as the panel readies itself. Those would be two opportunities afforded by their current processes.

Anyway, good luck with the report and I look forward to seeing it.

Governments need to tell us when and how they’re using AI (artificial intelligence) algorithms to make decisions

I have two items and an exploration of the Canadian scene, all of which feature governments, artificial intelligence, and responsibility.

Special issue of Information Polity edited by Dutch academics

A December 14, 2020 IOS Press press release (also on EurekAlert) announces a special issue of Information Polity focused on algorithmic transparency in government,

Amsterdam, NL – The use of algorithms in government is transforming the way bureaucrats work and make decisions in different areas, such as healthcare or criminal justice. Experts address the transparency challenges of using algorithms in decision-making procedures at the macro-, meso-, and micro-levels in this special issue of Information Polity.

Machine-learning algorithms hold huge potential to make government services fairer and more effective and have the potential of “freeing” decision-making from human subjectivity, according to recent research. Algorithms are used in many public service contexts. For example, within the legal system it has been demonstrated that algorithms can predict recidivism better than criminal court judges. At the same time, critics highlight several dangers of algorithmic decision-making, such as racial bias and lack of transparency.

Some scholars have argued that the introduction of algorithms in decision-making procedures may cause profound shifts in the way bureaucrats make decisions and that algorithms may affect broader organizational routines and structures. This special issue on algorithm transparency presents six contributions to sharpen our conceptual and empirical understanding of the use of algorithms in government.

“There has been a surge in criticism towards the ‘black box’ of algorithmic decision-making in government,” explain Guest Editors Sarah Giest (Leiden University) and Stephan Grimmelikhuijsen (Utrecht University). “In this special issue collection, we show that it is not enough to unpack the technical details of algorithms, but also look at institutional, organizational, and individual context within which these algorithms operate to truly understand how we can achieve transparent and responsible algorithms in government. For example, regulations may enable transparency mechanisms, yet organizations create new policies on how algorithms should be used, and individual public servants create new professional repertoires. All these levels interact and affect algorithmic transparency in public organizations.”

The transparency challenges for the use of algorithms transcend different levels of government – from European level to individual public bureaucrats. These challenges can also take different forms; transparency can be enabled or limited by technical tools as well as regulatory guidelines or organizational policies. Articles in this issue address transparency challenges of algorithm use at the macro-, meso-, and micro-level. The macro level describes phenomena from an institutional perspective – which national systems, regulations and cultures play a role in algorithmic decision-making. The meso-level primarily pays attention to the organizational and team level, while the micro-level focuses on individual attributes, such as beliefs, motivation, interactions, and behaviors.

“Calls to ‘keep humans in the loop’ may be moot points if we fail to understand how algorithms impact human decision-making and how algorithmic design impacts the practical possibilities for transparency and human discretion,” notes Rik Peeters, research professor of Public Administration at the Centre for Research and Teaching in Economics (CIDE) in Mexico City. In a review of recent academic literature on the micro-level dynamics of algorithmic systems, he discusses three design variables that determine the preconditions for human transparency and discretion and identifies four main sources of variation in “human-algorithm interaction.”

The article draws two major conclusions: First, human agents are rarely fully “out of the loop,” and levels of oversight and override designed into algorithms should be understood as a continuum. The second pertains to bounded rationality, satisficing behavior, automation bias, and frontline coping mechanisms that play a crucial role in the way humans use algorithms in decision-making processes.

For future research Dr. Peeters suggests taking a closer look at the behavioral mechanisms in combination with identifying relevant skills of bureaucrats in dealing with algorithms. “Without a basic understanding of the algorithms that screen- and street-level bureaucrats have to work with, it is difficult to imagine how they can properly use their discretion and critically assess algorithmic procedures and outcomes. Professionals should have sufficient training to supervise the algorithms with which they are working.”

At the macro-level, algorithms can be an important tool for enabling institutional transparency, writes Alex Ingrams, PhD, Governance and Global Affairs, Institute of Public Administration, Leiden University, Leiden, The Netherlands. This study evaluates a machine-learning approach to open public comments for policymaking to increase institutional transparency of public commenting in a law-making process in the United States. The article applies an unsupervised machine learning analysis of thousands of public comments submitted to the United States Transport Security Administration on a 2013 proposed regulation for the use of new full body imaging scanners in airports. The algorithm highlights salient topic clusters in the public comments that could help policymakers understand open public comments processes. “Algorithms should not only be subject to transparency but can also be used as tool for transparency in government decision-making,” comments Dr. Ingrams.
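The press release doesn’t say which algorithm Dr. Ingrams used to cluster the public comments (topic models such as LDA are typical for this kind of work). Purely as a toy illustration of the idea, and not his method, here’s a crude Python sketch that groups comments by their dominant keyword; the comments, stopword list, and grouping rule are all invented for the example:

```python
from collections import Counter, defaultdict

# Hypothetical stopword list for the toy example.
STOPWORDS = {"the", "a", "an", "is", "of", "to", "in", "and", "for", "are"}

def dominant_term(comment):
    """Return the most frequent non-stopword token in a comment."""
    tokens = [t.lower().strip(".,") for t in comment.split()]
    counts = Counter(t for t in tokens if t and t not in STOPWORDS)
    return counts.most_common(1)[0][0] if counts else None

def cluster_comments(comments):
    """Group comments by dominant term -- a crude stand-in for the
    unsupervised topic clustering described in the article."""
    clusters = defaultdict(list)
    for comment in comments:
        term = dominant_term(comment)
        if term:
            clusters[term].append(comment)
    return dict(clusters)

# Invented comments loosely echoing the body-scanner consultation.
comments = [
    "Scanners are an invasion of privacy, privacy matters",
    "Privacy concerns outweigh security benefits of privacy loss",
    "Scanners improve airport security and security screening",
]
clusters = cluster_comments(comments)
# clusters now maps a salient term ("privacy", "security") to its comments
```

A real analysis would use a proper topic model over thousands of comments, but even this toy version shows how an algorithm can surface the salient themes a policymaker would otherwise have to read for manually.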

“Regulatory certainty in combination with organizational and managerial capacity will drive the way the technology is developed and used and what transparency mechanisms are in place for each step,” note the Guest Editors. “On its own these are larger issues to tackle in terms of developing and passing laws or providing training and guidance for public managers and bureaucrats. The fact that they are linked further complicates this process. Highlighting these linkages is a first step towards seeing the bigger picture of why transparency mechanisms are put in place in some scenarios and not in others and opens the door to comparative analyses for future research and new insights for policymakers. To advocate the responsible and transparent use of algorithms, future research should look into the interplay between micro-, meso-, and macro-level dynamics.”

“We are proud to present this special issue, the 100th issue of Information Polity. Its focus on the governance of AI demonstrates our continued desire to tackle contemporary issues in eGovernment and the importance of showcasing excellent research and the insights offered by information polity perspectives,” add Professor Albert Meijer (Utrecht University) and Professor William Webster (University of Stirling), Editors-in-Chief.

This image illustrates the interplay between dynamics at the various levels,

Caption: Studying algorithms and algorithmic transparency from multiple levels of analyses. Credit: Information Polity.

Here’s a link to, and a citation for, the special issue,

Algorithmic Transparency in Government: Towards a Multi-Level Perspective
Guest Editors: Sarah Giest, PhD, and Stephan Grimmelikhuijsen, PhD
Information Polity, Volume 25, Issue 4 (December 2020), published by IOS Press

The issue is open access for three months, Dec. 14, 2020 – March 14, 2021.

Two articles from the special issue were featured in the press release,

“The agency of algorithms: Understanding human-algorithm interaction in administrative decision-making,” by Rik Peeters, PhD (https://doi.org/10.3233/IP-200253)

“A machine learning approach to open public comments for policymaking,” by Alex Ingrams, PhD (https://doi.org/10.3233/IP-200256)

An AI governance publication from the US’s Wilson Center

Within one week of the release of a special issue of Information Polity on AI and governments, a Wilson Center (Woodrow Wilson International Center for Scholars) December 21, 2020 news release (received via email) announces a new publication,

Governing AI: Understanding the Limits, Possibilities, and Risks of AI in an Era of Intelligent Tools and Systems by John Zysman & Mark Nitzberg

Abstract

In debates about artificial intelligence (AI), imaginations often run wild. Policy-makers, opinion leaders, and the public tend to believe that AI is already an immensely powerful universal technology, limitless in its possibilities. However, while machine learning (ML), the principal computer science tool underlying today’s AI breakthroughs, is indeed powerful, ML is fundamentally a form of context-dependent statistical inference and as such has its limits. Specifically, because ML relies on correlations between inputs and outputs or emergent clustering in training data, today’s AI systems can only be applied in well-specified problem domains, still lacking the context sensitivity of a typical toddler or house-pet. Consequently, instead of constructing policies to govern artificial general intelligence (AGI), decision-makers should focus on the distinctive and powerful problems posed by narrow AI, including misconceived benefits and the distribution of benefits, autonomous weapons, and bias in algorithms. AI governance, at least for now, is about managing those who create and deploy AI systems, and supporting the safe and beneficial application of AI to narrow, well-defined problem domains. Specific implications of our discussion are as follows:

  • AI applications are part of a suite of intelligent tools and systems and must ultimately be regulated as a set. Digital platforms, for example, generate the pools of big data on which AI tools operate and hence, the regulation of digital platforms and big data is part of the challenge of governing AI. Many of the platform offerings are, in fact, deployments of AI tools. Hence, focusing on AI alone distorts the governance problem.
  • Simply declaring objectives—be they assuring digital privacy and transparency, or avoiding bias—is not sufficient. We must decide what the goals actually will be in operational terms.
  • The issues and choices will differ by sector. For example, the consequences of bias and error will differ from a medical domain or a criminal justice domain to one of retail sales.
  • The application of AI tools in public policy decision making, in transportation design or waste disposal or policing among a whole variety of domains, requires great care. There is a substantial risk of focusing on efficiency when the public debate about what the goals should be in the first place is in fact required. Indeed, public values evolve as part of social and political conflict.
  • The economic implications of AI applications are easily exaggerated. Should public investment concentrate on advancing basic research or on diffusing the tools, user interfaces, and training needed to implement them?
  • As difficult as it will be to decide on goals and a strategy to implement the goals of one community, let alone regional or international communities, any agreement that goes beyond simple objective statements is very unlikely.

Unfortunately, I haven’t been able to successfully download the working paper/report from the Wilson Center’s Governing AI: Understanding the Limits, Possibilities, and Risks of AI in an Era of Intelligent Tools and Systems webpage.

However, I have found a draft version of the report (Working Paper) published August 26, 2020 on the Social Science Research Network. This paper originated at the University of California at Berkeley as part of a series from the Berkeley Roundtable on the International Economy (BRIE). ‘Governing AI: Understanding the Limits, Possibility, and Risks of AI in an Era of Intelligent Tools and Systems’ is also known as the BRIE Working Paper 2020-5.

Canadian government and AI

The special issue on AI and governance and the paper published by the Wilson Center stimulated my interest in the Canadian government’s approach to governance, responsibility, transparency, and AI.

There is information out there, but it’s scattered across various government initiatives and ministries. Above all, it is not easy to find, which is hardly open communication. Whether that’s by design or due to the blindness and/or ineptitude to be found in all organizations, I leave to wiser judges. (I’ve worked in small companies and they have the same problem. In colloquial terms, ‘the right hand doesn’t know what the left hand is doing’.)

Responsible use? Maybe not after 2019

First there’s a government of Canada webpage, Responsible use of artificial intelligence (AI). Other than a note at the bottom of the page “Date modified: 2020-07-28,” all of the information dates from 2016 up to March 2019 (which you’ll find on ‘Our Timeline’). Is nothing new happening?

For anyone interested in responsible use, there are two sections, “Our guiding principles” and “Directive on Automated Decision-Making,” that answer some questions. I found the ‘Directive’ to be more informative with its definitions, objectives, and, even, consequences. Sadly, you need to keep clicking to find the consequences and you’ll end up on The Framework for the Management of Compliance. Interestingly, deputy heads are assumed to be in charge of managing non-compliance. I wonder how employees deal with a non-compliant deputy head?

What about the government’s digital service?

You might think Canadian Digital Service (CDS) might also have some information about responsible use. CDS was launched in 2017, according to Luke Simon’s July 19, 2017 article on Medium,

In case you missed it, there was some exciting digital government news in Canada Tuesday. The Canadian Digital Service (CDS) launched, meaning Canada has joined other nations, including the US and the UK, that have a federal department dedicated to digital.

At the time, Simon was Director of Outreach at Code for Canada.

Presumably, CDS, from an organizational perspective, is somehow attached to the Minister of Digital Government (it’s a position with virtually no governmental infrastructure as opposed to the Minister of Innovation, Science and Economic Development who is responsible for many departments and agencies). The current minister is Joyce Murray whose government profile offers almost no information about her work on digital services. Perhaps there’s a more informative profile of the Minister of Digital Government somewhere on a government website.

Meanwhile, the folks at CDS are friendly but they don’t offer much substantive information. From the CDS homepage,

Our aim is to make services easier for government to deliver. We collaborate with people who work in government to address service delivery problems. We test with people who need government services to find design solutions that are easy to use.

Learn more

After clicking on Learn more, I found this,

At the Canadian Digital Service (CDS), we partner up with federal departments to design, test and build simple, easy to use services. Our goal is to improve the experience – for people who deliver government services and people who use those services.

How it works

We work with our partners in the open, regularly sharing progress via public platforms. This creates a culture of learning and fosters best practices. It means non-partner departments can apply our work and use our resources to develop their own services.

Together, we form a team that follows the ‘Agile software development methodology’. This means we begin with an intensive ‘Discovery’ research phase to explore user needs and possible solutions to meeting those needs. After that, we move into a prototyping ‘Alpha’ phase to find and test ways to meet user needs. Next comes the ‘Beta’ phase, where we release the solution to the public and intensively test it. Lastly, there is a ‘Live’ phase, where the service is fully released and continues to be monitored and improved upon.

Between the Beta and Live phases, our team members step back from the service, and the partner team in the department continues the maintenance and development. We can help partners recruit their service team from both internal and external sources.

Before each phase begins, CDS and the partner sign a partnership agreement which outlines the goal and outcomes for the coming phase, how we’ll get there, and a commitment to get them done.

As you can see, there’s not a lot of detail and they don’t seem to have included anything about artificial intelligence as part of their operation. (I’ll come back to the government’s implementation of artificial intelligence and information technology later.)

Does the Treasury Board of Canada have charge of responsible AI use?

I think so but there are government departments/ministries that also have some responsibilities for AI and I haven’t seen any links back to the Treasury Board documentation.

For anyone not familiar with the Treasury Board, or even if you are, a December 14, 2009 article (Treasury Board of Canada: History, Organization and Issues) on Maple Leaf Web is quite informative,

The Treasury Board of Canada represent a key entity within the federal government. As an important cabinet committee and central agency, they play an important role in financial and personnel administration. Even though the Treasury Board plays a significant role in government decision making, the general public tends to know little about its operation and activities. [emphasis mine] The following article provides an introduction to the Treasury Board, with a focus on its history, responsibilities, organization, and key issues.

It seems the Minister of Digital Government, Joyce Murray, is part of the Treasury Board, and the Treasury Board is the source for the Digital Operations Strategic Plan: 2018-2022.

I haven’t read the entire document but the table of contents doesn’t include a heading for artificial intelligence and there wasn’t any mention of it in the opening comments.

But isn’t there a Chief Information Officer for Canada?

Herein lies a tale (I doubt I’ll ever get the real story), but the answer is a qualified ‘no’. The then Chief Information Officer for Canada, Alex Benay (there is an AI aspect), stepped down in September 2019 to join a startup company, according to an August 6, 2019 article by Mia Hunt for Global Government Forum,

Alex Benay has announced he will step down as Canada’s chief information officer next month to “take on new challenge” at tech start-up MindBridge.

“It is with mixed emotions that I am announcing my departure from the Government of Canada,” he said on Wednesday in a statement posted on social media, describing his time as CIO as “one heck of a ride”.

He said he is proud of the work the public service has accomplished in moving the national digital agenda forward. Among these achievements, he listed the adoption of public Cloud across government; delivering the “world’s first” ethical AI management framework; [emphasis mine] renewing decades-old policies to bring them into the digital age; and “solidifying Canada’s position as a global leader in open government”.

He also led the introduction of new digital standards in the workplace, and provided “a clear path for moving off” Canada’s failed Phoenix pay system. [emphasis mine]

I cannot find a current Chief Information Officer of Canada despite searches, but I did find this List of chief information officers (CIO) by institution. Where there was one, there are now many.

Since September 2019, Mr. Benay has moved again, according to a November 7, 2019 article by Meagan Simpson on the BetaKit website (Note: Links have been removed),

Alex Benay, the former CIO [Chief Information Officer] of Canada, has left his role at Ottawa-based Mindbridge after a short few months stint.

The news came Thursday, when KPMG announced that Benay was joining the accounting and professional services organization as partner of digital and government solutions. Benay originally announced that he was joining Mindbridge in August, after spending almost two and a half years as the CIO for the Government of Canada.

Benay joined the AI startup as its chief client officer and, at the time, was set to officially take on the role on September 3rd. According to Benay’s LinkedIn, he joined Mindbridge in August, but if the September 3rd start date is correct, Benay would have only been at Mindbridge for around three months. The former CIO of Canada was meant to be responsible for Mindbridge’s global growth as the company looked to prepare for an IPO in 2021.

Benay told The Globe and Mail that his decision to leave Mindbridge was not a question of fit, or that he considered the move a mistake. He attributed his decision to leave to conversations with Mindbridge customer KPMG, over a period of three weeks. Benay told The Globe that he was drawn to the KPMG opportunity to lead its digital and government solutions practice, something that was more familiar to him given his previous role.

Mindbridge has not completely lost what was touted as a star hire, though, as Benay will be staying on as an advisor to the startup. “This isn’t a cutting the cord and moving on to something else completely,” Benay told The Globe. “It’s a win-win for everybody.”

Via Mr. Benay, I’ve re-introduced artificial intelligence, introduced the Phoenix Pay system, and linked them to the government’s implementation of information technology in a specific case, along with some speculation about the implementation of artificial intelligence algorithms in government.

Phoenix Pay System Debacle (things are looking up), a harbinger for responsible use of artificial intelligence?

I’m happy to hear that the situation, in which government employees had no certainty about their paycheques, is improving. After the ‘new’ Phoenix Pay System was implemented in early 2016, government employees found they might get the correct amount on their paycheques, significantly less than they were entitled to, or huge unearned increases.

The instability alone would be distressing but adding to it with the inability to get the problem fixed must have been devastating. Almost five years later, the problems are being resolved and people are getting paid appropriately, more often.

The estimated cost for fixing the problems was, as I recall, over $1B; I think that was a little optimistic. James Bagnall’s July 28, 2020 article for the Ottawa Citizen provides more detail, although not about the current cost, and is the source of my measured optimism,

Something odd has happened to the Phoenix Pay file of late. After four years of spitting out errors at a furious rate, the federal government’s new pay system has gone quiet.

And no, it’s not because of the even larger drama written by the coronavirus. In fact, there’s been very real progress at Public Services and Procurement Canada [PSPC; emphasis mine], the department in charge of pay operations.

Since January 2018, the peak of the madness, the backlog of all pay transactions requiring action has dropped by about half to 230,000 as of late June. Many of these involve basic queries for information about promotions, overtime and rules. The part of the backlog involving money — too little or too much pay, incorrect deductions, pay not received — has shrunk by two-thirds to 125,000.

These are still very large numbers but the underlying story here is one of long-delayed hope. The government is processing the pay of more than 330,000 employees every two weeks while simultaneously fixing large batches of past mistakes.

While officials with two of the largest government unions — Public Service Alliance of Canada [PSAC] and the Professional Institute of the Public Service of Canada [PPSC] — disagree the pay system has worked out its kinks, they acknowledge it’s considerably better than it was. New pay transactions are being processed “with increased timeliness and accuracy,” the PSAC official noted.

Neither union is happy with the progress being made on historical mistakes. PIPSC president Debi Daviau told this newspaper that many of her nearly 60,000 members have been waiting for years to receive salary adjustments stemming from earlier promotions or transfers, to name two of the more prominent sources of pay errors.

Even so, the sharp improvement in Phoenix Pay’s performance will soon force the government to confront an interesting choice: Should it continue with plans to replace the system?

Treasury Board, the government’s employer, two years ago launched the process to do just that. Last March, SAP Canada — whose technology underpins the pay system still in use at Canada Revenue Agency — won a competition to run a pilot project. Government insiders believe SAP Canada is on track to build the full system starting sometime in 2023.

When Public Services set out the business case in 2009 for building Phoenix Pay, it noted the pay system would have to accommodate 150 collective agreements that contained thousands of business rules and applied to dozens of federal departments and agencies. The technical challenge has since intensified.

Under the original plan, Phoenix Pay was to save $70 million annually by eliminating 1,200 compensation advisors across government and centralizing a key part of the operation at the pay centre in Miramichi, N.B., where 550 would manage a more automated system.

Instead, the Phoenix Pay system currently employs about 2,300.  This includes 1,600 at Miramichi and five regional pay offices, along with 350 each at a client contact centre (which deals with relatively minor pay issues) and client service bureau (which handles the more complex, longstanding pay errors). This has naturally driven up the average cost of managing each pay account — 55 per cent higher than the government’s former pay system according to last fall’s estimate by the Parliamentary Budget Officer.

… As the backlog shrinks, the need for regional pay offices and emergency staffing will diminish. Public Services is also working with a number of high-tech firms to develop ways of accurately automating employee pay using artificial intelligence [emphasis mine].

Given the Phoenix Pay System debacle, it might be nice to see a little information about how the government is planning to integrate more sophisticated algorithms (artificial intelligence) in their operations.

I found this on a Treasury Board webpage, all 1 minute and 29 seconds of it,

The blonde model or actress mentions that companies applying to Public Services and Procurement Canada for placement on the list must use AI responsibly. Her script does not include a definition or guidelines, which, as previously noted, are on the Treasury Board website.

As for Public Services and Procurement Canada, they have an Artificial intelligence source list,

Public Services and Procurement Canada (PSPC) is putting into operation the Artificial intelligence source list to facilitate the procurement of Canada’s requirements for Artificial intelligence (AI).

After research and consultation with industry, academia, and civil society, Canada identified 3 AI categories and business outcomes to inform this method of supply:

Insights and predictive modelling

Machine interactions

Cognitive automation

PSPC is focused only on procuring AI. If there are guidelines on their website for its use, I did not find them.

I found one more government agency that might have some information about artificial intelligence and guidelines for its use, Shared Services Canada,

Shared Services Canada (SSC) delivers digital services to Government of Canada organizations. We provide modern, secure and reliable IT services so federal organizations can deliver digital programs and services that meet Canadians needs.

Since the Minister of Digital Government, Joyce Murray, is listed on the homepage, I was hopeful that I could find out more about AI and governance and whether or not the Canadian Digital Service was associated with this government ministry/agency. I was frustrated on both counts.

To sum up, there is no information that I could find dated after March 2019 about Canada, its government, and plans for AI, especially responsible management/governance of AI, on a Canadian government website, although I have found guidelines, expectations, and consequences for non-compliance. (Should anyone know which government agency has up-to-date information on its responsible use of AI, please let me know in the Comments.)

Canadian Institute for Advanced Research (CIFAR)

The first mention of the Pan-Canadian Artificial Intelligence Strategy is in my analysis of the Canadian federal budget in a March 24, 2017 posting. Briefly, CIFAR received a big chunk of that money. Here’s more about the strategy from the CIFAR Pan-Canadian AI Strategy homepage,

In 2017, the Government of Canada appointed CIFAR to develop and lead a $125 million Pan-Canadian Artificial Intelligence Strategy, the world’s first national AI strategy.

CIFAR works in close collaboration with Canada’s three national AI Institutes — Amii in Edmonton, Mila in Montreal, and the Vector Institute in Toronto, as well as universities, hospitals and organizations across the country.

The objectives of the strategy are to:

Attract and retain world-class AI researchers by increasing the number of outstanding AI researchers and skilled graduates in Canada.

Foster a collaborative AI ecosystem by establishing interconnected nodes of scientific excellence in Canada’s three major centres for AI: Edmonton, Montreal, and Toronto.

Advance national AI initiatives by supporting a national research community on AI through training programs, workshops, and other collaborative opportunities.

Understand the societal implications of AI by developing global thought leadership on the economic, ethical, policy, and legal implications [emphasis mine] of advances in AI.

Responsible AI at CIFAR

You can find Responsible AI in a webspace devoted to what they call AI & Society. Here's more from the homepage,

CIFAR is leading global conversations about AI’s impact on society.

The AI & Society program, one of the objectives of the CIFAR Pan-Canadian AI Strategy, develops global thought leadership on the economic, ethical, political, and legal implications of advances in AI. These dialogues deliver new ways of thinking about issues, and drive positive change in the development and deployment of responsible AI.

Solution Networks

AI Futures Policy Labs

AI & Society Workshops

Building an AI World

Under the category of building an AI World I found this (from CIFAR’s AI & Society homepage),

BUILDING AN AI WORLD

Explore the landscape of global AI strategies.

Canada was the first country in the world to announce a federally-funded national AI strategy, prompting many other nations to follow suit. CIFAR published two reports detailing the global landscape of AI strategies.

I skimmed through the second report and it seems more like a comparative study of various countries' AI strategies than an overview of responsible use of AI.

Final comments about Responsible AI in Canada and the new reports

I’m glad to see there’s interest in Responsible AI but based on my adventures searching the Canadian government websites and the Pan-Canadian AI Strategy webspace, I’m left feeling hungry for more.

I didn’t find any details about how AI is being integrated into government departments and for what uses. I’d like to know, and I’d like to have some say about how it’s used and how the inevitable mistakes will be dealt with.

The great unwashed

What I’ve found is high-minded but, as far as I can tell, there’s absolutely no interest in talking to the ‘great unwashed’. Those of us who are not experts are being left out of these earlier-stage conversations.

I’m sure we’ll be consulted at some point but it will be long past the time when our opinions and insights could have had impact and helped us avoid the problems that experts tend not to see. What we’ll be left with is protest and anger on our part and, finally, grudging admissions and corrections of errors on the government’s part.

Let’s take this as an example. The Phoenix Pay System is implemented in its first phase on Feb. 24, 2016. As I recall, problems develop almost immediately. The second phase of implementation starts April 21, 2016. In May 2016 the government hires consultants to fix the problems. On November 29, 2016 the government minister, Judy Foote, admits a mistake has been made. In February 2017 the government hires consultants to establish what lessons they might learn. By February 15, 2018 the pay problems backlog amounts to 633,000. Source: James Bagnall’s Feb. 23, 2018 ‘timeline’ for the Ottawa Citizen

Do take a look at the timeline, there’s more to it than what I’ve written here and I’m sure there’s more to the Phoenix Pay System debacle than a failure to listen to warnings from those who would be directly affected. It’s fascinating though how often a failure to listen presages far deeper problems with a project.

The Canadian government, both a conservative and a liberal government, contributed to the Phoenix Debacle but it seems the gravest concern is with senior government bureaucrats. You might think things have changed since this recounting of the affair in a June 14, 2018 article by Michelle Zilio for the Globe and Mail,

The three public servants blamed by the Auditor-General for the Phoenix pay system problems were not fired for mismanagement of the massive technology project that botched the pay of tens of thousands of public servants for more than two years.

Marie Lemay, deputy minister for Public Services and Procurement Canada (PSPC), said two of the three Phoenix executives were shuffled out of their senior posts in pay administration and did not receive performance bonuses for their handling of the system. Those two employees still work for the department, she said. Ms. Lemay, who refused to identify the individuals, said the third Phoenix executive retired.

In a scathing report last month, Auditor-General Michael Ferguson blamed three “executives” – senior public servants at PSPC, which is responsible for Phoenix − for the pay system’s “incomprehensible failure.” [emphasis mine] He said the executives did not tell the then-deputy minister about the known problems with Phoenix, leading the department to launch the pay system despite clear warnings it was not ready.

Speaking to a parliamentary committee on Thursday, Ms. Lemay said the individuals did not act with “ill intent,” noting that the development and implementation of the Phoenix project were flawed. She encouraged critics to look at the “bigger picture” to learn from all of Phoenix’s failures.

Mr. Ferguson, whose office spoke with the three Phoenix executives as a part of its reporting, said the officials prioritized some aspects of the pay-system rollout, such as schedule and budget, over functionality. He said they also cancelled a pilot implementation project with one department that would have helped it detect problems indicating the system was not ready.

Mr. Ferguson’s report warned the Phoenix problems are indicative of “pervasive cultural problems” [emphasis mine] in the civil service, which he said is fearful of making mistakes, taking risks and conveying “hard truths.”

Speaking to the same parliamentary committee on Tuesday, Privy Council Clerk [emphasis mine] Michael Wernick challenged Mr. Ferguson’s assertions, saying his chapter on the federal government’s cultural issues is an “opinion piece” containing “sweeping generalizations.”

The Privy Council Clerk is the top level bureaucrat (and there is only one such clerk) in the civil/public service and I think his quotes are quite telling of “pervasive cultural problems.” There’s a new Privy Council Clerk but from what I can tell he was well trained by his predecessor.

Do* we really need senior government bureaucrats?

I now have an example of bureaucratic interference, specifically with the Global Public Health Intelligence Network (GPHIN), where it would seem that not much has changed, from a December 26, 2020 article by Grant Robertson for the Globe & Mail,

When Canada unplugged support for its pandemic alert system [GPHIN] last year, it was a symptom of bigger problems inside the Public Health Agency. Experienced scientists were pushed aside, expertise was eroded, and internal warnings went unheeded, which hindered the department’s response to COVID-19

As a global pandemic began to take root in February, China held a series of backchannel conversations with Canada, lobbying the federal government to keep its borders open.

With the virus already taking a deadly toll in Asia, Heng Xiaojun, the Minister Counsellor for the Chinese embassy, requested a call with senior Transport Canada officials. Over the course of the conversation, the Chinese representatives communicated Beijing’s desire that flights between the two countries not be stopped because it was unnecessary.

“The Chinese position on the continuation of flights was reiterated,” say official notes taken from the call. “Mr. Heng conveyed that China is taking comprehensive measures to combat the coronavirus.”

Canadian officials seemed to agree, since no steps were taken to restrict or prohibit travel. To the federal government, China appeared to have the situation under control and the risk to Canada was low. Before ending the call, Mr. Heng thanked Ottawa for its “science and fact-based approach.”

It was a critical moment in the looming pandemic, but the Canadian government lacked the full picture, instead relying heavily on what Beijing was choosing to disclose to the World Health Organization (WHO). Ottawa’s ability to independently know what was going on in China – on the ground and inside hospitals – had been greatly diminished in recent years.

Canada once operated a robust pandemic early warning system and employed a public-health doctor based in China who could report back on emerging problems. But it had largely abandoned those international strategies over the past five years, and was no longer as plugged-in.

By late February [2020], Ottawa seemed to be taking the official reports from China at their word, stating often in its own internal risk assessments that the threat to Canada remained low. But inside the Public Health Agency of Canada (PHAC), rank-and-file doctors and epidemiologists were growing increasingly alarmed at how the department and the government were responding.

“The team was outraged,” one public-health scientist told a colleague in early April, in an internal e-mail obtained by The Globe and Mail, criticizing the lack of urgency shown by Canada’s response during January, February and early March. “We knew this was going to be around for a long time, and it’s serious.”

China had locked down cities and restricted travel within its borders. Staff inside the Public Health Agency believed Beijing wasn’t disclosing the whole truth about the danger of the virus and how easily it was transmitted. “The agency was just too slow to respond,” the scientist said. “A sane person would know China was lying.”

It would later be revealed that China’s infection and mortality rates were played down in official records, along with key details about how the virus was spreading.

But the Public Health Agency, which was created after the 2003 SARS crisis to bolster the country against emerging disease threats, had been stripped of much of its capacity to gather outbreak intelligence and provide advance warning by the time the pandemic hit.

The Global Public Health Intelligence Network, an early warning system known as GPHIN that was once considered a cornerstone of Canada’s preparedness strategy, had been scaled back over the past several years, with resources shifted into projects that didn’t involve outbreak surveillance.

However, a series of documents obtained by The Globe during the past four months, from inside the department and through numerous Access to Information requests, show the problems that weakened Canada’s pandemic readiness run deeper than originally thought. Pleas from the international health community for Canada to take outbreak detection and surveillance much more seriously were ignored by mid-level managers [emphasis mine] inside the department. A new federal pandemic preparedness plan – key to gauging the country’s readiness for an emergency – was never fully tested. And on the global stage, the agency stopped sending experts [emphasis mine] to international meetings on pandemic preparedness, instead choosing senior civil servants with little or no public-health background [emphasis mine] to represent Canada at high-level talks, The Globe found.

The curtailing of GPHIN and allegations that scientists had become marginalized within the Public Health Agency, detailed in a Globe investigation this past July [2020], are now the subject of two federal probes – an examination by the Auditor-General of Canada and an independent federal review, ordered by the Minister of Health.

Those processes will undoubtedly reshape GPHIN and may well lead to an overhaul of how the agency functions in some areas. The first steps will be identifying and fixing what went wrong. With the country now topping 535,000 cases of COVID-19 and more than 14,700 dead, there will be lessons learned from the pandemic.

Prime Minister Justin Trudeau has said he is unsure what role added intelligence [emphasis mine] could have played in the government’s pandemic response, though he regrets not bolstering Canada’s critical supplies of personal protective equipment sooner. But providing the intelligence to make those decisions early is exactly what GPHIN was created to do – and did in previous outbreaks.

Epidemiologists have described in detail to The Globe how vital it is to move quickly and decisively in a pandemic. Acting sooner, even by a few days or weeks in the early going, and throughout, can have an exponential impact on an outbreak, including deaths. Countries such as South Korea, Australia and New Zealand, which have fared much better than Canada, appear to have acted faster in key tactical areas, some using early warning information they gathered. As Canada prepares itself in the wake of COVID-19 for the next major health threat, building back a better system becomes paramount.

If you have time, do take a look at Robertson’s December 26, 2020 article and the July 2020 Globe investigation. As both articles make clear, senior bureaucrats whose chief attribute seems to have been longevity took over, reallocated resources, drove out experts, and crippled the few remaining experts in the system with a series of bureaucratic demands while taking trips to attend meetings (in desirable locations) for which they had no significant or useful input.

The Phoenix and GPHIN debacles bear a resemblance in that senior bureaucrats took over and in a state of blissful ignorance made a series of disastrous decisions bolstered by politicians who seem to neither understand nor care much about the outcomes.

If you think I’m being harsh, watch Canadian Broadcasting Corporation (CBC) reporter Rosemary Barton’s 2020 year-end interview with Prime Minister Trudeau (Note: There are some commercials). Pay special attention to Trudeau’s answer to the first question.

Responsible AI, eh?

Based on the massive mishandling of the Phoenix Pay System implementation, where top bureaucrats did not follow basic and well-established information services procedures, and the mismanagement of the Global Public Health Intelligence Network by top-level bureaucrats, I’m not sure I have a lot of confidence in any Canadian government claims about a responsible approach to using artificial intelligence.

Unfortunately, it doesn’t matter as implementation is most likely already taking place here in Canada.

Enough with the pessimism. I feel it’s necessary to end this on a mildly positive note. Hurray to the government employees who worked through the Phoenix Pay System debacle, the current and former GPHIN experts who continued to sound warnings, and all those people striving to uphold the principles of ‘Peace, Order, and Good Government’, the bedrock principles of the Canadian Parliament.

A lot of mistakes have been made but we also make a lot of good decisions.

*’Doe’ changed to ‘Do’ on May 14, 2021.

Summer (2019) Institute on AI (artificial intelligence) Societal Impacts, Governance, and Ethics. Summer Institute In Alberta, Canada

The deadline for applications is April 7, 2019. As for whether or not you might like to attend, here’s more from a joint March 11, 2019 Alberta Machine Intelligence Institute (Amii)/Canadian Institute for Advanced Research (CIFAR)/University of California at Los Angeles (UCLA) Law School news release (also on globalnewswire.com),

What will Artificial Intelligence (AI) mean for society? That’s the question scholars from a variety of disciplines will explore during the inaugural Summer Institute on AI Societal Impacts, Governance, and Ethics. Summer Institute, co-hosted by the Alberta Machine Intelligence Institute (Amii) and CIFAR, with support from UCLA School of Law, takes place July 22-24, 2019 in Edmonton, Canada.

“Recent advances in AI have brought a surge of attention to the field – both excitement and concern,” says co-organizer and UCLA professor, Edward Parson. “From algorithmic bias to autonomous vehicles, personal privacy to automation replacing jobs. Summer Institute will bring together exceptional people to talk about how humanity can receive the benefits and not get the worst harms from these rapid changes.”

Summer Institute brings together experts, grad students and researchers from multiple backgrounds to explore the societal, governmental, and ethical implications of AI. A combination of lectures, panels, and participatory problem-solving, this comprehensive interdisciplinary event aims to build understanding and action around these high-stakes issues.

“Machine intelligence is opening transformative opportunities across the world,” says John Shillington, CEO of Amii, “and Amii is excited to bring together our own world-leading researchers with experts from areas such as law, philosophy and ethics for this important discussion. Interdisciplinary perspectives will be essential to the ongoing development of machine intelligence and for ensuring these opportunities have the broadest reach possible.”

Over the three-day program, 30 graduate-level students and early-career researchers will engage with leading experts and researchers including event co-organizers: Western University’s Daniel Lizotte, Amii’s Alona Fyshe and UCLA’s Edward Parson. Participants will also have a chance to shape the curriculum throughout this uniquely interactive event.

Summer Institute takes place prior to Deep Learning and Reinforcement Learning Summer School, and includes a combined event on July 24th [2019] for both Summer Institute and Summer School participants.

Visit dlrlsummerschool.ca/the-summer-institute to apply; applications close April 7, 2019.

View our Summer Institute Biographies & Boilerplates for more information on confirmed faculty members and co-hosting organizations. Follow the conversation through social media channels using the hashtag #SI2019.

Media Contact: Spencer Murray, Director of Communications & Public Relations, Amii
t: 587.415.6100 | c: 780.991.7136 | e: spencer.murray@amii.ca

There’s a bit more information on The Summer Institute on AI and Society webpage (on the Deep Learning and Reinforcement Learning Summer School 2019 website) such as this more complete list of speakers,

Confirmed speakers at Summer Institute include:

Alona Fyshe, University of Alberta/Amii (SI co-organizer)
Edward Parson, UCLA (SI co-organizer)
Daniel Lizotte, Western University (SI co-organizer)
Geoffrey Rockwell, University of Alberta
Graham Taylor, University of Guelph/Vector Institute
Rob Lempert, Rand Corporation
Gary Marchant, Arizona State University
Richard Re, UCLA
Evan Selinger, Rochester Institute of Technology
Elana Zeide, UCLA

Two questions: why are all the summer school faculty either Canada- or US-based? What about South American, Asian, Middle Eastern, etc. thinkers?

One last thought, I wonder if this ‘AI & ethics summer institute’ has anything to do with the Pan-Canadian Artificial Intelligence Strategy, which CIFAR administers and where both the University of Alberta and Vector Institute are members.

AI fairytale and April 25, 2018 AI event at Canada Science and Technology Museum*** in Ottawa

These days it’s all about artificial intelligence (AI) or robots and often, it’s both. They’re everywhere and they will take everyone’s jobs, or not, depending on how you view them. Today, I’ve got two artificial intelligence items, the first of which may provoke writers’ anxieties.

Fairytales

The Princess and the Fox is a new fairytale by the Brothers Grimm, or rather, by their artificially intelligent surrogate, according to an April 18, 2018 article on the British Broadcasting Corporation’s online news website,

It was recently reported that the meditation app Calm had published a “new” fairytale by the Brothers Grimm.

However, The Princess and the Fox was written not by the brothers, who died over 150 years ago, but by humans using an artificial intelligence (AI) tool.

It’s the first fairy tale written by an AI, claims Calm, and is the result of a collaboration with Botnik Studios – a community of writers, artists and developers. Calm says the technique could be referred to as “literary cloning”.

Botnik employees used a predictive-text program to generate words and phrases that might be found in the original Grimm fairytales. Human writers then pieced together sentences to form “the rough shape of a story”, according to Jamie Brew, chief executive of Botnik.

The full version is available to paying customers of Calm, but here’s a short extract:

“Once upon a time, there was a golden horse with a golden saddle and a beautiful purple flower in its hair. The horse would carry the flower to the village where the princess danced for joy at the thought of looking so beautiful and good.
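For the technically curious, the workflow the BBC article describes (a predictive-text model suggests Grimm-flavoured words; humans assemble them into sentences) can be illustrated with a toy bigram model. This is my own sketch, not Botnik’s Voicebox, and the one-line corpus is just the excerpt above; Botnik reportedly fed in the Grimms’ collected works.

```python
import random
from collections import defaultdict

# Stand-in corpus: the published excerpt. A real run would use the
# full collected Grimm tales.
corpus = (
    "once upon a time there was a golden horse with a golden saddle "
    "and a beautiful purple flower in its hair"
)

def build_model(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def suggest(model, word, k=3):
    """Suggest up to k candidate next words, most frequent first,
    the way a predictive keyboard offers choices to a human writer."""
    candidates = model.get(word, [])
    ranked = sorted(set(candidates), key=candidates.count, reverse=True)
    return ranked[:k]

model = build_model(corpus)
print(suggest(model, "a"))  # 'golden' ranks first (it appears twice)
```

A human writer then picks among the suggestions at each step, which is why the result is collaboration rather than pure machine authorship.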

Advertising for a meditation app?

Of course, it’s advertising and it’s ‘smart’ advertising (wordplay intended). Here’s a preview/trailer,

Blair Marnell’s April 18, 2018 article for SyFy Wire provides a bit more detail,

“You might call it a form of literary cloning,” said Calm co-founder Michael Acton Smith. Calm commissioned Botnik to use its predictive text program, Voicebox, to create a new Brothers Grimm story. But first, Voicebox was given the entire collected works of the Brothers Grimm to analyze, before it suggested phrases and sentences based upon those stories. Of course, human writers gave the program an assist when it came to laying out the plot. …

“The Brothers Grimm definitely have a reputation for darkness and many of their best-known tales are undoubtedly scary,” Peter Freedman told SYFY WIRE. Freedman is a spokesperson for Calm who was a part of the team behind the creation of this story. “In the process of machine-human collaboration that generated The Princess and The Fox, we did gently steer the story towards something with a more soothing, calm plot and vibe, that would make it work both as a new Grimm fairy tale and simultaneously as a Sleep Story on Calm.” [emphasis mine]

….

If Marnell’s article is to be believed, Peter Freedman doesn’t hold much hope for writers in the long-term future although we don’t need to start ‘battening down the hatches’ yet.

You can find Calm here.

You can find Botnik here and Botnik Studios here.


AI at Ingenium [Canada Science and Technology Museum] on April 25, 2018

Formerly known (I believe) [*Read the comments for the clarification] as the Canada Science and Technology Museum, Ingenium is hosting a ‘sold out but there will be a livestream’ Google event. From Ingenium’s ‘Curiosity on Stage Evening Edition with Google – The AI Revolution‘ event page,

Join Google, Inc. and the Canada Science and Technology Museum for an evening of thought-provoking discussions about artificial intelligence.

[April 25, 2018
7:00 p.m. – 10:00 p.m. {ET}
Fees: Free]

Invited speakers from industry leaders Google, Facebook, Element AI and Deepmind will explore the intersection of artificial intelligence with robotics, arts, social impact and healthcare. The session will end with a panel discussion and question-and-answer period. Following the event, there will be a reception along with light refreshments and networking opportunities.

The event will be simultaneously translated into both official languages as well as available via livestream from the Museum’s YouTube channel.

Seating is limited

THIS EVENT IS NOW SOLD OUT. Please join us for the livestream from the Museum’s YouTube channel. https://www.youtube.com/cstmweb *** April 25, 2018: I received corrective information about the link for the livestream: https://youtu.be/jG84BIno5J4 from someone at Ingenium.***

Speakers

David Usher (Moderator)

David Usher is an artist, best-selling author, entrepreneur and keynote speaker. As a musician he has sold more than 1.4 million albums, won 4 Junos and has had #1 singles singing in English, French and Thai. When David is not making music, he is equally passionate about his other life, as a Geek. He is the founder of Reimagine AI, an artificial intelligence creative studio working at the intersection of art and artificial intelligence. David is also the founder and creative director of the non-profit, the Human Impact Lab at Concordia University [located in Montréal, Québec]. The Lab uses interactive storytelling to revisualize the story of climate change. David is the co-creator, with Dr. Damon Matthews, of the Climate Clock. Climate Clock has been presented all over the world including the United Nations COP 23 Climate Conference and is presently on a three-year tour with the Canada Museum of Science and Innovation’s Climate Change Exhibit.

Joelle Pineau (Facebook)

The AI Revolution:  From Ideas and Models to Building Smart Robots
Joelle Pineau is head of the Facebook AI Research Lab Montreal, and an Associate Professor and William Dawson Scholar at McGill University. Dr. Pineau’s research focuses on developing new models and algorithms for automatic planning and learning in partially-observable domains. She also applies these algorithms to complex problems in robotics, health-care, games and conversational agents. She serves on the editorial board of the Journal of Artificial Intelligence Research and the Journal of Machine Learning Research and is currently President of the International Machine Learning Society. She is a AAAI Fellow, a Senior Fellow of the Canadian Institute for Advanced Research (CIFAR) and in 2016 was named a member of the College of New Scholars, Artists and Scientists by the Royal Society of Canada.

Pablo Samuel Castro (Google)

Building an Intelligent Assistant for Music Creators
Pablo was born and raised in Quito, Ecuador, and moved to Montreal after high school to study at McGill. He stayed in Montreal for the next 10 years, finished his bachelors, worked at a flight simulator company, and then eventually obtained his masters and PhD at McGill, focusing on Reinforcement Learning. After his PhD Pablo did a 10-month postdoc in Paris before moving to Pittsburgh to join Google. He has worked at Google for almost 6 years, and is currently a research Software Engineer in Google Brain in Montreal, focusing on fundamental Reinforcement Learning research, as well as Machine Learning and Music. Aside from his interest in coding/AI/math, Pablo is an active musician (https://www.psctrio.com), loves running (5 marathons so far, including Boston!), and discussing politics and activism.

Philippe Beaudoin (Element AI)

Concrete AI-for-Good initiatives at Element AI
Philippe cofounded Element AI in 2016 and currently leads its applied lab and AI-for-Good initiatives. His team has helped tackle some of the biggest and most interesting business challenges using machine learning. Philippe holds a Ph.D in Computer Science and taught virtual bipeds to walk by themselves during his postdoc at UBC. He spent five years at Google as a Senior Developer and Technical Lead Manager, partly with the Chrome Machine Learning team. Philippe also founded ArcBees, specializing in cloud-based development. Prior to that he worked in the videogame and graphics hardware industries. When he has some free time, Philippe likes to invent new boardgames — the kind of games where he can still beat the AI!

Doina Precup (Deepmind)

Challenges and opportunities for the AI revolution in health care
Doina Precup splits her time between McGill University, where she co-directs the Reasoning and Learning Lab in the School of Computer Science, and DeepMind Montreal, where she leads the newly formed research team since October 2017.  She got her BSc degree in computer science form the Technical University Cluj-Napoca, Romania, and her MSc and PhD degrees from the University of Massachusetts-Amherst, where she was a Fulbright fellow. Her research interests are in the areas of reinforcement learning, deep learning, time series analysis, and diverse applications of machine learning in health care, automated control and other fields. She became a senior member of AAAI in 2015, a Canada Research Chair in Machine Learning in 2016 and a Senior Fellow of CIFAR in 2017.

Interesting, oui? Not a single expert from Ottawa or Toronto. Well, Element AI has an office in Toronto. Still, I wonder why this singular focus on AI in Montréal. After all, one of the current darlings of AI, machine learning, was developed at the University of Toronto, which houses the Canadian Institute for Advanced Research (CIFAR), the institution in charge of the Pan-Canadian Artificial Intelligence Strategy, and the Vector Institute (more about that in my March 31, 2017 posting).

Enough with my musing: For those of us on the West Coast, there’s an opportunity to attend via livestream from 4 pm to 7 pm on April 25, 2018 on xxxxxxxxx. *** April 25, 2018: I received corrective information about the link for the livestream: https://youtu.be/jG84BIno5J4 and clarification as to the relationship between Ingenium and the Canada Science and Technology Museum from someone at Ingenium.***

For more about Element AI, go here; for more about DeepMind, go here for information about the parent company in the UK (the most I dug up about their Montréal office was this job posting); and, finally, Reimagine.AI is here.

The Hedy Lamarr of international research: Canada’s Third assessment of The State of Science and Technology and Industrial Research and Development in Canada (2 of 2)

Taking up from where I left off with my comments on Competing in a Global Innovation Economy: The Current State of R and D in Canada or, as I prefer to call it, the Third assessment of Canada’s S&T (science and technology) and R&D (research and development). (Part 1 for anyone who missed it).

Is it possible to get past Hedy?

Interestingly (to me anyway), one of our R&D strengths, the visual and performing arts, features sectors where a preponderance of people are dedicated to creating culture in Canada and don’t spend a lot of time trying to make money so they can retire before the age of 40, as so many of our start-up founders do. (Retiring before the age of 40 just reminded me of Hollywood actresses [Hedy], who found, and still do find, that work was/is hard to come by after that age. You may be able, but I’m not sure I can get past Hedy.) Perhaps our business people (start-up founders) could take a leaf out of the visual and performing arts handbook? Or, not. There is another question.

Does it matter if we continue to be a ‘branch plant’ economy? Somebody once posed that question to me when I was grumbling that our start-ups never led to larger businesses and acted more like incubators (which could describe our R&D as well). He noted that Canadians have a pretty good standard of living and we’ve been running things this way for over a century and it seems to work for us. Is it that bad? I didn’t have an answer for him then and I don’t have one now, but I think it’s a useful question to ask, and no one on this (2018) expert panel or the previous expert panel (2013) seems to have asked it.

I appreciate that the panel was constrained by the questions given by the government but, given how they snuck in a few items that technically speaking were not part of their remit, I’m thinking they might have gone just a bit further. The problem with answering the questions as asked is that if you’ve got the wrong questions, your answers will be garbage (GIGO: garbage in, garbage out) or, as is said where science is concerned, it’s all about the quality of your questions.

On that note, I would have liked to know more about the survey of top-cited researchers. I think looking at the questions could have been quite illuminating, and I would have liked some information on where (geographically and by area of specialization) most of the answers came from. In keeping with past practice (2012 assessment published in 2013), there is no additional information offered about the survey questions or results. Still, there was this (from the report released April 10, 2018; Note: There may be some difference between the formatting seen here and that seen in the document),

3.1.2 International Perceptions of Canadian Research
As with the 2012 S&T report, the CCA commissioned a survey of top-cited researchers’ perceptions of Canada’s research strength in their field or subfield relative to that of other countries (Section 1.3.2). Researchers were asked to identify the top five countries in their field and subfield of expertise: 36% of respondents (compared with 37% in the 2012 survey) from across all fields of research rated Canada in the top five countries in their field (Figure B.1 and Table B.1 in the appendix). Canada ranks fourth out of all countries, behind the United States, United Kingdom, and Germany, and ahead of France. This represents a change of about 1 percentage point from the overall results of the 2012 S&T survey. There was a 4 percentage point decrease in how often France is ranked among the top five countries; the ordering of the top five countries, however, remains the same.

When asked to rate Canada’s research strength among other advanced countries in their field of expertise, 72% (4,005) of respondents rated Canadian research as “strong” (corresponding to a score of 5 or higher on a 7-point scale) compared with 68% in the 2012 S&T survey (Table 3.4). [pp. 40-41 Print; pp. 78-79 PDF]

Before I forget, there was mention of the international research scene,

Growth in research output, as estimated by number of publications, varies considerably for the 20 top countries. Brazil, China, India, Iran, and South Korea have had the most significant increases in publication output over the last 10 years. [emphases mine] In particular, the dramatic increase in China’s output means that it is closing the gap with the United States. In 2014, China’s output was 95% of that of the United States, compared with 26% in 2003. [emphasis mine]

Table 3.2 shows the Growth Index (GI), a measure of the rate at which the research output for a given country changed between 2003 and 2014, normalized by the world growth rate. If a country’s growth in research output is higher than the world average, the GI score is greater than 1.0. For example, between 2003 and 2014, China’s GI score was 1.50 (i.e., 50% greater than the world average) compared with 0.88 and 0.80 for Canada and the United States, respectively. Note that the dramatic increase in publication production of emerging economies such as China and India has had a negative impact on Canada’s rank and GI score (see CCA, 2016).
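The GI described above can be sketched in a few lines. Since the report doesn't spell out its exact formula, the ratio-of-growth-rates version below is an assumption, and the numbers are illustrative only, not the report's underlying data:

```python
# Growth Index (GI) sketch: a country's publication growth between
# 2003 and 2014, normalized by world growth over the same period.
# A GI above 1.0 means the country grew faster than the world average.

def growth_index(pubs_2003, pubs_2014, world_2003, world_2014):
    country_growth = pubs_2014 / pubs_2003
    world_growth = world_2014 / world_2003
    return country_growth / world_growth

# Illustrative numbers: if world output grew 1.6x while a country's
# output grew 2.4x, its GI is 1.5, i.e., 50% above the world average
# (the score the report gives China for 2003-2014).
print(round(growth_index(100, 240, 1000, 1600), 2))  # 1.5
```

On this reading, Canada's 0.88 simply means our output grew, but about 12% more slowly than the world's.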

As long as I’ve been blogging (10 years), the international research community (in particular the US) has been looking over its shoulder at China.

Patents and intellectual property

As an inventor, Hedy got more than one patent. Much has been made of the fact that, despite an agreement, the US Navy did not pay her or her partner (George Antheil) for work that would lead to significant military use (apparently, it was instrumental in the Bay of Pigs incident, for those familiar with that bit of history) and, later, to GPS, WiFi, Bluetooth, and more.
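As background on that invention: the Lamarr–Antheil patent described frequency hopping, in which transmitter and receiver step through a shared, pre-agreed pseudo-random sequence of radio channels so that a jammer who doesn't know the sequence can't follow the signal (the original patent used 88 frequencies, matching a player-piano roll). A minimal sketch, with arbitrary illustrative channel counts and seeds:

```python
import random

# Frequency-hopping sketch: both ends derive the same pseudo-random
# channel sequence from a shared secret seed, so they hop in lockstep
# while an eavesdropper without the seed cannot predict the next channel.

def hop_sequence(shared_seed, n_channels, n_hops):
    rng = random.Random(shared_seed)
    return [rng.randrange(n_channels) for _ in range(n_hops)]

tx_hops = hop_sequence(shared_seed=88, n_channels=79, n_hops=5)
rx_hops = hop_sequence(shared_seed=88, n_channels=79, n_hops=5)
assert tx_hops == rx_hops  # transmitter and receiver stay in sync
```

Bluetooth's hopping across 79 channels is a direct descendant of this idea, which is why Lamarr and Antheil keep being credited as its foundation.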

Some comments about patents. They are meant to encourage innovation by ensuring that creators/inventors get paid for their efforts. This is true for a set time period and, when it's over, other people get access and can innovate further. A patent is not intended to be a lifelong (or inheritable) source of income. The issue in Lamarr's case is that the navy developed the technology during the patent's term without telling either her or her partner so, of course, it never compensated them despite the original agreement. They really should have paid her and Antheil.

The current patent situation, particularly in the US, is vastly different from the original vision. These days patents are often used as weapons designed to halt innovation. One item that should be noted is that the Canadian federal budget indirectly addressed their misuse (from my March 16, 2018 posting),

Surprisingly, no one else seems to have mentioned a new (?) intellectual property strategy introduced in the document (from Chapter 2: Progress; scroll down about 80% of the way, Note: The formatting has been changed),

Budget 2018 proposes measures in support of a new Intellectual Property Strategy to help Canadian entrepreneurs better understand and protect intellectual property, and get better access to shared intellectual property.

What Is a Patent Collective?
A Patent Collective is a way for firms to share, generate, and license or purchase intellectual property. The collective approach is intended to help Canadian firms ensure a global “freedom to operate”, mitigate the risk of infringing a patent, and aid in the defence of a patent infringement suit.

Budget 2018 proposes to invest $85.3 million over five years, starting in 2018–19, with $10 million per year ongoing, in support of the strategy. The Minister of Innovation, Science and Economic Development will bring forward the full details of the strategy in the coming months, including the following initiatives to increase the intellectual property literacy of Canadian entrepreneurs, and to reduce costs and create incentives for Canadian businesses to leverage their intellectual property:

  • To better enable firms to access and share intellectual property, the Government proposes to provide $30 million in 2019–20 to pilot a Patent Collective. This collective will work with Canada’s entrepreneurs to pool patents, so that small and medium-sized firms have better access to the critical intellectual property they need to grow their businesses.
  • To support the development of intellectual property expertise and legal advice for Canada’s innovation community, the Government proposes to provide $21.5 million over five years, starting in 2018–19, to Innovation, Science and Economic Development Canada. This funding will improve access for Canadian entrepreneurs to intellectual property legal clinics at universities. It will also enable the creation of a team in the federal government to work with Canadian entrepreneurs to help them develop tailored strategies for using their intellectual property and expanding into international markets.
  • To support strategic intellectual property tools that enable economic growth, Budget 2018 also proposes to provide $33.8 million over five years, starting in 2018–19, to Innovation, Science and Economic Development Canada, including $4.5 million for the creation of an intellectual property marketplace. This marketplace will be a one-stop, online listing of public sector-owned intellectual property available for licensing or sale to reduce transaction costs for businesses and researchers, and to improve Canadian entrepreneurs’ access to public sector-owned intellectual property.

The Government will also consider further measures, including through legislation, in support of the new intellectual property strategy.

Helping All Canadians Harness Intellectual Property
Intellectual property is one of our most valuable resources, and every Canadian business owner should understand how to protect and use it.

To better understand what groups of Canadians are benefiting the most from intellectual property, Budget 2018 proposes to provide Statistics Canada with $2 million over three years to conduct an intellectual property awareness and use survey. This survey will help identify how Canadians understand and use intellectual property, including groups that have traditionally been less likely to use intellectual property, such as women and Indigenous entrepreneurs. The results of the survey should help the Government better meet the needs of these groups through education and awareness initiatives.

The Canadian Intellectual Property Office will also increase the number of education and awareness initiatives that are delivered in partnership with business, intermediaries and academia to ensure Canadians better understand, integrate and take advantage of intellectual property when building their business strategies. This will include targeted initiatives to support underrepresented groups.

Finally, Budget 2018 also proposes to invest $1 million over five years to enable representatives of Canada’s Indigenous Peoples to participate in discussions at the World Intellectual Property Organization related to traditional knowledge and traditional cultural expressions, an important form of intellectual property.

It’s not wholly clear what they mean by ‘intellectual property’. The focus seems to be on patents, as they are the only form of intellectual property (as opposed to copyright and trademarks) singled out in the budget. As for how the ‘patent collective’ is going to meet all its objectives, this budget supplies no clarity on the matter. On the plus side, I’m glad to see that indigenous peoples’ knowledge is being acknowledged as “an important form of intellectual property” and I hope the discussions at the World Intellectual Property Organization are fruitful.

As for the patent situation in Canada (from the report released April 10, 2018),

Over the past decade, the Canadian patent flow in all technical sectors has consistently decreased. Patent flow provides a partial picture of how patents in Canada are exploited. A negative flow represents a deficit of patented inventions owned by Canadian assignees versus the number of patented inventions created by Canadian inventors. The patent flow for all Canadian patents decreased from about −0.04 in 2003 to −0.26 in 2014 (Figure 4.7). This means that there is an overall deficit of 26% of patent ownership in Canada. In other words, fewer patents were owned by Canadian institutions than were invented in Canada.

This is a significant change from 2003 when the deficit was only 4%. The drop is consistent across all technical sectors in the past 10 years, with Mechanical Engineering falling the least, and Electrical Engineering the most (Figure 4.7). At the technical field level, the patent flow dropped significantly in Digital Communication and Telecommunications. For example, the Digital Communication patent flow fell from 0.6 in 2003 to −0.2 in 2014. This fall could be partially linked to Nortel’s US$4.5 billion patent sale [emphasis mine] to the Rockstar consortium (which included Apple, BlackBerry, Ericsson, Microsoft, and Sony) (Brickley, 2011). Food Chemistry and Microstructural [?] and Nanotechnology both also showed a significant drop in patent flow. [p. 83 Print; p. 121 PDF]
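The patent flow measure quoted above can be sketched similarly. The normalization below (relative to inventor-created patents) is my assumption, as the report doesn't give its exact formula, and the numbers are illustrative:

```python
# Patent flow sketch, per the report's description: a negative flow
# means fewer patented inventions are owned by Canadian assignees
# than are created by Canadian inventors.

def patent_flow(owned_by_country, invented_in_country):
    return (owned_by_country - invented_in_country) / invented_in_country

# Illustrative: 74 Canadian-owned patents for every 100 invented in
# Canada gives a flow of -0.26, the 26% ownership deficit the report
# cites for 2014; 96 per 100 gives the -0.04 it cites for 2003.
print(patent_flow(74, 100))  # -0.26
```

In other words, the metric tracks who ends up owning the inventions, not how many are made, which is why Canada can invent plenty and still run a growing deficit.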

Despite a fall in the number of patents for ‘Digital Communication’, we’re still doing well according to statistics elsewhere in this report. Is it possible that patents aren’t that big a deal? Of course, it’s also possible that we are enjoying the benefits of past work and will miss out on future work. (Note: A video of the April 10, 2018 report presentation by Max Blouw features him saying something like that.)

One last note, Nortel died many years ago. Disconcertingly, this report, despite more than one reference to Nortel, never mentions the company’s demise.

Boxed text

While the expert panel wasn’t tasked to answer certain types of questions, as I’ve noted earlier, they managed to sneak in a few items. One of the strategies they used was putting special inserts into text boxes, including this (from the report released April 10, 2018),

Box 4.2
The FinTech Revolution

Financial services is a key industry in Canada. In 2015, the industry accounted for 4.4% of Canadian jobs and about 7% of Canadian GDP (Burt, 2016). Toronto is the second largest financial services hub in North America and one of the most vibrant research hubs in FinTech. Since 2010, more than 100 start-up companies have been founded in Canada, attracting more than $1 billion in investment (Moffatt, 2016). In 2016 alone, venture-backed investment in Canadian financial technology companies grew by 35% to $137.7 million (Ho, 2017). The Toronto Financial Services Alliance estimates that there are approximately 40,000 ICT specialists working in financial services in Toronto alone.

AI, blockchain, [emphasis mine] and other results of ICT research provide the basis for several transformative FinTech innovations including, for example, decentralized transaction ledgers, cryptocurrencies (e.g., bitcoin), and AI-based risk assessment and fraud detection. These innovations offer opportunities to develop new markets for established financial services firms, but also provide entry points for technology firms to develop competing service offerings, increasing competition in the financial services industry. In response, many financial services companies are increasing their investments in FinTech companies (Breznitz et al., 2015). By their own account, the big five banks invest more than $1 billion annually in R&D of advanced software solutions, including AI-based innovations (J. Thompson, personal communication, 2016). The banks are also increasingly investing in university research and collaboration with start-up companies. For instance, together with several large insurance and financial management firms, all big five banks have invested in the Vector Institute for Artificial Intelligence (Kolm, 2017).

I’m glad to see the mention of blockchain while AI (artificial intelligence) is an area where we have innovated (from the report released April 10, 2018),

AI has attracted researchers and funding since the 1960s; however, there were periods of stagnation in the 1970s and 1980s, sometimes referred to as the “AI winter.” During this period, the Canadian Institute for Advanced Research (CIFAR), under the direction of Fraser Mustard, started supporting AI research with a decade-long program called Artificial Intelligence, Robotics and Society, [emphasis mine] which was active from 1983 to 1994. In 2004, a new program called Neural Computation and Adaptive Perception was initiated and renewed twice in 2008 and 2014 under the title, Learning in Machines and Brains. Through these programs, the government provided long-term, predictable support for high-risk research that propelled Canadian researchers to the forefront of global AI development. In the 1990s and early 2000s, Canadian research output and impact on AI were second only to that of the United States (CIFAR, 2016). NSERC has also been an early supporter of AI. According to its searchable grant database, NSERC has given funding to research projects on AI since at least 1991–1992 (the earliest searchable year) (NSERC, 2017a).

The University of Toronto, the University of Alberta, and the Université de Montréal have emerged as international centres for research in neural networks and deep learning, with leading experts such as Geoffrey Hinton and Yoshua Bengio. Recently, these locations have expanded into vibrant hubs for research in AI applications with a diverse mix of specialized research institutes, accelerators, and start-up companies, and growing investment by major international players in AI development, such as Microsoft, Google, and Facebook. Many highly influential AI researchers today are either from Canada or have at some point in their careers worked at a Canadian institution or with Canadian scholars.

As international opportunities in AI research and the ICT industry have grown, many of Canada’s AI pioneers have been drawn to research institutions and companies outside of Canada. According to the OECD, Canada’s share of patents in AI declined from 2.4% in 2000 to 2005 to 2% in 2010 to 2015. Although Canada is the sixth largest producer of top-cited scientific publications related to machine learning, firms headquartered in Canada accounted for only 0.9% of all AI-related inventions from 2012 to 2014 (OECD, 2017c). Canadian AI researchers, however, remain involved in the core nodes of an expanding international network of AI researchers, most of whom continue to maintain ties with their home institutions. Compared with their international peers, Canadian AI researchers are engaged in international collaborations far more often than would be expected by Canada’s level of research output, with Canada ranking fifth in collaboration. [p. 97-98 Print; p. 135-136 PDF]

The only mention of robotics seems to be here in this section and it’s only in passing. This is a bit surprising given robotics’ global importance. I wonder if robotics has somehow been hidden inside the term artificial intelligence, although sometimes it’s vice versa, with ‘robot’ being used to describe artificial intelligence. I’m noticing this trend of treating the terms as synonymous or interchangeable not just in Canadian publications but elsewhere too. ’nuff said.

Getting back to the matter at hand, the report does note that patenting (technometric data) is problematic (from the report released April 10, 2018),

The limitations of technometric data stem largely from their restricted applicability across areas of R&D. Patenting, as a strategy for IP management, is similarly limited in not being equally relevant across industries. Trends in patenting can also reflect commercial pressures unrelated to R&D activities, such as defensive or strategic patenting practices. Finally, taxonomies for assessing patents are not aligned with bibliometric taxonomies, though links can be drawn to research publications through the analysis of patent citations. [p. 105 Print; p. 143 PDF]

It’s interesting to me that they make reference to many of the same issues I mention, but they seem to forget that information and don’t use it in their conclusions.

There is one other piece of boxed text I want to highlight (from the report released April 10, 2018),

Box 6.3
Open Science: An Emerging Approach to Create New Linkages

Open Science is an umbrella term to describe collaborative and open approaches to undertaking science, which can be powerful catalysts of innovation. This includes the development of open collaborative networks among research performers, such as the private sector, and the wider distribution of research that usually results when restrictions on use are removed. Such an approach triggers faster translation of ideas among research partners and moves the boundaries of pre-competitive research to later, applied stages of research. With research results freely accessible, companies can focus on developing new products and processes that can be commercialized.

Two Canadian organizations exemplify the development of such models. In June 2017, Genome Canada, the Ontario government, and pharmaceutical companies invested $33 million in the Structural Genomics Consortium (SGC) (Genome Canada, 2017). Formed in 2004, the SGC is at the forefront of the Canadian open science movement and has contributed to many key research advancements towards new treatments (SGC, 2018). McGill University’s Montréal Neurological Institute and Hospital has also embraced the principles of open science. Since 2016, it has been sharing its research results with the scientific community without restriction, with the objective of expanding “the impact of brain research and accelerat[ing] the discovery of ground-breaking therapies to treat patients suffering from a wide range of devastating neurological diseases” (neuro, n.d.).

This is exciting stuff and I’m happy the panel featured it. (I wrote about the Montréal Neurological Institute initiative in a Jan. 22, 2016 posting.)

More than once, the report notes the difficulties of using bibliometric and technometric data as measures of scientific achievement and progress, and open science (along with its cousins, open data and open access) is contributing to those difficulties, as James Somers notes in his April 5, 2018 article ‘The Scientific Paper is Obsolete’ for The Atlantic (Note: Links have been removed),

The scientific paper—the actual form of it—was one of the enabling inventions of modernity. Before it was developed in the 1600s, results were communicated privately in letters, ephemerally in lectures, or all at once in books. There was no public forum for incremental advances. By making room for reports of single experiments or minor technical advances, journals made the chaos of science accretive. Scientists from that point forward became like the social insects: They made their progress steadily, as a buzzing mass.

The earliest papers were in some ways more readable than papers are today. They were less specialized, more direct, shorter, and far less formal. Calculus had only just been invented. Entire data sets could fit in a table on a single page. What little “computation” contributed to the results was done by hand and could be verified in the same way.

The more sophisticated science becomes, the harder it is to communicate results. Papers today are longer than ever and full of jargon and symbols. They depend on chains of computer programs that generate data, and clean up data, and plot data, and run statistical models on data. These programs tend to be both so sloppily written and so central to the results that it’s [sic] contributed to a replication crisis, or put another way, a failure of the paper to perform its most basic task: to report what you’ve actually discovered, clearly enough that someone else can discover it for themselves.

Perhaps the paper itself is to blame. Scientific methods evolve now at the speed of software; the skill most in demand among physicists, biologists, chemists, geologists, even anthropologists and research psychologists, is facility with programming languages and “data science” packages. And yet the basic means of communicating scientific results hasn’t changed for 400 years. Papers may be posted online, but they’re still text and pictures on a page.

What would you get if you designed the scientific paper from scratch today? A little while ago I spoke to Bret Victor, a researcher who worked at Apple on early user-interface prototypes for the iPad and now runs his own lab in Oakland, California, that studies the future of computing. Victor has long been convinced that scientists haven’t yet taken full advantage of the computer. “It’s not that different than looking at the printing press, and the evolution of the book,” he said. After Gutenberg, the printing press was mostly used to mimic the calligraphy in bibles. It took nearly 100 years of technical and conceptual improvements to invent the modern book. “There was this entire period where they had the new technology of printing, but they were just using it to emulate the old media.”

Victor gestured at what might be possible when he redesigned a journal article by Duncan Watts and Steven Strogatz, “Collective dynamics of ‘small-world’ networks.” He chose it both because it’s one of the most highly cited papers in all of science and because it’s a model of clear exposition. (Strogatz is best known for writing the beloved “Elements of Math” column for The New York Times.)

The Watts-Strogatz paper described its key findings the way most papers do, with text, pictures, and mathematical symbols. And like most papers, these findings were still hard to swallow, despite the lucid prose. The hardest parts were the ones that described procedures or algorithms, because these required the reader to “play computer” in their head, as Victor put it, that is, to strain to maintain a fragile mental picture of what was happening with each step of the algorithm.

Victor’s redesign interleaved the explanatory text with little interactive diagrams that illustrated each step. In his version, you could see the algorithm at work on an example. You could even control it yourself….

For anyone interested in the evolution of how science is conducted and communicated, Somers’ article is a fascinating and in-depth look at future possibilities.

Subregional R&D

I didn’t find this quite as compelling as the last time, which may be because there’s less information; I think the 2012 report was the first to examine the Canadian R&D scene with a subregional (in that case, provincial) lens. On a high note, this report also covers cities (!) and regions, as well as provinces.

Here’s the conclusion (from the report released April 10, 2018),

Ontario leads Canada in R&D investment and performance. The province accounts for almost half of R&D investment and personnel, research publications and collaborations, and patents. R&D activity in Ontario produces high-quality publications in each of Canada’s five R&D strengths, reflecting both the quantity and quality of universities in the province. Quebec lags Ontario in total investment, publications, and patents, but performs as well (citations) or better (R&D intensity) by some measures. Much like Ontario, Quebec researchers produce impactful publications across most of Canada’s five R&D strengths. Although it invests an amount similar to that of Alberta, British Columbia does so at a significantly higher intensity. British Columbia also produces more highly cited publications and patents, and is involved in more international research collaborations. R&D in British Columbia and Alberta clusters around Vancouver and Calgary in areas such as physics and ICT and in clinical medicine and energy, respectively. [emphasis mine] Smaller but vibrant R&D communities exist in the Prairies and Atlantic Canada [also referred to as the Maritime provinces or Maritimes] (and, to a lesser extent, in the Territories) in natural resource industries.

Globally, as urban populations expand exponentially, cities are likely to drive innovation and wealth creation at an increasing rate in the future. In Canada, R&D activity clusters around five large cities: Toronto, Montréal, Vancouver, Ottawa, and Calgary. These five cities create patents and high-tech companies at nearly twice the rate of other Canadian cities. They also account for half of clusters in the services sector, and many in advanced manufacturing.

Many clusters relate to natural resources and long-standing areas of economic and research strength. Natural resource clusters have emerged around the location of resources, such as forestry in British Columbia, oil and gas in Alberta, agriculture in Ontario, mining in Quebec, and maritime resources in Atlantic Canada. The automotive, plastics, and steel industries have the most individual clusters as a result of their economic success in Windsor, Hamilton, and Oshawa. Advanced manufacturing industries tend to be more concentrated, often located near specialized research universities. Strong connections between academia and industry are often associated with these clusters. R&D activity is distributed across the country, varying both between and within regions. It is critical to avoid drawing the wrong conclusion from this fact. This distribution does not imply the existence of a problem that needs to be remedied. Rather, it signals the benefits of diverse innovation systems, with differentiation driven by the needs of and resources available in each province. [pp.  132-133 Print; pp. 170-171 PDF]

Intriguingly, there’s no mention of British Columbia’s (BC) leading areas of research: Visual & Performing Arts, Psychology & Cognitive Sciences, and Clinical Medicine (according to the table on p. 117 Print, p. 153 PDF).

As I said and hinted earlier, we’ve got brains; they’re just not the kind of brains that command respect.

Final comments

My hat’s off to the expert panel and staff of the Council of Canadian Academies. Combining two previous reports into one could not have been easy. As well, kudos for their attempts to broaden the discussion by mentioning initiatives such as open science and for emphasizing the problems with bibliometrics, technometrics, and other measures. I have covered only parts of this assessment (Competing in a Global Innovation Economy: The Current State of R&D in Canada); there’s a lot more to it, including a substantive list of reference materials (bibliography).

While I have argued that perhaps the situation isn’t quite as bad as the headlines and statistics suggest, there are some concerning trends for Canadians. We also have to acknowledge that many countries have stepped up their research game and that’s good for all of us; you don’t get better at anything unless you work and play with others who are better than you are. For example, both India and Italy surpassed us in numbers of published research papers; we slipped from 7th place to 9th. Thank you, Italy and India. (And, Happy ‘Italian Research in the World Day’ on April 15, 2018, in its inaugural year. In Italian: Piano Straordinario “Vivere all’Italiana” – Giornata della ricerca Italiana nel mondo.)

Unfortunately, the reading is harder going than previous R&D assessments in the CCA catalogue. And in the end, I can’t help thinking we’re just a little bit like Hedy Lamarr. Not really appreciated in all of our complexities although the expert panel and staff did try from time to time. Perhaps the government needs to find better ways of asking the questions.

***ETA April 12, 2018 at 1500 PDT: Talk about missing the obvious! I’ve been ranting on about how research strength in the visual and performing arts, philosophy and theology, etc. is perfectly fine and could lead to ‘traditional’ science breakthroughs, without underlining the point by noting that Antheil was a musician and Lamarr was an actress, and that they set the foundation for the work by electrical engineers (or people with that specialty) leading to WiFi, etc.***

There is, by the way, a Hedy-Canada connection. In 1998, she sued Canadian software company Corel for its unauthorized use of her image on their Corel Draw 8 product packaging. She won.

More stuff

For those who’d like to see and hear the April 10, 2018 launch for “Competing in a Global Innovation Economy: The Current State of R&D in Canada” or the Third Assessment as I think of it, go here.

The report can be found here.

For anyone curious about ‘Bombshell: The Hedy Lamarr Story’ to be broadcast on May 18, 2018 as part of PBS’s American Masters series, there’s this trailer,

For the curious, I did find out more about Hedy Lamarr and Corel Draw. John Lettice’s December 2, 1998 article for The Register describes the suit and her subsequent victory in less than admiring terms,

Our picture doesn’t show glamorous actress Hedy Lamarr, who yesterday [Dec. 1, 1998] came to a settlement with Corel over the use of her image on Corel’s packaging. But we suppose that following the settlement we could have used a picture of Corel’s packaging. Lamarr sued Corel earlier this year over its use of a CorelDraw image of her. The picture had been produced by John Corkery, who was 1996 Best of Show winner of the Corel World Design Contest. Corel now seems to have come to an undisclosed settlement with her, which includes a five-year exclusive (oops — maybe we can’t use the pack-shot then) licence to use “the lifelike vector illustration of Hedy Lamarr on Corel’s graphic software packaging”. Lamarr, bless ‘er, says she’s looking forward to the continued success of Corel Corporation,  …

There’s this excerpt from a Sept. 21, 2015 posting (a pictorial essay of Lamarr’s life) by Shahebaz Khan on The Blaze Blog,

6. CorelDRAW:
For several years beginning in 1997, the boxes of Corel DRAW’s software suites were graced by a large Corel-drawn image of Lamarr. The picture won Corel DRAW’s yearly software suite cover design contest in 1996. Lamarr sued Corel for using the image without her permission. Corel countered that she did not own rights to the image. The parties reached an undisclosed settlement in 1998.

There’s also a Nov. 23, 1998 Corel Draw 8 product review by Mike Gorman on mymac.com, which includes a screenshot of the packaging that precipitated the lawsuit. Once they settled, it seems Corel used her image at least one more time.

2017 proceedings for the Canadian Science Policy Conference

I received (via email) a December 11, 2017 notice from the Canadian Science Policy Centre that the 2017 Proceedings for the ninth annual conference (Nov. 1 – 3, 2017 in Ottawa, Canada) can now be accessed,

The Canadian Science Policy Centre is pleased to present you the Proceedings of CSPC 2017. Check out the reports and takeaways for each panel session, which have been carefully drafted by a group of professional writers. You can also listen to the audio recordings and watch the available videos. The proceedings page will provide you with the opportunity to immerse yourself in all of the discussions at the conference. Feel free to share the ones you like! Also, check out the CSPC 2017 reports, analyses, and stats in the proceedings.

Click here for the CSPC 2017 Proceedings

CSPC 2017 Interviews

Take a look at the 70+ one-on-one interviews with prominent figures of science policy. The interviews were conducted by the great team of CSPC 2017 volunteers. The interviews feature in-depth perspectives about the conference, panels, and new up and coming projects.

Click here for the CSPC 2017 interviews

Amongst many others, you can find a video of Governor General Julie Payette's notorious remarks at the opening ceremonies, which I highlighted in my November 3, 2017 posting about this year's conference.

The proceedings are organized by day with links to individual pages for each session held that day. Here’s a sample of what is offered on Day 1: Artificial Intelligence and Discovery Science: Playing to Canada’s Strengths,

Artificial Intelligence and Discovery Science: Playing to Canada’s Strengths

Conference Day:
Day 1 – November 1st 2017

Organized by: Friends of the Canadian Institutes of Health Research

Keynote: Alan Bernstein, President and CEO, CIFAR, 2017 Henry G. Friesen International Prizewinner

Speakers: Brenda Andrews, Director, Andrews Lab, University of Toronto; Doina Precup, Associate Professor, McGill University; Dr Rémi Quirion, Chief Scientist of Quebec; Linda Rabeneck, Vice President, Prevention and Cancer Control, Cancer Care Ontario; Peter Zandstra, Director, School of Biomedical Engineering, University of British Columbia

Discussants: Henry Friesen, Professor Emeritus, University of Manitoba; Roderick McInnes, Acting President, Canadian Institutes of Health Research and Director, Lady Davis Institute, Jewish General Hospital, McGill University; Duncan J. Stewart, CEO and Scientific Director, Ottawa Hospital Research Institute; Vivek Goel, Vice President, Research and Innovation, University of Toronto

Moderators: Eric Meslin, President & CEO, Council of Canadian Academies; André Picard, Health Reporter and Columnist, The Globe and Mail

Takeaways and recommendations:

The opportunity for Canada

  • The potential impact of artificial intelligence (AI) could be as significant as the industrial revolution of the 19th century.
  • Canada’s global advantage in deep learning (a subset of machine learning) stems from the pioneering work of Geoffrey Hinton and early support from CIFAR and NSERC.
  • AI could mark a turning point in Canada’s innovation performance, fueled by the highest levels of venture capital financing in nearly a decade, and underpinned by publicly funded research at the federal, provincial and institutional levels.
  • The Canadian AI advantage can only be fully realized by developing and importing skilled talent, accessible markets, capital and companies willing to adopt new technologies into existing industries.
  • Canada leads in the combination of functional genomics and machine learning which is proving effective for predicting the functional variation in genomes.
  • AI promises advances in biomedical engineering by connecting chronic diseases – the largest health burden in Canada – to gene regulatory networks by understanding how stem cells make decisions.
  • AI can be effectively deployed to evaluate health and health systems in the general population.

The challenges

  • AI brings potential ethical and economic perils and requires a watchdog to oversee standards, engage in fact-based debate and prepare for the potential backlash over job losses to robots.
  • The ethical, environmental, economic, legal and social (GE3LS) aspects of genomics have been largely marginalized, and it’s important not to make the same mistake with AI.
  • AI’s rapid scientific development makes it difficult to keep pace with safeguards and standards.
  • The fields of AI and pattern recognition are strongly connected, but there is room for improvement.
  • Self-learning algorithms such as AlphaGo could lead to the invention of new things that humans currently don’t know how to do. The field is developing rapidly, leading to some concern over the deployment of such systems.

Training future AI professionals

  • Young researchers must be given the oxygen to excel at AI if its potential is to be realized.
  • Students appreciate the breadth of training and additional resources they receive from researchers with ties to both academia and industry.
  • The importance of continuing fundamental research in AI is being challenged by companies such as Facebook, Google and Amazon which are hiring away key talent.
  • The explosion of AI is a powerful illustration of how the importance of fundamental research may only be recognized and exploited after 20 or 30 years. As a result, support for fundamental research, and the students working in areas related to AI, must continue.

A couple of comments

To my knowledge, this is the first year the proceedings have been made so easily accessible. In fact, I can’t remember another year where they have been open access. Thank you!

Of course, I have to make a comment about the Day 2 session titled: Does Canada have a Science Culture? The answer is yes and it’s in the province of Ontario. Just take a look at the panel,

Organized by: Kirsten Vanstone, Royal Canadian Institute for Science and Reinhart Reithmeier, Professor, University of Toronto [in Ontario]

Speakers: Chantal Barriault, Director, Science Communication Graduate Program, Laurentian University [in Ontario] and Science North [in Ontario]; Maurice Bitran, CEO, Ontario Science Centre [take a wild guess as to where this institution is located?]; Kelly Bronson, Assistant Professor, Faculty of Social Sciences, University of Ottawa [in Ontario]; Marc LePage, President and CEO, Genome Canada [in Ontario]

Moderator: Ivan Semeniuk, Science Reporter, The Globe and Mail [in Ontario]

In fact, all of the institutions are in southern Ontario, even the oddly named Science North.

I know from bitter experience it’s hard to put together panels but couldn’t someone from another province have participated?

Ah well, here’s hoping for 2018 and a new location. After three years in a row with Ottawa as the CSPC site, please don’t make it a fourth.

Canadian science policy news and doings (also: some US science envoy news)

I have a couple of notices from the Canadian Science Policy Centre (CSPC), a twitter feed, and an article in an online magazine to thank for this bumper crop of news.

Canadian Science Policy Centre: the conference

The 2017 Canadian Science Policy Conference, to be held Nov. 1 – 3, 2017 in Ottawa, Ontario for the third year in a row, has a SuperSaver rate available until Sept. 3, 2017, according to an August 14, 2017 announcement (received via email).

Time is running out: you have until September 3rd before prices go up from the SuperSaver rate.

Savings off the regular price with the SuperSaver rate:
Up to 26% for General admission
Up to 29% for Academic/Non-Profit Organizations
Up to 40% for Students and Post-Docs

Before giving you the link to the registration page and assuming that you might want to check out what is on offer at the conference, here’s a link to the programme. They don’t seem to have any events celebrating Canada’s 150th anniversary although they do have a session titled, ‘The Next 150 years of Science in Canada: Embedding Equity, Delivering Diversity/Les 150 prochaine années de sciences au Canada:  Intégrer l’équité, promouvoir la diversité‘,

Enhancing equity, diversity, and inclusivity (EDI) in science, technology, engineering and math (STEM) has been described as being a human rights issue and an economic development issue by various individuals and organizations (e.g. OECD). Recent federal policy initiatives in Canada have focused on increasing participation of women (a designated under-represented group) in science through increased reporting, program changes, and institutional accountability. However, the Employment Equity Act requires employers to act to ensure the full representation of the three other designated groups: Aboriginal peoples, persons with disabilities and members of visible minorities. Significant structural and systemic barriers to full participation and employment in STEM for members of these groups still exist in Canadian institutions. Since data support the positive role of diversity in promoting innovation and economic development, failure to capture the full intellectual capacity of a diverse population limits provincial and national potential and progress in many areas. A diverse international panel of experts from designated groups will speak to the issue of accessibility and inclusion in STEM. In addition, the discussion will focus on evidence-based recommendations for policy initiatives that will promote full EDI in science in Canada to ensure local and national prosperity and progress for Canada over the next 150 years.

There’s also this list of speakers. Curiously, I don’t see Kirsty Duncan, Canada’s Minister of Science, on the list, nor do I see any other politicians in the banner for the conference website. This divergence from the CSPC’s usual approach to promoting the conference is interesting.

Moving onto the conference, the organizers have added two panels to the programme (from the announcement received via email),

Friday, November 3, 2017
10:30AM-12:00PM
Open Science and Innovation
Organizer: Tiberius Brastaviceanu
Organization: ACES-CAKE

10:30AM- 12:00PM
The Scientific and Economic Benefits of Open Science
Organizer: Arij Al Chawaf
Organization: Structural Genomics

I think this is the first time there’s been a ‘Tiberius’ on this blog and teamed with the organization’s name, well, I just had to include it.

Finally, here’s the link to the registration page and a page that details travel deals.

Canadian Science Policy Conference: a compendium of documents and articles on Canada’s Chief Science Advisor and Ontario’s Chief Scientist and the pre-2018 budget submissions

The deadline for applications for the Chief Science Advisor position was extended to Feb. 2017 and, so far, there’s no word as to whom it might be. Perhaps Minister of Science Kirsty Duncan wants to make a splash with a surprise announcement at the CSPC’s 2017 conference? As for Ontario’s Chief Scientist, this move will make the province the third (?) to have one, after Québec and Alberta. Alberta apparently has a chief scientist, but there doesn’t seem to be a government webpage for the position, and his LinkedIn profile doesn’t include the title. In any event, Dr. Fred Wrona is mentioned as Alberta’s Chief Scientist in a May 31, 2017 Alberta government announcement. *ETA Aug. 25, 2017: I missed the Yukon, which has a Senior Science Advisor. The position is currently held by Dr. Aynslie Ogden.*

Getting back to the compendium, here’s the CSPC’s A Comprehensive Collection of Publications Regarding Canada’s Federal Chief Science Advisor and Ontario’s Chief Scientist webpage. Here’s a little background provided on the page,

On June 2nd, 2017, the House of Commons Standing Committee on Finance commenced the pre-budget consultation process for the 2018 Canadian Budget. These consultations provide Canadians the opportunity to communicate their priorities with a focus on Canadian productivity in the workplace and community in addition to entrepreneurial competitiveness. Organizations from across the country submitted their priorities on August 4th, 2017 to be selected as witness for the pre-budget hearings before the Committee in September 2017. The process will result in a report to be presented to the House of Commons in December 2017 and considered by the Minister of Finance in the 2018 Federal Budget.

NEWS & ANNOUNCEMENT

House of Commons- PRE-BUDGET CONSULTATIONS IN ADVANCE OF THE 2018 BUDGET

https://www.ourcommons.ca/Committees/en/FINA/StudyActivity?studyActivityId=9571255

CANADIANS ARE INVITED TO SHARE THEIR PRIORITIES FOR THE 2018 FEDERAL BUDGET

https://www.ourcommons.ca/DocumentViewer/en/42-1/FINA/news-release/9002784

The deadline for pre-2018 budget submissions was Aug. 4, 2017, and while the hearings are to be held in September, no meetings have yet been scheduled. (People can meet with the Standing Committee on Finance in various locations across Canada to discuss their submissions.) I’m not sure where the CSPC got its list of ‘science’ submissions, but it’s definitely worth checking as there are some odd omissions, such as TRIUMF (Canada’s National Laboratory for Particle and Nuclear Physics), Genome Canada, the Pan-Canadian Artificial Intelligence Strategy, CIFAR (Canadian Institute for Advanced Research), the Perimeter Institute, Canadian Light Source, etc.

Twitter and the Naylor Report under a microscope

This news came from University of British Columbia President Santa Ono’s twitter feed,

 I will join Jon [sic] Borrows and Janet Rossant on Sept 19 in Ottawa at a Mindshare event to discuss the importance of the Naylor Report

The Mindshare event Ono is referring to is being organized by Universities Canada (formerly the Association of Universities and Colleges of Canada) and the Institute for Research on Public Policy. It is titled, ‘The Naylor report under the microscope’. Here’s more from the event webpage,

Join Universities Canada and Policy Options for a lively discussion moderated by editor-in-chief Jennifer Ditchburn on the report from the Fundamental Science Review Panel and why research matters to Canadians.

Moderator

Jennifer Ditchburn, editor, Policy Options.

Jennifer Ditchburn

Editor-in-chief, Policy Options

Jennifer Ditchburn is the editor-in-chief of Policy Options, the online policy forum of the Institute for Research on Public Policy.  An award-winning parliamentary correspondent, Jennifer began her journalism career at the Canadian Press in Montreal as a reporter-editor during the lead-up to the 1995 referendum.  From 2001 to 2006 she was a national reporter with CBC TV on Parliament Hill, and in 2006 she returned to the Canadian Press.  She is a three-time winner of a National Newspaper Award:  twice in the politics category, and once in the breaking news category. In 2015 she was awarded the prestigious Charles Lynch Award for outstanding coverage of national issues. Jennifer has been a frequent contributor to television and radio public affairs programs, including CBC’s Power and Politics, the “At Issue” panel, and The Current. She holds a bachelor of arts from Concordia University, and a master of journalism from Carleton University.

@jenditchburn

Tuesday, September 19, 2017

12-2 pm

Fairmont Château Laurier, Laurier Room
1 Rideau Street, Ottawa

rsvp@univcan.ca

I can’t tell if they’re offering lunch or if there is a cost associated with this event so you may want to contact the organizers.

As for the Naylor report, I posted a three-part series on June 8, 2017, which features my comments and the other comments I was able to find on the report:

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 1 of 3

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 2 of 3

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 3 of 3

One piece not mentioned in my three-part series is Paul Wells’ provocatively titled June 29, 2017 article for MacLean’s magazine, Why Canadian scientists aren’t happy (Note: Links have been removed),

Much hubbub this morning over two interviews Kirsty Duncan, the science minister, has given the papers. The subject is Canada’s Fundamental Science Review, commonly called the Naylor Report after David Naylor, the former University of Toronto president who was its main author.

Other authors include BlackBerry founder Mike Lazaridis, who has bankrolled much of the Waterloo renaissance, and Canadian Nobel physicist Arthur McDonald. It’s as blue-chip as a blue-chip panel could be.

Duncan appointed the panel a year ago. It’s her panel, delivered by her experts. Why does it not seem to be… getting anywhere? Why does it seem to have no champion in government? Therein lies a tale.

Note, first, that Duncan’s interviews—her first substantive comment on the report’s recommendations!—come nearly three months after its April release, which in turn came four months after Duncan asked Naylor to deliver his report, last December. (By March I had started to make fun of the Trudeau government in print for dragging its heels on the report’s release. That column was not widely appreciated in the government, I’m told.)

Anyway, the report was released, at an event attended by no representative of the Canadian government. Here’s the gist of what I wrote at the time:
Naylor’s “single most important recommendation” is a “rapid increase” in federal spending on “independent investigator-led research” instead of the “priority-driven targeted research” that two successive federal governments, Trudeau’s and Stephen Harper’s, have preferred in the last 8 or 10 federal budgets.

In English: Trudeau has imitated Harper in favouring high-profile, highly targeted research projects, on areas of study selected by political staffers in Ottawa, that are designed to attract star researchers from outside Canada so they can bolster the image of Canada as a research destination.

That’d be great if it wasn’t achieved by pruning budgets for the less spectacular research that most scientists do.

Naylor has numbers. “Between 2007-08 and 2015-16, the inflation-adjusted budgetary envelope for investigator-led research fell by 3 per cent while that for priority-driven research rose by 35 per cent,” he and his colleagues write. “As the number of researchers grew during this period, the real resources available per active researcher to do investigator-led research declined by about 35 per cent.”

And that’s not even taking into account the way two new programs—the $10-million-per-recipient Canada Excellence Research Chairs and the $1.5 billion Canada First Research Excellence Fund—are “further concentrating resources in the hands of smaller numbers of individuals and institutions.”

That’s the context for Duncan’s remarks. In the Globe, she says she agrees with Naylor on “the need for a research system that promotes equity and diversity, provides a better entry for early career researchers and is nimble in response to new scientific opportunities.” But she also “disagreed” with the call for a national advisory council that would give expert advice on the government’s entire science, research and innovation policy.

This is an asinine statement. When taking three months to read a report, it’s a good idea to read it. There is not a single line in Naylor’s overlong report that calls for the new body to make funding decisions. Its proposed name is NACRI, for National Advisory Council on Research and Innovation. A for Advisory. Its responsibilities, listed on Page 19 if you’re reading along at home, are restricted to “advice… evaluation… public reporting… advice… advice.”

Duncan also didn’t promise to meet Naylor’s requested funding levels: $386 million for research in the first year, growing to $1.3 billion in new money in the fourth year. That’s a big concern for researchers, who have been warning for a decade that two successive governments—Harper’s and Trudeau’s—have been more interested in building new labs than in ensuring there’s money to do research in them.

The minister has talking points. She gave the same answer to both reporters about whether Naylor’s recommendations will be implemented in time for the next federal budget. “It takes time to turn the Queen Mary around,” she said. Twice. I’ll say it does: She’s reacting three days before Canada Day to a report that was written before Christmas. Which makes me worry when she says elected officials should be in charge of being nimble.

Here’s what’s going on.

The Naylor report represents Canadian research scientists’ side of a power struggle. The struggle has been continuing since Jean Chrétien left office. After early cuts, he presided for years over very large increases to the budgets of the main science granting councils. But since 2003, governments have preferred to put new funding dollars to targeted projects in applied sciences. …

Naylor wants that trend reversed, quickly. He is supported in that call by a frankly astonishingly broad coalition of university administrators and working researchers, who until his report were more often at odds. So you have the group representing Canada’s 15 largest research universities and the group representing all universities and a new group representing early-career researchers and, as far as I can tell, every Canadian scientist on Twitter. All backing Naylor. All fundamentally concerned that new money for research is of no particular interest if it does not back the best science as chosen by scientists, through peer review.

The competing model, the one preferred by governments of all stripes, might best be called superclusters. Very large investments into very large projects with loosely defined scientific objectives, whose real goal is to retain decorated veteran scientists and to improve the Canadian high-tech industry. Vast and sprawling labs and tech incubators, cabinet ministers nodding gravely as world leaders in sexy trendy fields sketch the golden path to Jobs of Tomorrow.

You see the imbalance. On one side, ribbons to cut. On the other, nerds experimenting on tapeworms. Kirsty Duncan, a shaky political performer, transparently a junior minister to the supercluster guy, with no deputy minister or department reporting to her, is in a structurally weak position: her title suggests she’s science’s emissary to the government, but she is not equipped to be anything more than government’s emissary to science.

A government that consistently buys into the market for intellectual capital at the very top of the price curve is a factory for producing white elephants. But don’t take my word for it. Ask Geoffrey Hinton [University of Toronto’s Geoffrey Hinton, a Canadian leader in machine learning].

“There is a lot of pressure to make things more applied; I think it’s a big mistake,” he said in 2015. “In the long run, curiosity-driven research just works better… Real breakthroughs come from people focusing on what they’re excited about.”

I keep saying this, like a broken record. If you want the science that changes the world, ask the scientists who’ve changed it how it gets made. This government claims to be interested in what scientists think. We’ll see.

Incisive and acerbic, Wells’ article is worth making time to read in its entirety.

Getting back to the ‘The Naylor report under the microscope’ event, I wonder if anyone will be as tough and direct as Wells. Going back even further, I wonder if this is why there’s no mention of Duncan as a speaker at the conference. It could go either way: surprise announcement of a Chief Science Advisor, as I first suggested, or avoidance of a potentially angry audience.

For anyone curious about Geoffrey Hinton, there’s more here in my March 31, 2017 post (scroll down about 20% of the way) and for more about the 2017 budget and allocations for targeted science projects there’s my March 24, 2017 post.

US science envoy quits

An Aug. 23, 2017 article by Matthew Rosza for salon.com notes the resignation of one of the US science envoys,

President Donald Trump’s infamous response to the Charlottesville riots — namely, saying that both sides were to blame and that there were “very fine people” marching as white supremacists — has prompted yet another high profile resignation from his administration.

Daniel M. Kammen, who served as a science envoy for the State Department and focused on renewable energy development in the Middle East and Northern Africa, submitted a letter of resignation on Wednesday. Notably, he began the first letter of each paragraph with letters that spelled out I-M-P-E-A-C-H. That followed a letter earlier this month by writer Jhumpa Lahiri and actor Kal Penn to similarly spell R-E-S-I-S-T in their joint letter of resignation from the President’s Committee on Arts and Humanities.

Jeremy Berke’s Aug. 23, 2017 article for BusinessInsider.com provides a little more detail (Note: Links have been removed),

A State Department climate science envoy resigned Wednesday in a public letter posted on Twitter over what he says is President Donald Trump’s “attacks on the core values” of the United States with his response to violence in Charlottesville, Virginia.

“My decision to resign is in response to your attacks on the core values of the United States,” wrote Daniel Kammen, a professor of energy at the University of California, Berkeley, who was appointed as one of five science envoys in 2016. “Your failure to condemn white supremacists and neo-Nazis has domestic and international ramifications.”

“Your actions to date have, sadly, harmed the quality of life in the United States, our standing abroad, and the sustainability of the planet,” Kammen writes.

Science envoys work with the State Department to establish and develop energy programs in countries around the world. Kammen specifically focused on renewable energy development in the Middle East and North Africa.

That’s it.