UNESCO, in cooperation with the Mila-Quebec Artificial Intelligence Institute [?], is launching a Call for Proposals to identify blind spots in AI Policy and Programme Development. The collective work will explore creative, novel and far-reaching approaches to tackling blind spots in AI.
All contributors are invited to answer the same question: what are the blind spots on which we must shed light in order for AI to benefit all?
Issues can address 1) blind spots in the development of AI as a technology 2) blind spots in the development of AI as a sector, and 3) blind spots in the development of public policies, global governance, and regulation for AI. There are no limits to the subjects to be addressed. These blind spots could include issues ranging from science fiction and the future of AI, creative deep fakes and the future of misinformation, AI and the future of data driven humanitarian aid, indigenous knowledge and AI, and gender-based violence and sex robots. Proposals can be in creative formats, and the call for proposals is open to individuals from all academic backgrounds and sectors. Proposals from all stakeholder groups, particularly marginalized and underrepresented groups, are encouraged, as well as proposals from authors from the global south and innovative formats (artwork, cartoons, videos, etc).
The call for proposals is open until 2 May 2021.
Selected proposals will be confirmed by 25 May.
Final proposals, if in written format, should be between 5,000 and 7,000 words, should be written in a style that is accessible to non-AI specialists, and must be received by 1 September 2021.
To ensure inclusivity and a diversity of voices, authors of accepted contributions from outside academia may request financial support, available on a needs basis, of up to US$1,000.
I really appreciate the breadth of the call with a range of blind spots such as “science fiction and the future of AI, creative deep fakes and the future of misinformation, AI and the future of data driven humanitarian aid, indigenous knowledge and AI, and gender-based violence and sex robots” and, presumably, anything the convenors had not considered.
As well, they haven’t confined themselves to the ‘same old, same old’ contributors, “all stakeholder groups, particularly marginalized and underrepresented groups, are encouraged, as well as proposals from authors from the global south and innovative formats (artwork, cartoons, videos, etc).”
I’m glad to see a refreshing approach being taken to a call for proposals. I wish them good luck.
The Québec connection
Mila (Montreal Institute for Learning Algorithms), UNESCO’s co-host for this call, was founded in 1993 according to its About Mila page,
Founded in 1993 by Professor Yoshua Bengio of the Université de Montréal, Mila is a research institute in artificial intelligence that rallies over 500 researchers specializing in the field of machine learning. Based in Montreal, Mila’s mission is to be a global pole for scientific advances that inspire innovation and the development of AI for the benefit of all.
Since 2017, [emphasis mine] Mila is the result of a partnership between the Université de Montréal and McGill University, closely linked with Polytechnique Montréal and HEC Montréal. Today, Mila gathers in its offices a vibrant community of professors, students, industrial partners and startups working in AI, making the institute the world’s largest academic research center in machine learning.
Mila, a non-profit organization, is internationally recognized for its significant contributions to machine learning, especially in the areas of language modelling, machine translation, object recognition and generative models.
Unmentioned, the Pan-Canadian Artificial Intelligence (AI) Strategy was created and funded by the Canadian federal government in 2017. One of the beneficiaries was Mila. (Odd how 2017 was the year Mila found so many academic partners in its home province.) From the Pan-Canadian AI strategy webpage on the Invest Canada website (Note: Links have been removed),
The artificial intelligence (AI) and machine learning revolution is well underway, and Canada is at its forefront. From top-ranked educational institutions and market-leading tech companies to world-renowned researchers, Canada’s AI ecosystems are leading global AI developments.
To continue to foster this growth and maintain its leadership position, Canada launched the $125M Pan-Canadian Artificial Intelligence Strategy in 2017—making it the first country to release a national AI strategy.
The Pan-Canadian AI Strategy is founded on a partnership between the Canadian Institute for Advanced Research (CIFAR) and the three centres of excellence: the Alberta Machine Intelligence Institute (AMII) in Edmonton, the Vector Institute in Toronto, and the Montreal Institute for Learning Algorithms (Mila) [all emphases mine] in Montreal. Together, they provide the support, resources, and talent for AI innovation and investment.
I don’t know where “Mila-Quebec Artificial Intelligence Institute” comes from. It’s not on their own website and I’ve never seen Mila called that anywhere other than on this UNESCO call.
Set against the backdrop of an ambiguous dystopia and eternal rave, LINK SICK is a tale about the threads that bind us together.
LINK SICK is DEBBY FRIDAY’S graduate thesis project – an audio-play written, directed and scored by the artist herself. The project is a science-fiction exploration of the connective tissue of human experience as well as an experiment in sound art; blurring the lines between theatre, radio, music, fiction, essay, and internet art. Over 42 minutes, listeners are invited to gather round, close their eyes, and open their ears; submerging straight into a strange future peppered with blink-streams, automated protests, disembodied DJs, dancefloor orgies, and only the trendiest S/S 221 G-E two-piece club skins.
DEBBY FRIDAY as Izzi/Narrator
Chino Amobi as Philo
Sam Rolfes as Dj GODLESS
Hanna Sam as ABC Inc. Announcer
Storm Greenwood as Diana Deviance
Alex Zhang Hungtai as Weaver
Allie Stephen as Numee
Soukayna as Katz
AI Voice Generated Protesters via Replica Studios
Presented in partial fulfillment of the requirements of the Degree of Master of Fine Arts in the School for the Contemporary Arts at Simon Fraser University.
No time is listed but I’m assuming FRIDAY is operating on PDT, so, you might want to take that into account when checking.
FRIDAY seems to favour full caps for her name, both here and everywhere on her eponymous website (from her ABOUT page),
DEBBY FRIDAY is an experimentalist.
Born in Nigeria, raised in Montreal, and now based in Vancouver, DEBBY FRIDAY’s work spans the spectrum of the audio-visual, resisting categorizations of genre and artistic discipline. She is at once sound theorist and musician, performer and poet, filmmaker and PUNK GOD. …
Should you wish to support the artist financially, she offers merchandise.
Getting back to the play, I look forward to the auditory experience. Given how much we are expected to watch and the dominance of images, creating a piece that requires listening is an interesting choice.
These workshops will inform recommendations to the Government of Canada on how to boost public awareness of and foster trust in AI. The conversations will be grounded in an understanding of the technology, its potential uses, and its associated risks.
Each workshop is approximately 2.5 hrs in length and free to attend. Our goal is to engage more than 1,000 people across Canada, building on the results of a national survey that was conducted in December 2020.
What to expect
Opening plenary session (15 min)
Breakout session with 6-10 participants
Break (10 min)
Recommendations (40 min)
Closing remarks (8 min)
Closing plenary session (22 min)
Oddly, there isn’t a registration link on the event page; you have to click on one of two workshop tabs (Regional or Youth) at the top of the page (this is from the Regional Workshops webpage),
Join us for a virtual workshop taking place in your region. Each workshop will include facilitated discussions based on Artificial Intelligence (AI) scenarios and provide an opportunity to share your views on AI.
To register by phone, please call Grace at 416-971-6937. If you require accommodations to participate, please contact firstname.lastname@example.org.
The regions are split into the West (Pacific and Mountain time zones), Central (Central and Ontario time zones), and East (Newfoundland, Atlantic and Quebec time zones). There are French and English sessions in each of the three regions and they have included the North on the regional maps.
Sadly, the events team at CIFAR did not answer questions (I tried twice), nor did Julian Posada, who is apparently the facilitator for the workshops,
The Government of Canada’s Advisory Council on Artificial Intelligence Public Awareness Working Group includes representatives from: AI Global | AI Network of BC | Amii | Brookfield Institute | Canadian Chamber of Commerce | CIFAR | DeepSense/Dalhousie | Glassbox | Ivado | Kids Code Jeunesse | Let’s Talk Science | Mila | Saskinteractive | Université de Montréal
The partners, represented by logos, are the Government of Canada (as in Advisory Committee?), Algora Lab, Université de Montréal, CIFAR, and for the Youth Workshops, Let’s Talk Science, Kids Code Jeunesse, and workshop materials are being provided by the Canadian Commission for UNESCO (United Nations Educational, Scientific and Cultural Organization).
By the third time, I’d reworded a few things and added one or two questions, so here’s the final list as sent to Julian Posada on Thursday, March 18, 2021,
(1) I understand it’s a joint CIFAR/Government of Canada Advisory Council on Artificial Intelligence Public Awareness Working Group workshop series called Open Dialogue: Artificial Intelligence (AI) in Canada. Is that correct? And will the series be held from March 30 – April 30, 2021?
(2) Are regular folks invited to join in or is this primarily for academics, business people, entrepreneurs, AI researchers, and other cognoscenti?
(3) Will a distinction be made between AI and robots?
(4) Are you facilitating all of the planned workshops? Will you also have assigned leaders for the breakout groups or will that be decided amongst the participants? If leaders are assigned, who are they?
(5) What do you have planned for your workshop(s)? e.g., Will participants be presented with various scenarios for discussion in the breakout groups? Or will participants be given specific topics to discuss, such as AI in the military or AI in seniors’ facilities (e.g., social or companion robots for seniors)? etc.
(6) Are the workshops being conducted over Zoom and is a Zoom account required for participation? Is there an alternative technology being used?
(7) Will AI be used to review and analyze the sessions and data gathered?
(8) Are there security measures in place for the session and for the data, specifically, participants’ personal data given up during registration?
(9) Will participants get a copy of the report afterwards or notified when it’s made available?
Since the workshops start on March 30, 2021 and I’m sure everyone’s busy and not able to spare time for questions, I’ve elected to publish what I can about the workshops despite a few misgivings.
I’m glad to see this initiative and to note that the North is included. It would be interesting to learn how these workshops have been publicized (I stumbled across them in a retweet of Julian Posada’s announcement on my Twitter feed). However, it’s not vital.
Priorities for the Advisory Council on Artificial Intelligence
Artificial intelligence (AI) represents a set of complex and powerful technologies that will touch or transform every sector and industry. It has the power to help us address some of our most challenging problems in areas like health and the environment, and to introduce new sources of sustainable economic growth. As a digital nation, Canada is taking steps to harness the potential of AI.
As announced by the Minister of Innovation, Science and Economic Development on May 14, 2019, the Advisory Council on Artificial Intelligence will advise the Government of Canada on building Canada’s strengths and global leadership in AI, identifying opportunities to create economic growth that benefits all Canadians, and ensuring that AI advancements reflect Canadians’ values. The Advisory Council will be a central reference point to draw on leading AI experts from Canadian industry, civil society, academia, and government.
Public Awareness Working Group
Recognizing the importance of a two-way dialogue with the Canadian public on AI, the Advisory Council launched a working group dedicated to public awareness in 2020. The Public Awareness Working Group is looking at mechanisms to boost public awareness and foster trust in AI. It also aims to ground the Canadian discussion in a measured understanding of AI technology, its potential uses, and its associated risks.
Commercialization Working Group
Recognizing that Canada has an imperative to commercialize its AI, and to capitalize on existing Canadian advantages in research and talent, the Advisory Council launched a working group dedicated to commercialization in August 2019 [emphasis mine]. The Commercialization Working Group explored ways to translate Canadian-owned artificial intelligence into economic growth that includes higher business productivity and benefits for Canadians.
The first order of business was commercialization in August 2019 and that’s to be expected given that this is ISED. The Public Awareness Working Group was launched at least four months later.
Is awareness a dialogue?
As they very nicely note on the CIFAR AI dialogue event page, these workshops are going to help the government figure out “how to boost public awareness of and foster trust in AI.” It’s very flattering to be consulted this way.
So to sum this up, the ‘dialogue’ in the regional and youth workshops will be mined for ideas on how to boost public awareness and foster trust. You’re not really just getting an opportunity “to share your views on AI,” are you?
It seems a bit narrow but then they’ve already conducted a survey in December 2020, which has in all likelihood informed the content for these workshops. Plus the workshop materials being provided by the Canadian Commission for UNESCO have in all likelihood been used elsewhere and repackaged for the Canadian market.
Hmmm I wouldn’t call this an ‘open dialogue’ since so much has already been done to frame it.
Many years ago I read a fascinating article about Temple Grandin and her work redesigning abattoirs (slaughterhouses) to make them more humane. I don’t remember much about it but calming the cattle by dampening the noise while distracting them a little by making them move around rather than directly leading them to their deaths seemed the key elements to the redesign.
This ‘open dialogue’ reminds me of the article. The outcome is predetermined and we’re being distracted in the nicest way possible.
Mining the data?
Nine workshop sessions in total with one hour and 40 minutes (rough estimate) of discussion and recommendations for each session. That’s roughly 15 hours of material from the dialogues and recommendations to analyze.
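For anyone checking my arithmetic, here is the back-of-the-envelope calculation as a quick Python sketch. Note the assumptions are mine, not CIFAR’s: roughly 60 minutes of breakout discussion per session (no time is listed for the breakout on the agenda), plus the 40-minute recommendations segment.

```python
# Rough estimate of the discussion material generated by the workshop series.
# Assumption (mine): ~60 minutes of breakout discussion per session, since the
# agenda lists no time for the breakout; the 40-minute figure comes from the
# posted "Recommendations (40 min)" agenda item.
sessions = 9
breakout_minutes = 60        # assumed
recommendations_minutes = 40  # from the posted agenda

total_minutes = sessions * (breakout_minutes + recommendations_minutes)
total_hours = total_minutes / 60
print(f"{total_hours} hours of material")  # 15.0 hours of material
```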
Remember this question “(7) Will AI be used to review and analyze the sessions and data gathered?”
It’s hard to believe that CIFAR and its partners don’t have a system that could do the job or, at the very least, a system that could learn from the sessions.
Not necessarily evil
While I have a number of misgivings about these ‘dialogues’, I don’t expect that most of the people involved are trying to be nefarious. There are probably some good intentions (you know where those take you, yes?) but the overarching purpose here is commercialization, which is made much easier with universal acceptance (awareness + trust).
To be blunt, a dialogue with a predetermined outcome seems more like a script to me than an open conversation.
This sort of thing has been called a ‘public consultation’ but that term has gotten a bad reputation as it was used to disguise the kind of manipulation that I suspect is going on with this effort.
How they expect to foster trust in circumstances that are not conducive to it is a bit of a mystery to me. Plus, I have to wonder if these organizers or committee members have taken into account the possible aftereffects of one of the great Canadian government debacles.
The Phoenix pay system is a payroll processing system for Canadian federal government employees, provided by IBM in June 2011 using PeopleSoft software, and run by Public Services and Procurement Canada. The Public Service Pay Centre is located in Miramichi, New Brunswick. It was first introduced in 2009 as part of Prime Minister Stephen Harper’s Transformation of Pay Administration Initiative, intended to replace Canada’s 40-year old system with a new, cost-saving “automated, off-the-shelf commercial system.” By July 2018, Phoenix has caused pay problems to close to 80 percent of the federal government’s 290,000 public servants through underpayments, over-payments, and non-payments. The Standing Senate Committee on National Finance, chaired by Senator Percy Mockler, investigated the Phoenix Pay system and submitted their report, “The Phoenix Pay Problem: Working Towards a Solution” on July 31, 2018, in which they called Phoenix a failure and an “international embarrassment”. Instead of saving $70 million a year as planned, the report said that the cost to taxpayers to fix Phoenix’s problems could reach a total of $2.2 billion by 2023. [emphasis mine]
The entry leaves out a couple of details. Yes, Harper’s government nurtured this disaster but it was (1) Prime Minister Justin Trudeau and his (2) Liberal government who implemented the system in February 2016. Whoever wrote this entry is very friendly to the Liberals so I don’t think the politicians were quite as uninformed as represented in the entry.
As for the cost to taxpayers, I think $2.2 billion by 2023 is an overly modest estimate. For comparison, Australia’s Queensland Health Authority also had a pay system debacle. It was the same vendor (IBM) and, in 2013, the estimate to fix the problems was $1.2 billion Australian dollars (see this Dec.11.13 article by Robert N. Charette for the IEEE Spectrum or this Aug.7.13 article by Michael Madigan, Sarah Vogler, and Greg Stolz for The Courier Mail).
Note 1: I checked on a currency converter today (March 23, 2021) and $1 CAD = $1.04 AUD.
Note 2: For anyone unfamiliar with the organization, IEEE is the Institute of Electrical and Electronics Engineers.
I’m pretty sure $2.2 billion (which I think is an underestimate) does not include the human costs (anxiety, alcohol abuse, self-harm, suicide, etc.).
The situation was exacerbated as Catharine Tunney wrote in a February 18, 2020 article for CBC (Canadian Broadcasting Corporation) online (Note: A link has been removed),
More than 69,000 public servants caught up in the Phoenix pay system debacle are now victims of a privacy breach after their personal information was accidentally emailed to the wrong people, says Public Services and Procurement Canada.
The problem-plagued electronic payroll system has improperly paid tens of thousands of public servants since its launch in 2016. Some employees have gone months with little or no pay, while others have been overpaid, sometimes for months at a time.
Earlier this month, a report naming 69,087 public servants was accidentally emailed to the wrong federal departments.
The report included the employees’ full names, their personal record identifier numbers, home addresses and overpayment amounts.
More than 161 chief financial officers and 62 heads of HR in 62 departments received the report in error, according to a statement posted to Public Services and Procurement Canada’s website on Monday.
Public Services and Procurement Canada isn’t the only department to accidentally breach the confidentiality of workers’ personal information.
According to figures recently tabled in the House of Commons, federal departments or agencies mishandled personal information belonging to 144,000 Canadians over the past two years.
Privacy Commissioner Daniel Therrien has long called out “strong indications of systemic under-reporting” of privacy breaches across government.
Overhauling the government payroll system is not the same as introducing new artificial intelligence systems but the problem is that many of the same people in the upper echelons of Canada’s civil service (government employees) were and are instrumental in the deployment of these systems.
“Phoenix pay system an ‘incomprehensible failure,’ Auditor-General says” was the headline for a May 29, 2018 article by Michelle Zilio for the Globe and Mail. I might feel more trust if after the report, there’d been signs that things had changed. However, the government is still highly secretive and we have a ‘dialogue’ with a predetermined outcome (just like the public consultations of yesteryear).
As for M. Posada, the facilitator for one or more of the workshops, he seems relatively new to Canada (scroll down his University of Toronto profile page and click on Degrees),
M.A., Economic Sociology – School for Advanced Studies in the Social Sciences (EHESS) [École des hautes études en sciences sociales in Paris, France]
B.A., Humanities – Sorbonne University [also in Paris]
As I noted in my December 10, 2021 posting about a chapter on science communication in Canada where two of the three authors were from other countries (Brazil and Australia), outsider perspectives can be quite valuable. (Both of the authors spent some time in Canada. At least one of them had taught here.)
In any event, I have to wonder how well he’s been briefed.
I’m wary after my experience in something called “participatory budgeting” (City of Vancouver, 2019), where citizens were asked to come together and decide how to spend $100,000 of the city budget in our neighbourhood. A surprising number of city employees were involved as ‘members’ of the working groups and, of course, other employees at City Hall had veto power over what was eventually presented to the community for voting. I can say that at the end of the process I felt used.
What a Monday morning! United Nations Educational, Scientific and Cultural Organization (UNESCO; French: Organisation des Nations unies pour l’éducation, la science et la culture) and the World Economic Forum (WEF) hosted a live webcast (which started at 6 am PST or 1500 CET [3 pm in Paris, France]). The session is available online for viewing both here on UNESCO’s Girl Trouble webpage and here on YouTube. It’s about 2.5 hours long with two separate discussions and a question period after each discussion. You will have a 2 minute wait before seeing any speakers or panelists.
UNESCO and the World Economic Forum present Girl Trouble: Breaking Through The Bias in AI on International Women’s Day, 8th March, 3:00 pm – 5:30 pm (CET). This timely round-table brings together a range of leading female voices in tech to confront the deep-rooted gender imbalances skewing the development of artificial intelligence. Today critics charge that AI feeds on biased data-sets, amplifying the existing anti-female biases of our societies, and that AI is perpetuating harmful stereotypes of women as submissive and subservient. Is it any wonder when only 22% of AI professionals globally are women?
Our panelists are female change-makers in AI. From C-suite professionals taking decisions which affect us all, to women innovating new AI tools and policies to help vulnerable groups, to those courageously exposing injustice and algorithmic biases, we welcome:
Gabriela Ramos, Assistant Director-General of Social and Human Sciences, UNESCO, leading the development of UNESCO’s Recommendation on the Ethics of AI, the first global standard-setting instrument in the field;
Kay Firth-Butterfield, keynote speaker. Kay was the world’s first chief AI Ethics Officer. As Head of AI & Machine Learning, and a Member of the Executive Committee of the World Economic Forum, Kay develops new alliances to promote awareness of gender bias in AI;
Ashwini Asokan, CEO of Chennai-based AI company, Mad Street Den. She explores how Artificial Intelligence can be applied meaningfully and made accessible to billions across the globe;
Adriana Bora, a researcher using machine learning to boost compliance with the UK and Australian Modern Slavery Acts, and to combat modern slavery, including the trafficking of women;
Anne Bioulac, a member of the Women in Africa Initiative, developing AI-enabled online learning to empower African women to use AI in digital entrepreneurship;
Meredith Broussard, a software developer and associate professor of data journalism at New York University, whose research focuses on AI in investigative reporting, with a particular interest in using data analysis for social good;
Latifa Mohammed Al-AbdulKarim, named by Forbes magazine as one of 100 Brilliant Women in AI Ethics, and as one of the women defining AI in the 21st century;
Wanda Munoz, of the Latin American Human Security Network. One of the Nobel Women’s Initiative’s 2020 peacebuilders, she raises awareness around gender-based violence and autonomous weapons;
Nanjira Sambuli, a Member of the UN Secretary General’s High-Level Panel for Digital Cooperation and Advisor for the A+ Alliance for Inclusive Algorithms;
Jutta Williams, Product Manager at Twitter, analyzing how Twitter can improve its models to reduce bias.
There’s an urgent need for more women to participate in and lead the design, development, and deployment of AI systems. Evidence shows that by 2022, 85% of AI projects will deliver erroneous outcomes due to bias.
Recruiters searching for female AI specialists online just cannot find them. Companies hiring experts for AI and data science jobs estimate fewer than 1 per cent of the applications they receive come from women. Women and girls are 4 times less likely to know how to programme computers, and 13 times less likely to file for a technology patent. They are also less likely to occupy leadership positions in tech companies.
1. The 4th industrial revolution is on our doorstep, and gender equality risks being set back decades. What more can we do to attract more women to design jobs in AI, and to support them in taking their seats on the boards of tech companies?
2. How can AI help us advance women and girls’ rights in society? And how can we solve the problem of algorithmic gender bias in AI systems?
Women’s leadership in the AI sector at all levels, from big tech to the start-up AI economy in developing countries, will be placed under the microscope.
Confession: I set the timer correctly but then forgot to set the alarm, so I watched the last 1.5 hours (I plan to go back and get the first hour later). Here’s a little of what transpired.
Kudos to the moderator, Natashya Gutierrez, for her excellent performance; it can’t have been easy to keep track of the panelists and questions for a period of 2.5 hours,
Natashya Gutierrez, Editor-in-Chief APAC, VICE World News
Natashya is an award-winning multimedia journalist and current Editor in Chief of VICE World News in APAC [Asia-Pacific Countries]. She oversees editorial teams across Australia, Indonesia, India, Hong Kong, Thailand, the Philippines, Singapore, Japan and Korea. Natashya’s reporting specialises on women’s rights. At VICE, she hosts Unequal, a series focused on gender inequality in Asia. She is the recipient of several journalism awards including the Society of Publishers in Asia for reporting on women’s issues, and the Asia Journalism Fellowship. Before VICE, she was part of the founding team of Rappler, an online news network based in the Philippines. She has been selected as one of Asia’s emerging young leaders and named a Development Fellow by the Asia Foundation. Natashya is a graduate of Yale University.
First panel discussion
For anyone who’s going to watch the session, don’t forget it takes about two minutes before there’s sound. The first panel was focused on “the female training and recruitment crisis in AI.”
The right people
I have a suspicion that Ashwini Asokan’s comment about getting the ‘right people’ to create the algorithms and make decisions about AI was not meant the way it might sound. I will have to listen again but, at a guess, I think she was suggesting that a bunch of 25- to 35-year-old developers (mostly male and working in monoculture environments) is not going to be cognizant of how their mathematical decisions will impact real-world lives.
So, getting the ‘right people’ means more inclusive hiring.
Is AI always the best solution?
In all the talk about AI, it’s assumed that this technology is the best solution to all problems. One of the panelists (Nanjira Sambuli) suggested that an analogue option (e.g., a book) might be better on occasion.
There are some things that people are better at than AI (can’t remember which panelist said this). That comment hints at something which seems heretical. It challenges the notion that technology is always better than a person.
I once had someone at a bank explain to me that computers were very smart (by implication, smarter than me)—30 years ago. The teller was talking about a database.
Adriana Bora (I think) suggested that lived experience should be considered when putting together consultative groups and developer groups.
This theme of AI not being the best solution for all problems came up again in the second panel discussion.
Second panel discussion
The second panel was focused on “innovative AI-based solutions to address bias against women.”
AI is math and it’s hard
It’s surprisingly easy to forget that AI is math. Meredith Broussard pointed out that most of us (around the world) have a very Hollywood idea about what AI is.
Broussard noted that AI has its limits and there are times when it’s not the right choice.
She made an interesting point in her comment about AI being hard. I don’t think she meant to echo the old cliché ‘math is hard, so it’s not for girls’. The comment seemed to speak to the breadth and depth of the AI sector. Along with the challenging mathematics, we need to take into account so much more than was imagined in the Industrial Revolution, when ecological consequences were unimagined and inequities often taken as god-given.
Inequities and language
Natashya Gutierrez, the moderator, noted that AI doesn’t create bias, it magnifies it.
One of the panelists, Jutta Williams (Twitter), noted later that algorithms are designed to favour certain types of language, e.g., information presented as factual rather than emotive. That’s how you get more attention on social media platforms. In essence, the bias in the algorithms was not towards males but towards the way they tend to communicate.
Describing engineers as ‘lazy’, Meredith Broussard added this about the mindset, ‘write once, run anywhere’.
A colleague, some years ago, drew my attention to the problem. She was unsuccessfully trying to get the developers to fix a problem in the code. They simply couldn’t be bothered. It wasn’t an interesting problem and there was no reward for fixing it.
I’m having a problem now where I suspect engineers/developers don’t want to tweak or correct code in WordPress. It’s the software I use to create my blog postings and I use tags to make those postings easier to find.
Sometime in December 2018 I updated my blog software to their latest version. Many problems ensued but there is one which persists to this day. I can’t tag any new words with apostrophes in them (very common in French). The system refuses to save them.
Previous versions of WordPress were quite capable of saving words with apostrophes. Those words are still in my ‘tag database’.
Older generation has less tech savvy
Adriana Bora suggested that the older generation should also be considered in discussions about AI and inclusivity. I’m glad to hear her mention it.
Unfortunately, she seemed to be under the impression that seniors don’t know much about technology.
Yes and no. Who do you think built and developed the technologies you are currently using? Probably your parents and grandparents. Networks were first developed in the early to mid-1960s. The Internet is approximately 40 years old. (You can get the details in the History of the Internet entry on Wikipedia.)
Yes, I’ve made that mistake about seniors/elders too.
It’s possible that a person over … what age is that? Over 55? Over 60? Over 65? Over 75? … anyway, that person may not have had much experience with the digital world, or their experience may be dated, but the assumption is problematic.
As an antidote, here’s one of my favourite blogs, Grandma Got STEM. It’s mostly written by people reminiscing about their STEM mothers and grandmothers.
Bits and bobs
There seemed to be general agreement that there needs to be more transparency about the development of AI and what happens in the ‘AI black box’.
Gabriela Ramos, keynote speaker, commented that transparency needs to be paired with choice; otherwise, it won’t do much good.
After recounting a distressing story about how activists have had their personal information revealed on various networks, Wanda Munoz noted that AI can be used for good.
The concerns are not theoretical and my final comments
Munoz, of course, brought a real-life example of bad things happening, but I’d like to reinforce it with one more. The British Broadcasting Corporation (BBC), in a January 13, 2021 news article by Leo Kelion, broke the news that Huawei, a Chinese technology company, had patented technology that could identify ethnic groups (Note: Links have been removed),
A Huawei patent has been brought to light for a system that identifies people who appear to be of Uighur origin among images of pedestrians.
The filing is one of several of its kind involving leading Chinese technology companies, discovered by a US research company and shared with BBC News.
Huawei had previously said none of its technologies was designed to identify ethnic groups.
It now plans to alter the patent.
The company indicated this would involve asking the China National Intellectual Property Administration (CNIPA) – the country’s patent authority – for permission to delete the reference to Uighurs in the Chinese-language document.
Uighur people belong to a mostly Muslim ethnic group that lives mainly in Xinjiang province, in north-western China.
Government authorities are accused of using high-tech surveillance against them and detaining many in forced-labour camps, where children are sometimes separated from their parents.
Beijing says the camps offer voluntary education and training.
Huawei’s patent was originally filed in July 2018, in conjunction with the Chinese Academy of Sciences.
It describes ways to use deep-learning artificial-intelligence techniques to identify various features of pedestrians photographed or filmed in the street.
But the document also lists attributes by which a person might be targeted, which it says can include “race (Han [China’s biggest ethnic group], Uighur)”.
Topic: An evening salon and reading of specially commissioned pieces of fiction on AI futures
Description: Artificial intelligence and data-driven technologies permeate all aspects of our lives. Their proliferation increasingly leads to encounters with ‘mutant algorithms’, ‘biased machine learning’, and ‘racist AIs’ that sometimes make familiar forms of near-future fiction pale in comparison.
In these examples, AI and machine learning tools inscribe a certain future based on predictions from past observations and they foreclose a multitude of other possible futures.
Faced with this potential to limit and constrain what might be, can fiction and narrative offer alternatives for how AI could and should be?
This evening salon will present near-future fiction pieces commissioned by the Ada Lovelace Institute’s JUST AI project to inspire and expand our thinking about our possible relationship to AI and data.
Join the event to listen to the first reading of two commissioned pieces and to discuss with the authors and invited experts.
Live (real-time) captioning will be provided for this event. If you have questions or requests regarding access, please contact: email@example.com.
Chair: Alison Powell, Associate Professor, London School of Economics
Speakers:
– Adam Marek – writer of futuristic and fantastical short stories
– Squirrel Nation – reimagining and designing how to live in a warming world
– Tania Hershman – poet, writer, teacher and editor
– Yasemin J. Erden – Assistant Professor in Philosophy, University of Twente
Time: Mar 3, 2021, 6:30 PM – 8:00 PM [GMT]
This artwork accompanying the Almost future AI announcement reminds me of a circuit board. In any event, I found this image and a bit more information about the Just AI programme/network and about their event on this Almost future AI webpage,
The JUST AI (Joining Up Society and Technology in AI) programme is an independent network of researchers and practitioners, led by Dr Alison Powell from LSE [London School of Economics], supported by the UK’s Arts and Humanities Research Council (AHRC) and the Ada Lovelace Institute. The humanities-led network is committed to understanding the social and ethical value of data-driven technologies, artificial intelligence, and automated systems. The network will build on research in AI ethics, orienting it around practical issues of social justice, distribution, governance and design, and seek to inform the development of policy and practice.
We are using Zoom for virtual events open to more than 40 attendees. Although there are issues with Zoom’s privacy controls, when reviewing available solutions we found that there isn’t a perfect product and we have chosen Zoom for its usability and accessibility. Find out more here.
I’m glad to see they’ve taken privacy concerns seriously enough to explain why they’re using Zoom. I wish more organizations took the time to inform participants in virtual and online events which technology is being used and to include a reference to or comment on privacy issues.
It’s a relief to see this level of congruence between JUST AI’s and the Ada Lovelace Institute’s stated principles and their preliminary actions.
Before moving on to the next item, and given the very confusing naming (Ada Lovelace Day being both a ‘day’ and an organization), it seems like a good idea to mention that the Ada Lovelace Institute is not associated with the Ada Lovelace Day organization. As per the Ada Lovelace Institute’s About webpage,
The Ada Lovelace Institute was established by the Nuffield Foundation in early 2018, in collaboration with the Alan Turing Institute, the Royal Society, the British Academy, the Royal Statistical Society, the Wellcome Trust, Luminate, techUK and the Nuffield Council on Bioethics.
One more March 2021 event
Staying on the Ada Lovelace theme, there’s an event on March 8, 2021 (International Women’s Day) being hosted by the organization called Ada Lovelace Day (there’s more confusion to come). Here’s more about the upcoming event from the 2021 International Women’s Day event webpage,
Monday 8 March 2021 [1900 GMT]
We are celebrating International Women’s Day with an hour long live-streamed panel discussion titled Comedy and Communication, looking at how we can all use comedy techniques in our STEM communications and teaching.
The Ada Lovelace Day organization is at findingada.com, which is also the name for one of the organization’s initiatives, the ‘Finding Ada Network’. I find the naming conventions confusing, especially since there is an Ada Lovelace Day celebrated internationally and hosted by this organization (whatever it’s called) each year. In 2021, Ada Lovelace Day will be celebrated on Tuesday, October 12.
I have a number of items from Simon Fraser University’s (SFU) Metacreation Lab January 2021 newsletter (received via email on Jan. 5, 2021).
The 29th International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence, or IJCAI-PRICAI2020, being held on Jan. 7 – 15, 2021
This first excerpt features a conference that’s currently taking place,
Musical Metacreation Tutorial at IJCAI-PRICAI 2020 [Yes, the 29th International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence, or IJCAI-PRICAI2020, is being held in 2021!]
The tutorial will be held this Friday, January 8th, from 9 am to 12:20 pm JST ([JST = Japanese Standard Time] 12 am to 3:20 am UTC [or 4 pm – 7:20 pm PST]) and a full description of the syllabus can be found here. For details about registration for the conference and tutorials, click below.
The conference will be held at a virtual venue created by Virtual Chair on the gather.town platform, which offers the spontaneity of mingling with colleagues from all over the world while in the comfort of your home. The platform will allow attendees to customize avatars to fit their mood, enjoy a virtual traditional Japanese village, take part in plenary talks and more.
Two calls for papers
These two excerpts from SFU’s Metacreation Lab January 2021 newsletter feature one upcoming conference and an upcoming workshop, both with calls for papers,
2nd Conference on AI Music Creativity (MuMe + CSMC)
The second Conference on AI Music Creativity brings together two overlapping research forums: The Computer Simulation of Music Creativity Conference (est. 2016) and The International Workshop on Musical Metacreation (est. 2012). The objective of the conference is to bring together scholars and artists interested in the emulation and extension of musical creativity through computational means and to provide them with an interdisciplinary platform in which to present and discuss their work in scientific and artistic contexts.
The 2021 Conference on AI Music Creativity will be hosted by the Institute of Electronic Music and Acoustics (IEM) of the University of Music and Performing Arts of Graz, Austria and held online. The five-day program will feature paper presentations, concerts, panel discussions, workshops, tutorials, sound installations and two keynotes.
The 3rd IEEE Workshop on Artificial Intelligence for Art Creation (AIART) has been announced for 2021, to bring forward cutting-edge technologies and the most recent advances in AI art in terms of enabling creation, analysis, and understanding technologies. The theme of the workshop will be AI creativity, and it will be accompanied by a Special Issue of a renowned SCI journal.
AIART is inviting high-quality papers presenting or addressing issues related to AI art, in a wide range of topics. The submission due date is January 31, 2021, and you can learn about the wide range of topics accepted below:
SFU’s Metacreation Lab January 2021 newsletter also features a kind of musical toy,
MMM : Multi-Track Music Machine
One of the latest projects at the Metacreation Lab is MMM, a generative system based on the Transformer architecture and capable of producing multi-track music, developed by Jeff Enns and Philippe Pasquier.
Based on an auto-regressive model, the system is capable of generating music from scratch using a wide range of preset instruments. Inputs from one or several tracks can condition the generation of new tracks, resampling MIDI input from the user or adding further layers of music.
To learn more about the system and see it in action, click below and watch the demonstration video, hear some examples, or try the program yourself through Google Colab.
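MMM’s own code isn’t shown in the newsletter, so as a hedged illustration of what ‘auto-regressive’ means here, the toy Python sketch below repeatedly samples the next musical token conditioned on what has been generated so far. The note names and bigram probabilities are made up; a real system like MMM would use a Transformer over MIDI-derived tokens instead:

```python
import random

# Toy sketch of auto-regressive generation (not MMM's actual model):
# each step samples the next musical token conditioned on the current
# state, using invented bigram probabilities instead of a Transformer.
BIGRAMS = {
    "<start>": {"C4": 0.6, "E4": 0.4},
    "C4": {"E4": 0.5, "G4": 0.4, "<end>": 0.1},
    "E4": {"G4": 0.7, "C4": 0.2, "<end>": 0.1},
    "G4": {"C4": 0.5, "E4": 0.3, "<end>": 0.2},
}

def generate(max_len=16, seed=0):
    random.seed(seed)
    track, token = [], "<start>"
    while len(track) < max_len:
        options = BIGRAMS[token]
        token = random.choices(list(options), weights=list(options.values()))[0]
        if token == "<end>":
            break
        track.append(token)
    return track

print(generate())  # prints a short list of note tokens
```

Conditioning generation on existing tracks, as MMM does, amounts to fixing part of the token sequence and sampling only the rest.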
Finally, for anyone who was wondering what happened at the 2020 International Symposium of Electronic Arts (ISEA 2020) held virtually in Montreal in the fall, here’s some news from SFU’s Metacreation Lab January 2021 newsletter,
ISEA2020 Recap // Why Sentience?
As we look back at one of the most unprecedented years, some of the questions explored at ISEA2020 are more salient now than ever. This recap video highlights some of the most memorable moments from last year’s virtual symposium.
The video is a slick, flashy, and fun 15 minutes or so. In addition to the recap for ISEA 2020, there’s a plug for ISEA 2022 in Barcelona, Spain.
The proceedings took my system a while to download (there are approximately 700 pp.). By the way, here’s another link to the proceedings or rather to the archives for the 2020 and previous years’ ISEA proceedings.
Amsterdam, NL – The use of algorithms in government is transforming the way bureaucrats work and make decisions in different areas, such as healthcare or criminal justice. Experts address the transparency challenges of using algorithms in decision-making procedures at the macro-, meso-, and micro-levels in this special issue of Information Polity.
Machine-learning algorithms hold huge potential to make government services fairer and more effective and have the potential of “freeing” decision-making from human subjectivity, according to recent research. Algorithms are used in many public service contexts. For example, within the legal system it has been demonstrated that algorithms can predict recidivism better than criminal court judges. At the same time, critics highlight several dangers of algorithmic decision-making, such as racial bias and lack of transparency.
Some scholars have argued that the introduction of algorithms in decision-making procedures may cause profound shifts in the way bureaucrats make decisions and that algorithms may affect broader organizational routines and structures. This special issue on algorithm transparency presents six contributions to sharpen our conceptual and empirical understanding of the use of algorithms in government.
“There has been a surge in criticism towards the ‘black box’ of algorithmic decision-making in government,” explain Guest Editors Sarah Giest (Leiden University) and Stephan Grimmelikhuijsen (Utrecht University). “In this special issue collection, we show that it is not enough to unpack the technical details of algorithms, but also look at institutional, organizational, and individual context within which these algorithms operate to truly understand how we can achieve transparent and responsible algorithms in government. For example, regulations may enable transparency mechanisms, yet organizations create new policies on how algorithms should be used, and individual public servants create new professional repertoires. All these levels interact and affect algorithmic transparency in public organizations.”
The transparency challenges for the use of algorithms transcend different levels of government – from European level to individual public bureaucrats. These challenges can also take different forms; transparency can be enabled or limited by technical tools as well as regulatory guidelines or organizational policies. Articles in this issue address transparency challenges of algorithm use at the macro-, meso-, and micro-level. The macro level describes phenomena from an institutional perspective – which national systems, regulations and cultures play a role in algorithmic decision-making. The meso-level primarily pays attention to the organizational and team level, while the micro-level focuses on individual attributes, such as beliefs, motivation, interactions, and behaviors.
“Calls to ‘keep humans in the loop’ may be moot points if we fail to understand how algorithms impact human decision-making and how algorithmic design impacts the practical possibilities for transparency and human discretion,” notes Rik Peeters, research professor of Public Administration at the Centre for Research and Teaching in Economics (CIDE) in Mexico City. In a review of recent academic literature on the micro-level dynamics of algorithmic systems, he discusses three design variables that determine the preconditions for human transparency and discretion and identifies four main sources of variation in “human-algorithm interaction.”
The article draws two major conclusions: First, human agents are rarely fully “out of the loop,” and levels of oversight and override designed into algorithms should be understood as a continuum. The second pertains to bounded rationality, satisficing behavior, automation bias, and frontline coping mechanisms that play a crucial role in the way humans use algorithms in decision-making processes.
For future research Dr. Peeters suggests taking a closer look at the behavioral mechanisms in combination with identifying relevant skills of bureaucrats in dealing with algorithms. “Without a basic understanding of the algorithms that screen- and street-level bureaucrats have to work with, it is difficult to imagine how they can properly use their discretion and critically assess algorithmic procedures and outcomes. Professionals should have sufficient training to supervise the algorithms with which they are working.”
At the macro-level, algorithms can be an important tool for enabling institutional transparency, writes Alex Ingrams, PhD, Governance and Global Affairs, Institute of Public Administration, Leiden University, Leiden, The Netherlands. This study evaluates a machine-learning approach to open public comments for policymaking to increase institutional transparency of public commenting in a law-making process in the United States. The article applies an unsupervised machine learning analysis of thousands of public comments submitted to the United States Transport Security Administration on a 2013 proposed regulation for the use of new full body imaging scanners in airports. The algorithm highlights salient topic clusters in the public comments that could help policymakers understand open public comments processes. “Algorithms should not only be subject to transparency but can also be used as tool for transparency in government decision-making,” comments Dr. Ingrams.
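Ingrams’ actual pipeline isn’t described in enough detail to reproduce, but as a drastically simplified stand-in for that kind of analysis, the sketch below surfaces the most frequent substantive terms in a handful of invented comments. A real topic model (e.g., LDA) goes much further by grouping co-occurring terms into coherent topics:

```python
import re
from collections import Counter

# A drastically simplified, stdlib-only stand-in for topic analysis of
# public comments (Ingrams used a real unsupervised topic model; this
# sketch only surfaces the most frequent substantive terms).
STOP = {"the", "at", "of", "about", "a", "for", "to", "and", "in"}

def top_terms(comments, n=3):
    words = Counter()
    for c in comments:
        words.update(w for w in re.findall(r"[a-z]+", c.lower())
                     if w not in STOP)
    return [w for w, _ in words.most_common(n)]

comments = [  # invented examples in the spirit of the TSA scanner docket
    "Body scanners invade traveler privacy at airports",
    "Privacy concerns about full body imaging machines",
    "Scanners improve security screening at airport checkpoints",
]
print(top_terms(comments))  # the recurring themes: body, scanners, privacy
```

Even this crude version hints at why the approach appeals to policymakers: recurring public concerns surface automatically from thousands of free-text submissions.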
“Regulatory certainty in combination with organizational and managerial capacity will drive the way the technology is developed and used and what transparency mechanisms are in place for each step,” note the Guest Editors. “On its own these are larger issues to tackle in terms of developing and passing laws or providing training and guidance for public managers and bureaucrats. The fact that they are linked further complicates this process. Highlighting these linkages is a first step towards seeing the bigger picture of why transparency mechanisms are put in place in some scenarios and not in others and opens the door to comparative analyses for future research and new insights for policymakers. To advocate the responsible and transparent use of algorithms, future research should look into the interplay between micro-, meso-, and macro-level dynamics.”
“We are proud to present this special issue, the 100th issue of Information Polity. Its focus on the governance of AI demonstrates our continued desire to tackle contemporary issues in eGovernment and the importance of showcasing excellent research and the insights offered by information polity perspectives,” add Professor Albert Meijer (Utrecht University) and Professor William Webster (University of Stirling), Editors-in-Chief.
This image illustrates the interplay between the various level dynamics,
Here’s a link to, and a citation for, the special issue,
An AI governance publication from the US’s Wilson Center
Within one week of the release of a special issue of Information Polity on AI and governments, a Wilson Center (Woodrow Wilson International Center for Scholars) December 21, 2020 news release (received via email) announces a new publication,
Governing AI: Understanding the Limits, Possibilities, and Risks of AI in an Era of Intelligent Tools and Systems by John Zysman & Mark Nitzberg
In debates about artificial intelligence (AI), imaginations often run wild. Policy-makers, opinion leaders, and the public tend to believe that AI is already an immensely powerful universal technology, limitless in its possibilities. However, while machine learning (ML), the principal computer science tool underlying today’s AI breakthroughs, is indeed powerful, ML is fundamentally a form of context-dependent statistical inference and as such has its limits. Specifically, because ML relies on correlations between inputs and outputs or emergent clustering in training data, today’s AI systems can only be applied in well-specified problem domains, still lacking the context sensitivity of a typical toddler or house-pet. Consequently, instead of constructing policies to govern artificial general intelligence (AGI), decision-makers should focus on the distinctive and powerful problems posed by narrow AI, including misconceived benefits and the distribution of benefits, autonomous weapons, and bias in algorithms. AI governance, at least for now, is about managing those who create and deploy AI systems, and supporting the safe and beneficial application of AI to narrow, well-defined problem domains. Specific implications of our discussion are as follows:
AI applications are part of a suite of intelligent tools and systems and must ultimately be regulated as a set. Digital platforms, for example, generate the pools of big data on which AI tools operate and hence, the regulation of digital platforms and big data is part of the challenge of governing AI. Many of the platform offerings are, in fact, deployments of AI tools. Hence, focusing on AI alone distorts the governance problem.
Simply declaring objectives—be they assuring digital privacy and transparency, or avoiding bias—is not sufficient. We must decide what the goals actually will be in operational terms.
The issues and choices will differ by sector. For example, the consequences of bias and error will differ from a medical domain or a criminal justice domain to one of retail sales.
The application of AI tools in public policy decision making, in transportation design or waste disposal or policing among a whole variety of domains, requires great care. There is a substantial risk of focusing on efficiency when the public debate about what the goals should be in the first place is in fact required. Indeed, public values evolve as part of social and political conflict.
The economic implications of AI applications are easily exaggerated. Should public investment concentrate on advancing basic research or on diffusing the tools, user interfaces, and training needed to implement them?
As difficult as it will be to decide on goals and a strategy to implement the goals of one community, let alone regional or international communities, any agreement that goes beyond simple objective statements is very unlikely.
However, I have found a draft version of the report (Working Paper) published August 26, 2020 on the Social Science Research Network. This paper originated at the University of California at Berkeley as part of a series from the Berkeley Roundtable on the International Economy (BRIE). ‘Governing AI: Understanding the Limits, Possibility, and Risks of AI in an Era of Intelligent Tools and Systems’ is also known as the BRIE Working Paper 2020-5.
Canadian government and AI
The special issue on AI and governance and the paper published by the Wilson Center stimulated my interest in the Canadian government’s approach to governance, responsibility, transparency, and AI.
There is information out there, but it’s scattered across various government initiatives and ministries. Above all, it is not easy to find; open communication doesn’t seem to be a priority. Whether that’s by design, or due to the blindness and/or ineptitude found in all organizations, I leave to wiser judges. (I’ve worked in small companies and they have the problem too. In colloquial terms, ‘the right hand doesn’t know what the left hand is doing’.)
Responsible use? Maybe not after 2019
First, there’s a Government of Canada webpage, Responsible use of artificial intelligence (AI). Other than a note at the bottom of the page, “Date modified: 2020-07-28,” all of the information dates from 2016 up to March 2019 (which you’ll find on ‘Our Timeline’). Is nothing new happening?
For anyone interested in responsible use, there are two sections, “Our guiding principles” and “Directive on Automated Decision-Making,” that answer some questions. I found the ‘Directive’ to be more informative, with its definitions, objectives, and even consequences. Sadly, you need to keep clicking to find the consequences, and you’ll end up on The Framework for the Management of Compliance. Interestingly, deputy heads are assumed to be in charge of managing non-compliance. I wonder how employees deal with a non-compliant deputy head?
What about the government’s digital service?
You might think the Canadian Digital Service (CDS) would also have some information about responsible use. CDS was launched in 2017, according to Luke Simon’s July 19, 2017 article on Medium,
In case you missed it, there was some exciting digital government news in Canada Tuesday. The Canadian Digital Service (CDS) launched, meaning Canada has joined other nations, including the US and the UK, that have a federal department dedicated to digital.
At the time, Simon was Director of Outreach at Code for Canada.
Presumably, CDS, from an organizational perspective, is somehow attached to the Minister of Digital Government (it’s a position with virtually no governmental infrastructure as opposed to the Minister of Innovation, Science and Economic Development who is responsible for many departments and agencies). The current minister is Joyce Murray whose government profile offers almost no information about her work on digital services. Perhaps there’s a more informative profile of the Minister of Digital Government somewhere on a government website.
Meanwhile, the folks at CDS are friendly, but they don’t offer much substantive information. From the CDS homepage,
Our aim is to make services easier for government to deliver. We collaborate with people who work in government to address service delivery problems. We test with people who need government services to find design solutions that are easy to use.
At the Canadian Digital Service (CDS), we partner up with federal departments to design, test and build simple, easy to use services. Our goal is to improve the experience – for people who deliver government services and people who use those services.
How it works
We work with our partners in the open, regularly sharing progress via public platforms. This creates a culture of learning and fosters best practices. It means non-partner departments can apply our work and use our resources to develop their own services.
Together, we form a team that follows the ‘Agile software development methodology’. This means we begin with an intensive ‘Discovery’ research phase to explore user needs and possible solutions to meeting those needs. After that, we move into a prototyping ‘Alpha’ phase to find and test ways to meet user needs. Next comes the ‘Beta’ phase, where we release the solution to the public and intensively test it. Lastly, there is a ‘Live’ phase, where the service is fully released and continues to be monitored and improved upon.
Between the Beta and Live phases, our team members step back from the service, and the partner team in the department continues the maintenance and development. We can help partners recruit their service team from both internal and external sources.
Before each phase begins, CDS and the partner sign a partnership agreement which outlines the goal and outcomes for the coming phase, how we’ll get there, and a commitment to get them done.
As you can see, there’s not a lot of detail and they don’t seem to have included anything about artificial intelligence as part of their operation. (I’ll come back to the government’s implementation of artificial intelligence and information technology later.)
Does the Treasury Board of Canada have charge of responsible AI use?
I think so but there are government departments/ministries that also have some responsibilities for AI and I haven’t seen any links back to the Treasury Board documentation.
The Treasury Board of Canada represents a key entity within the federal government. As an important cabinet committee and central agency, it plays an important role in financial and personnel administration. Even though the Treasury Board plays a significant role in government decision making, the general public tends to know little about its operation and activities. [emphasis mine] The following article provides an introduction to the Treasury Board, with a focus on its history, responsibilities, organization, and key issues.
I haven’t read the entire document but the table of contents doesn’t include a heading for artificial intelligence and there wasn’t any mention of it in the opening comments.
But isn’t there a Chief Information Officer for Canada?
Herein lies a tale (I doubt I’ll ever get the real story), but the answer is a qualified ‘no’. The Chief Information Officer for Canada, Alex Benay (there is an AI aspect), stepped down in September 2019 to join a startup company, according to an August 6, 2019 article by Mia Hunt for Global Government Forum,
Alex Benay has announced he will step down as Canada’s chief information officer next month to “take on new challenge” at tech start-up MindBridge.
“It is with mixed emotions that I am announcing my departure from the Government of Canada,” he said on Wednesday in a statement posted on social media, describing his time as CIO as “one heck of a ride”.
He said he is proud of the work the public service has accomplished in moving the national digital agenda forward. Among these achievements, he listed the adoption of public Cloud across government; delivering the “world’s first” ethical AI management framework; [emphasis mine] renewing decades-old policies to bring them into the digital age; and “solidifying Canada’s position as a global leader in open government”.
He also led the introduction of new digital standards in the workplace, and provided “a clear path for moving off” Canada’s failed Phoenix pay system. [emphasis mine]
Since September 2019, Mr. Benay has moved again, according to a November 7, 2019 article by Meagan Simpson on the BetaKit website (Note: Links have been removed),
Alex Benay, the former CIO [Chief Information Officer] of Canada, has left his role at Ottawa-based Mindbridge after a stint of only a few months.
The news came Thursday, when KPMG announced that Benay was joining the accounting and professional services organization as partner of digital and government solutions. Benay originally announced that he was joining Mindbridge in August, after spending almost two and a half years as the CIO for the Government of Canada.
Benay joined the AI startup as its chief client officer and, at the time, was set to officially take on the role on September 3rd. According to Benay’s LinkedIn, he joined Mindbridge in August, but if the September 3rd start date is correct, Benay would have only been at Mindbridge for around three months. The former CIO of Canada was meant to be responsible for Mindbridge’s global growth as the company looked to prepare for an IPO in 2021.
Benay told The Globe and Mail that his decision to leave Mindbridge was not a question of fit, or that he considered the move a mistake. He attributed his decision to leave to conversations with Mindbridge customer KPMG, over a period of three weeks. Benay told The Globe that he was drawn to the KPMG opportunity to lead its digital and government solutions practice, something that was more familiar to him given his previous role.
Mindbridge has not completely lost what was touted as a star hire, though, as Benay will be staying on as an advisor to the startup. “This isn’t a cutting the cord and moving on to something else completely,” Benay told The Globe. “It’s a win-win for everybody.”
Via Mr. Benay, I’ve re-introduced artificial intelligence and introduced the Phoenix Pay system and now I’m linking them to government implementation of information technology in a specific case and speculating about implementation of artificial intelligence algorithms in government.
Phoenix Pay System Debacle (things are looking up), a harbinger for responsible use of artificial intelligence?
I’m happy to hear that the situation where government employees had no certainty about their paycheques is becoming better. After the ‘new’ Phoenix Pay System was implemented in early 2016, government employees found they might get the correct amount on their paycheque or might find significantly less than they were entitled to or might find huge increases.
The instability alone would be distressing but adding to it with the inability to get the problem fixed must have been devastating. Almost five years later, the problems are being resolved and people are getting paid appropriately, more often.
The estimated cost for fixing the problems was, as I recall, over $1B; I think that was a little optimistic. James Bagnall’s July 28, 2020 article for the Ottawa Citizen provides more detail, although not about the current cost, and is the source of my measured optimism,
Something odd has happened to the Phoenix Pay file of late. After four years of spitting out errors at a furious rate, the federal government’s new pay system has gone quiet.
And no, it’s not because of the even larger drama written by the coronavirus. In fact, there’s been very real progress at Public Services and Procurement Canada [PSPC; emphasis mine], the department in charge of pay operations.
Since January 2018, the peak of the madness, the backlog of all pay transactions requiring action has dropped by about half to 230,000 as of late June. Many of these involve basic queries for information about promotions, overtime and rules. The part of the backlog involving money — too little or too much pay, incorrect deductions, pay not received — has shrunk by two-thirds to 125,000.
These are still very large numbers but the underlying story here is one of long-delayed hope. The government is processing the pay of more than 330,000 employees every two weeks while simultaneously fixing large batches of past mistakes.
While officials with two of the largest government unions — Public Service Alliance of Canada [PSAC] and the Professional Institute of the Public Service of Canada [PIPSC] — disagree the pay system has worked out its kinks, they acknowledge it’s considerably better than it was. New pay transactions are being processed “with increased timeliness and accuracy,” the PSAC official noted.
Neither union is happy with the progress being made on historical mistakes. PIPSC president Debi Daviau told this newspaper that many of her nearly 60,000 members have been waiting for years to receive salary adjustments stemming from earlier promotions or transfers, to name two of the more prominent sources of pay errors.
Even so, the sharp improvement in Phoenix Pay’s performance will soon force the government to confront an interesting choice: Should it continue with plans to replace the system?
Treasury Board, the government’s employer, two years ago launched the process to do just that. Last March, SAP Canada — whose technology underpins the pay system still in use at Canada Revenue Agency — won a competition to run a pilot project. Government insiders believe SAP Canada is on track to build the full system starting sometime in 2023.
When Public Services set out the business case in 2009 for building Phoenix Pay, it noted the pay system would have to accommodate 150 collective agreements that contained thousands of business rules and applied to dozens of federal departments and agencies. The technical challenge has since intensified.
Under the original plan, Phoenix Pay was to save $70 million annually by eliminating 1,200 compensation advisors across government and centralizing a key part of the operation at the pay centre in Miramichi, N.B., where 550 would manage a more automated system.
Instead, the Phoenix Pay system currently employs about 2,300. This includes 1,600 at Miramichi and five regional pay offices, along with 350 each at a client contact centre (which deals with relatively minor pay issues) and client service bureau (which handles the more complex, longstanding pay errors). This has naturally driven up the average cost of managing each pay account — 55 per cent higher than the government’s former pay system according to last fall’s estimate by the Parliamentary Budget Officer.
… As the backlog shrinks, the need for regional pay offices and emergency staffing will diminish. Public Services is also working with a number of high-tech firms to develop ways of accurately automating employee pay using artificial intelligence [emphasis mine].
Given the Phoenix Pay System debacle, it might be nice to see a little information about how the government is planning to integrate more sophisticated algorithms (artificial intelligence) in their operations.
The blonde model or actress mentions that companies applying to Public Services and Procurement Canada for placement on the list must use AI responsibly. Her script does not include a definition or guidelines, which, as previously noted, are on the Treasury Board website.
Public Services and Procurement Canada (PSPC) is putting into operation the Artificial intelligence source list to facilitate the procurement of Canada’s requirements for Artificial intelligence (AI).
After research and consultation with industry, academia, and civil society, Canada identified 3 AI categories and business outcomes to inform this method of supply:
Insights and predictive modelling
PSPC is focused only on procuring AI. If there are guidelines on their website for its use, I did not find them.
I found one more government agency that might have some information about artificial intelligence and guidelines for its use, Shared Services Canada,
Shared Services Canada (SSC) delivers digital services to Government of Canada organizations. We provide modern, secure and reliable IT services so federal organizations can deliver digital programs and services that meet Canadians’ needs.
Since the Minister of Digital Government, Joyce Murray, is listed on the homepage, I was hopeful that I could find out more about AI and governance and whether or not the Canadian Digital Service was associated with this government ministry/agency. I was frustrated on both counts.
To sum up, I could find no information dated after March 2019 about Canada, its government, and its plans for AI, especially responsible management/governance of AI, on a Canadian government website, although I have found guidelines, expectations, and consequences for non-compliance. (Should anyone know which government agency has up-to-date information on its responsible use of AI, please let me know in the Comments.)
In 2017, the Government of Canada appointed CIFAR to develop and lead a $125 million Pan-Canadian Artificial Intelligence Strategy, the world’s first national AI strategy.
CIFAR works in close collaboration with Canada’s three national AI Institutes — Amii in Edmonton, Mila in Montreal, and the Vector Institute in Toronto, as well as universities, hospitals and organizations across the country.
The objectives of the strategy are to:
Attract and retain world-class AI researchers by increasing the number of outstanding AI researchers and skilled graduates in Canada.
Foster a collaborative AI ecosystem by establishing interconnected nodes of scientific excellence in Canada’s three major centres for AI: Edmonton, Montreal, and Toronto.
Advance national AI initiatives by supporting a national research community on AI through training programs, workshops, and other collaborative opportunities.
Understand the societal implications of AI by developing global thought leadership on the economic, ethical, policy, and legal implications [emphasis mine] of advances in AI.
Responsible AI at CIFAR
You can find Responsible AI in a webspace devoted to what they have called ‘AI & Society’. Here’s more from the homepage,
CIFAR is leading global conversations about AI’s impact on society.
The AI & Society program, one of the objectives of the CIFAR Pan-Canadian AI Strategy, develops global thought leadership on the economic, ethical, political, and legal implications of advances in AI. These dialogues deliver new ways of thinking about issues, and drive positive change in the development and deployment of responsible AI.
Canada was the first country in the world to announce a federally-funded national AI strategy, prompting many other nations to follow suit. CIFAR published two reports detailing the global landscape of AI strategies.
I skimmed through the second report and it seems more like a comparative study of various countries’ AI strategies than an overview of responsible use of AI.
Final comments about Responsible AI in Canada and the new reports
I’m glad to see there’s interest in Responsible AI but based on my adventures searching the Canadian government websites and the Pan-Canadian AI Strategy webspace, I’m left feeling hungry for more.
I didn’t find any details about how AI is being integrated into government departments and for what uses. I’d like to know and I’d like to have some say about how it’s used and how the inevitable mistakes will be dealt with.
The great unwashed
What I’ve found is high minded, but, as far as I can tell, there’s absolutely no interest in talking to the ‘great unwashed’. Those of us who are not experts are being left out of these earlier stage conversations.
I’m sure we’ll be consulted at some point but it will be long past the time when our opinions and insights could have had an impact and helped us avoid the problems that experts tend not to see. What we’ll be left with is protest and anger on our part and, finally, grudging admissions and corrections of errors on the government’s part.
Let’s take this for an example. The Phoenix Pay System was implemented in its first phase on Feb. 24, 2016. As I recall, problems developed almost immediately. The second phase of implementation started April 21, 2016. In May 2016 the government hired consultants to fix the problems. On November 29, 2016 the government minister, Judy Foote, admitted a mistake had been made. In February 2017 the government hired consultants to establish what lessons they might learn. By February 15, 2018 the pay problems backlog amounted to 633,000. Source: James Bagnall’s Feb. 23, 2018 ‘timeline‘ for the Ottawa Citizen
Do take a look at the timeline; there’s more to it than what I’ve written here, and I’m sure there’s more to the Phoenix Pay System debacle than a failure to listen to warnings from those who would be directly affected. It’s fascinating, though, how often a failure to listen presages far deeper problems with a project.
Both Conservative and Liberal governments contributed to the Phoenix debacle, but it seems the gravest concern is with senior government bureaucrats. You might think things have changed since this recounting of the affair in a June 14, 2018 article by Michelle Zilio for the Globe and Mail,
The three public servants blamed by the Auditor-General for the Phoenix pay system problems were not fired for mismanagement of the massive technology project that botched the pay of tens of thousands of public servants for more than two years.
Marie Lemay, deputy minister for Public Services and Procurement Canada (PSPC), said two of the three Phoenix executives were shuffled out of their senior posts in pay administration and did not receive performance bonuses for their handling of the system. Those two employees still work for the department, she said. Ms. Lemay, who refused to identify the individuals, said the third Phoenix executive retired.
In a scathing report last month, Auditor-General Michael Ferguson blamed three “executives” – senior public servants at PSPC, which is responsible for Phoenix − for the pay system’s “incomprehensible failure.” [emphasis mine] He said the executives did not tell the then-deputy minister about the known problems with Phoenix, leading the department to launch the pay system despite clear warnings it was not ready.
Speaking to a parliamentary committee on Thursday, Ms. Lemay said the individuals did not act with “ill intent,” noting that the development and implementation of the Phoenix project were flawed. She encouraged critics to look at the “bigger picture” to learn from all of Phoenix’s failures.
Mr. Ferguson, whose office spoke with the three Phoenix executives as a part of its reporting, said the officials prioritized some aspects of the pay-system rollout, such as schedule and budget, over functionality. He said they also cancelled a pilot implementation project with one department that would have helped it detect problems indicating the system was not ready.
Mr. Ferguson’s report warned the Phoenix problems are indicative of “pervasive cultural problems” [emphasis mine] in the civil service, which he said is fearful of making mistakes, taking risks and conveying “hard truths.”
Speaking to the same parliamentary committee on Tuesday, Privy Council Clerk [emphasis mine] Michael Wernick challenged Mr. Ferguson’s assertions, saying his chapter on the federal government’s cultural issues is an “opinion piece” containing “sweeping generalizations.”
The Privy Council Clerk is the top level bureaucrat (and there is only one such clerk) in the civil/public service and I think his quotes are quite telling of “pervasive cultural problems.” There’s a new Privy Council Clerk but from what I can tell he was well trained by his predecessor.
Do we really need senior government bureaucrats?
I now have an example of bureaucratic interference, specifically with the Global Public Health Information Network (GPHIN) where it would seem that not much has changed, from a December 26, 2020 article by Grant Robertson for the Globe & Mail,
When Canada unplugged support for its pandemic alert system [GPHIN] last year, it was a symptom of bigger problems inside the Public Health Agency. Experienced scientists were pushed aside, expertise was eroded, and internal warnings went unheeded, which hindered the department’s response to COVID-19
As a global pandemic began to take root in February, China held a series of backchannel conversations with Canada, lobbying the federal government to keep its borders open.
With the virus already taking a deadly toll in Asia, Heng Xiaojun, the Minister Counsellor for the Chinese embassy, requested a call with senior Transport Canada officials. Over the course of the conversation, the Chinese representatives communicated Beijing’s desire that flights between the two countries not be stopped because it was unnecessary.
“The Chinese position on the continuation of flights was reiterated,” say official notes taken from the call. “Mr. Heng conveyed that China is taking comprehensive measures to combat the coronavirus.”
Canadian officials seemed to agree, since no steps were taken to restrict or prohibit travel. To the federal government, China appeared to have the situation under control and the risk to Canada was low. Before ending the call, Mr. Heng thanked Ottawa for its “science and fact-based approach.”
It was a critical moment in the looming pandemic, but the Canadian government lacked the full picture, instead relying heavily on what Beijing was choosing to disclose to the World Health Organization (WHO). Ottawa’s ability to independently know what was going on in China – on the ground and inside hospitals – had been greatly diminished in recent years.
Canada once operated a robust pandemic early warning system and employed a public-health doctor based in China who could report back on emerging problems. But it had largely abandoned those international strategies over the past five years, and was no longer as plugged-in.
By late February, Ottawa seemed to be taking the official reports from China at their word, stating often in its own internal risk assessments that the threat to Canada remained low. But inside the Public Health Agency of Canada (PHAC), rank-and-file doctors and epidemiologists were growing increasingly alarmed at how the department and the government were responding.
“The team was outraged,” one public-health scientist told a colleague in early April, in an internal e-mail obtained by The Globe and Mail, criticizing the lack of urgency shown by Canada’s response during January, February and early March. “We knew this was going to be around for a long time, and it’s serious.”
China had locked down cities and restricted travel within its borders. Staff inside the Public Health Agency believed Beijing wasn’t disclosing the whole truth about the danger of the virus and how easily it was transmitted. “The agency was just too slow to respond,” the scientist said. “A sane person would know China was lying.”
It would later be revealed that China’s infection and mortality rates were played down in official records, along with key details about how the virus was spreading.
But the Public Health Agency, which was created after the 2003 SARS crisis to bolster the country against emerging disease threats, had been stripped of much of its capacity to gather outbreak intelligence and provide advance warning by the time the pandemic hit.
The Global Public Health Intelligence Network, an early warning system known as GPHIN that was once considered a cornerstone of Canada’s preparedness strategy, had been scaled back over the past several years, with resources shifted into projects that didn’t involve outbreak surveillance.
However, a series of documents obtained by The Globe during the past four months, from inside the department and through numerous Access to Information requests, show the problems that weakened Canada’s pandemic readiness run deeper than originally thought. Pleas from the international health community for Canada to take outbreak detection and surveillance much more seriously were ignored by mid-level managers [emphasis mine] inside the department. A new federal pandemic preparedness plan – key to gauging the country’s readiness for an emergency – was never fully tested. And on the global stage, the agency stopped sending experts [emphasis mine] to international meetings on pandemic preparedness, instead choosing senior civil servants with little or no public-health background [emphasis mine] to represent Canada at high-level talks, The Globe found.
The curtailing of GPHIN and allegations that scientists had become marginalized within the Public Health Agency, detailed in a Globe investigation this past July, are now the subject of two federal probes – an examination by the Auditor-General of Canada and an independent federal review, ordered by the Minister of Health.
Those processes will undoubtedly reshape GPHIN and may well lead to an overhaul of how the agency functions in some areas. The first steps will be identifying and fixing what went wrong. With the country now topping 535,000 cases of COVID-19 and more than 14,700 dead, there will be lessons learned from the pandemic.
Prime Minister Justin Trudeau has said he is unsure what role added intelligence [emphasis mine] could have played in the government’s pandemic response, though he regrets not bolstering Canada’s critical supplies of personal protective equipment sooner. But providing the intelligence to make those decisions early is exactly what GPHIN was created to do – and did in previous outbreaks.
Epidemiologists have described in detail to The Globe how vital it is to move quickly and decisively in a pandemic. Acting sooner, even by a few days or weeks in the early going, and throughout, can have an exponential impact on an outbreak, including deaths. Countries such as South Korea, Australia and New Zealand, which have fared much better than Canada, appear to have acted faster in key tactical areas, some using early warning information they gathered. As Canada prepares itself in the wake of COVID-19 for the next major health threat, building back a better system becomes paramount.
If you have time, do take a look at Robertson’s December 26, 2020 article and the July 2020 Globe investigation. As both articles make clear, senior bureaucrats whose chief attribute seems to have been longevity took over, reallocated resources, drove out experts, and crippled the few remaining experts in the system with a series of bureaucratic demands while taking trips to attend meetings (in desirable locations) for which they had no significant or useful input.
The Phoenix and GPHIN debacles bear a resemblance in that senior bureaucrats took over and in a state of blissful ignorance made a series of disastrous decisions bolstered by politicians who seem to neither understand nor care much about the outcomes.
If you think I’m being harsh, watch Canadian Broadcasting Corporation (CBC) reporter Rosemary Barton’s 2020 year-end interview with Prime Minister Trudeau (Note: There are some commercials). Pay special attention to Trudeau’s answer to the first question,
Responsible AI, eh?
Based on the massive mishandling of the Phoenix Pay System implementation where top bureaucrats did not follow basic and well established information services procedures and the Global Public Health Information Network mismanagement by top level bureaucrats, I’m not sure I have a lot of confidence in any Canadian government claims about a responsible approach to using artificial intelligence.
Unfortunately, it doesn’t matter as implementation is most likely already taking place here in Canada.
Enough with the pessimism. I feel it’s necessary to end this on a mildly positive note. Hurray to the government employees who worked through the Phoenix Pay System debacle, the current and former GPHIN experts who continued to sound warnings, and all those people striving to make true the principles of ‘Peace, Order, and Good Government’, the bedrock principles of the Canadian Parliament.
A lot of mistakes have been made but we also do make a lot of good decisions.
The Wilson Center (also known as the Woodrow Wilson International Center for Scholars) in Washington, DC is hosting a live webcast tomorrow on Dec. 3, 2020 and a call for applications for an internship (deadline: Dec. 18, 2020), and all of it concerns artificial intelligence (AI).
Assessing the AI Agenda: a Dec. 3, 2020 event
This looks like there could be some very interesting discussion about policy and AI, which could be applicable to other countries as well as the US. From a Dec. 2, 2020 Wilson Center announcement (received via email),
Assessing the AI Agenda: Policy Opportunities and Challenges in the 117th Congress
Thursday Dec. 3, 2020 11:00am – 12:30pm ET
Artificial intelligence (AI) technologies occupy a growing share of the legislative agenda and pose a number of policy opportunities and challenges. Please join The Wilson Center’s Science and Technology Innovation Program (STIP) for a conversation with Senate and House staff from the AI Caucuses, as they discuss current policy proposals on artificial intelligence and what to expect — including oversight measures — in the next Congress. The public event will take place on Thursday, December 3 from 11am to 12:30pm EDT, and will be hosted virtually on the Wilson Center’s website. RSVP today.
Sam Mulopulos, Legislative Assistant, Sen. Rob Portman (R-OH)
Sean Duggan, Military Legislative Assistant, Sen. Martin Heinrich (D-NM)
Dahlia Sokolov, Staff Director, Subcommittee on Research and Technology, House Committee on Science, Space, and Technology
Mike Richards, Deputy Chief of Staff, Rep. Pete Olson (R-TX)
Meg King, Director, Science and Technology Innovation Program, The Wilson Center
We hope you will join us for this critical conversation. To watch, please RSVP and bookmark the webpage. Tune in at the start of the event (you may need to refresh once the event begins) on December 3. Questions about this event can be directed to the Science and Technology Program through email at firstname.lastname@example.org or Twitter @WilsonSTIP using the hashtag #AICaucus.
Wilson Center’s AI Lab
This initiative brings to mind some of the science programmes that the UK government hosts for the members of Parliament. From the Wilson Center’s Artificial Intelligence Lab webpage,
Artificial Intelligence issues occupy a growing share of the Legislative and Executive Branch agendas; every day, Congressional aides advise their Members and Executive Branch staff encounter policy challenges pertaining to the transformative set of technologies collectively known as artificial intelligence. It is critically important that both lawmakers and government officials be well-versed in the complex subjects at hand.
What the Congressional and Executive Branch Labs Offer
Similar to the Wilson Center’s other technology training programs (e.g. the Congressional Cybersecurity Lab and the Foreign Policy Fellowship Program), the core of the Lab is a six-week seminar series that introduces participants to foundational topics in AI: what is machine learning; how do neural networks work; what are the current and future applications of autonomous intelligent systems; who are currently the main players in AI; and what will AI mean for the nation’s national security. Each seminar is led by top technologists and scholars drawn from the private, public, and non-profit sectors and a critical component of the Lab is an interactive exercise, in which participants are given an opportunity to take a hands-on role on computers to work through some of the major questions surrounding artificial intelligence. Due to COVID-19, these sessions are offered virtually. When health guidance permits, these sessions will return in-person at the Wilson Center.
Who Should Apply
The Wilson Center invites mid- to senior-level Congressional and Executive Branch staff to participate in the Lab; the program is also open to exceptional rising leaders with a keen interest in AI. Applicants should possess a strong understanding of the legislative or Executive Branch governing process and aspire to a career shaping national security policy.
Side trip: Science Meets (Canadian) Parliament
Briefly, here’s a bit about a programme in Canada, ‘Science Meets Parliament’, from the Canadian Science Policy Centre (CSPC), a not-for-profit, and the Canadian Office of the Chief Science Advisor (OCSA), a position within the Canadian federal government. Here’s a description of the programme from the Science Meets Parliament application webpage,
The objective of this initiative is to strengthen the connections between Canada’s scientific and political communities, enable a two-way dialogue, and promote mutual understanding. This initiative aims to help scientists become familiar with policy making at the political level, and for parliamentarians to explore using scientific evidence in policy making. [emphases mine] This initiative is not meant to be an advocacy exercise, and will not include any discussion of science funding or other forms of advocacy.
The Science Meets Parliament model is adapted from the successful Australian program held annually since 1999. Similar initiatives exist in the EU, the UK and Spain.
CSPC’s program aims to benefit the parliamentarians, the scientific community and, indirectly, the Canadian public.
This seems to be a training programme designed to teach scientists how to influence policy and to teach politicians to base their decisions on scientific evidence or, perhaps, lean on scientific experts that they met in ‘Science Meets Parliament’?
I hope they add some critical thinking to this programme so that politicians can make assessments of the advice they’re being given. Scientists have their blind spots too.
CSPC and OCSA are pleased to offer this program in 2021 to help strengthen the connection between the science and policy communities. The program provides an excellent opportunity for researchers to learn about the inclusion of scientific evidence in policy making in Parliament.
You can find out more about benefits, eligibility, etc. on the application page.
Paid Graduate Research Internship: AI & Facial Recognition
Getting back to the Wilson Center, there’s this opportunity (from a Dec. 1, 2020 notice received by email),
New policy is on the horizon for facial recognition technologies (FRT). Many current proposals, including The Facial Recognition and Biometric Technology Moratorium Act of 2020 and The Ethical Use of Artificial Intelligence Act, either target the use of FRT in areas such as criminal justice or propose general moratoria until guidelines can be put in place. But these approaches are limited by their focus on negative impacts. Effective planning requires a proactive approach that considers broader opportunities as well as limitations and includes consumers, along with federal, state and local government uses.
More research is required to get us there. The Wilson Center seeks to better understand a wide range of opportunities and limitations, with a focus on one critically underrepresented group: consumers. The Science and Technology Innovation Program (STIP) is seeking an intern for Spring 2021 to support a new research project on understanding FRT from the consumer perspective.
A successful candidate will:
Have a demonstrated track record of work on policy and ethical issues related to Artificial Intelligence (AI) generally, Facial Recognition specifically, or other emerging technologies.
Be able to work remotely.
Be enrolled in a degree program, recently graduated (within the last year) and/or have been accepted to enter an advanced degree program within the next year.
Interested applicants should submit:
Cover letter explaining your general interest in STIP and specific interest in this topic, including dates and availability.
CV / Resume
Two brief writing samples (formal and/or informal), ideally demonstrating your work in science and technology research.
Applications are due Friday, December 18th. Please email all application materials as a single PDF to Erin Rohn, email@example.com. Questions on this role can be directed to Anne Bowser, firstname.lastname@example.org.
Intelligence Squared (IQ2US) was featured here in a January 18, 2019 posting when the organization hosted a ‘de-extinction’ (or ‘resurrection’) biology debate. I was quite impressed with the quality of the arguments, pro and con (for and against) and the civility with which the participants conducted themselves. Fingers crossed their upcoming Nov. 6, 2020 debate proves as satisfying.
It should be noted that Bloomberg TV is co-hosting this event with Intelligence Squared (IQ2US) and IBM is sponsoring it.
Here’s more about the debate on the motion: A U.S.-China Space Race Is Good for Humanity, from an Oct. 26, 2020 Shore Fire announcement (received via email),
Next Friday evening [Nov. 6, 2020] at 7:00 pm ET, the nonprofit debate series Intelligence Squared U.S. will hold a live debate on the motion “A U.S.-China Space Race Is Good for Humanity.”
Two of their debaters have released statements commenting on today’s news [emphasis mine; I have included information about the Oct. 26, 2020 news after this event information] out of NASA. One, Bidushi Bhattacharya, is a twenty-year veteran of NASA. The other, Avi Loeb, is one of the most prominent scientists working on space today.
… they will be debating for the motion “A U.S.-China Space Race Is Good for Humanity” with Intelligence Squared U.S. … . The debate will be viewable on Bloomberg TV’s new show ‘That’s Debatable’. Their opponents are Michio Kaku and Rajeswari Pillai Rajagopalan.
AVI LOEB STATEMENT:
“It was already known from previous studies that there is water ice on the lunar surface. But the new study identified that it is more abundant and exists all over the Moon. Interestingly, a month ago we published a paper with my former postdoc, Manasvi Lingam, arguing that liquid water may exist deep under the surface of the Moon and support sub-surface life.
“The existence of significant amounts of water on the lunar surface can be helpful for establishing a sustainable base there in the context of NASA’s Artemis program with its international partners. This will be the first step in advancing humanity to more distant destinations, such as Mars and beyond. There is no doubt that our future lies in space, not only for national security and commercial benefits but mainly for scientific exploration aimed at opening new horizons to our civilization. Earlier in October, eight countries signed the Artemis Accords, a set of international agreements drawn up by the US concerning future exploration of the Moon and the use of its resources. The Accords recognize that exploration of the Moon should be for peaceful purposes.
“In analogy with the scientific exploration conducted in the South Pole, it would be particularly interesting to search for life under the surface of the Moon once we establish a scientific base there.”
BIDUSHI BHATTACHARYA STATEMENT:
“Today’s [Oct. 26, 2020] announcement has huge implications for the commercial space development sector. Private companies and startups now have a new product development opportunity. I can see a path for leveraging today’s off-planet capabilities to develop AI-based robotics to provide water extraction services for NASA, so that the agency can continue to focus on R&D.”
Avi Loeb: Theoretical Physicist & Professor
Abraham (Avi) Loeb is a theoretical physicist, author, and Harvard professor. He was the longest-serving chair of Harvard’s astronomy department (for nine years) and is an elected member of the American Academy of Arts and Sciences, the American Physical Society, and the International Academy of Astronautics. Loeb is a member of the President’s Council of Advisors on Science and Technology at the White House and, in 2012, TIME magazine selected Loeb as one of the 25 most influential people in space.
Bidushi Bhattacharya: Rocket Scientist & Space Entrepreneur
Bidushi Bhattacharya is a rocket scientist and entrepreneur. After two decades with NASA working on projects including the Hubble Space Telescope and Galileo probe to Jupiter, Bhattacharya founded Astropreneurs HUB, Southeast Asia’s first space technology incubator. She currently serves on the Global Entrepreneurship Network Space Advisory Board and is the CEO of Bhattacharya Space Enterprises, a Singaporean startup dedicated to space-related education and training.
NASA announced on Oct. 26, 2020 that it had found water (molecular water, rather than the ice detected previously) on the moon. To be more specific, the water was found in a crater named after Christopher Clavius, a Jesuit priest who was also an astronomer and a mathematician. Given that piece of information, it’s perhaps not surprising that my cursory search yielded (near the top of the list) an Oct. 26, 2020 article about the discovery, Clavius, and the Jesuits’ interest in the stars by Molly Cahill for America Magazine: The Jesuit Review (Note: Links have been removed),
On Oct. 26, NASA’s Stratospheric Observatory for Infrared Astronomy, or SOFIA, announced the discovery of water on the moon. The water was discovered on the moon’s sunlit surface, which “indicates that water may be distributed across the lunar surface, and not limited to cold, shadowed places,” according to a press release.
His [Christopher Clavius’s] observation in 1560 of a total solar eclipse as a student inspired his life’s work: astronomy. Clavius is known for his work on refining and modifying the modern Gregorian calendar, and as Billy Critchley-Menor, S.J., wrote in America, Clavius was even called the “Euclid of the 16th century” before his death in 1612. He was one of the first mathematicians in the West to popularize the use of the decimal point, and his contributions to astronomy influenced Galileo, even though Clavius himself assented to a geocentric solar system, believing the heavens rotated around the Earth.
On Friday, November 6 at 7:00 PM ET Bloomberg Television will present the second episode of the new limited series “That’s Debatable,” presented in partnership with Intelligence Squared U.S. and sponsored exclusively by IBM, with an episode debating the motion “A U.S.-China Space Race Is Good for Humanity.” China is ramping up its national space industry with huge investments in next-generation technologies that promise to transform military, economic, and political realities. Could the U.S.-China space race drive innovation, rally public support for science and discovery, and launch humans into the next generation? Or would this competition catalyze an expensive global arms race, militarize space for decades to come, and destroy any hope of international peace and cohesion in the future?
Arguing in favor of the motion “A U.S.-China Space Race Is Good for Humanity” are Harvard physicist and member of the President’s Council of Advisors on Science and Technology at the White House Avi Loeb and rocket scientist Bidushi Bhattacharya, who spent two decades with NASA working on the Hubble Space Telescope and Galileo probe. Arguing against the motion are theoretical physicist Michio Kaku, a co-founder of String Field Theory, and nuclear weapons and space policy expert Rajeswari Pillai Rajagopalan.
Filmed in front of a live virtual audience, “That’s Debatable” will be conducted in the traditional Oxford-style format with two teams of two subject matter experts debating over four rounds, moderated by veteran Intelligence Squared U.S. moderator John Donvan. The live virtual audience will vote via mobile for or against the motion to determine the winner, to be announced at the conclusion of the program.
“That’s Debatable” also presents some of the first AI-aided debates, designed to demonstrate how AI can be used to bring a larger, more diverse range of voices and opinions to the public square. …
During the debate, IBM Watson plans to use Key Point Analysis, a new capability in Natural Language Processing (NLP) developed by the same IBM Research team that created Project Debater, which is designed to analyze viewer submitted arguments [deadline was Oct. 18, 2020] and provide insight into the global public opinion on each episode’s debate topic.
… [Note: The BIOS for those ‘arguing for the motion’ is in the Oct. 26, 2020 announcement excerpted near the beginning of this post]
Michio Kaku is one of the most widely recognized figures in science. He is a theoretical physicist, international bestselling author, and co-founder of String Field Theory. His most recent book, “Future of Humanity,” projects the future of the space program centuries into the future. Kaku is a professor at the City University of New York.
Rajeswari Pillai Rajagopalan: Nuclear Weapons & Space Policy Expert
Rajeswari Pillai Rajagopalan is a distinguished fellow and head of the Nuclear and Space Policy Initiative at the Observer Research Foundation, one of India’s leading think tanks. Rajagopalan also recently served as a technical advisor to the United Nations Group of Governmental Experts on Prevention of Arms Race in Outer Space. She is the author of “The Dragon’s Fire: Chinese Military Strategy and Its Implications for Asia.”
About Bloomberg Media:
Bloomberg Media is a leading, global, multi-platform brand that provides decision-makers with timely news, analysis and intelligence on business, finance, technology, climate change, politics and more. Powered by a newsroom of over 2,700 journalists and analysts, it reaches influential audiences worldwide across every platform including digital, social, TV, radio, print and live events. Bloomberg Media is a division of Bloomberg LP. Visit BloombergMedia.com for more information.
About Intelligence Squared U.S.:
A non-partisan, non-profit organization, Intelligence Squared U.S. was founded to address a fundamental problem in America: the extreme polarization of our nation and our politics. Their mission is to restore critical thinking, facts, reason, and civility to American public discourse. The award-winning debate series reaches millions of viewers and listeners through multi-platform distribution, including public radio, podcasts, live streaming, newsletters, interactive digital content, and on-demand apps including Roku and Apple TV. With over 180 debates and counting, Intelligence Squared U.S. has encouraged the public to “think twice” on a wide range of provocative topics. Author and ABC News correspondent John Donvan has moderated IQ2US since 2008.
About IBM Watson:
Watson is IBM’s AI technology for business, helping organizations to better predict and shape future outcomes, automate complex processes, and optimize employees’ time. Watson has evolved from an IBM Research project, to experimentation, to a scaled set of products that run anywhere. With more than 30,000 client engagements, Watson is being applied by leading global brands across a variety of industries to transform how people work. To learn more, visit: https://www.ibm.com/watson.
To learn more about Natural Language Processing and how new capabilities like Key Point Analysis are designed to analyze and generate insights from thousands of arguments on any topic, visit: https://www.ibm.com/watson/natural-language-processing.
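IBM hasn’t published the internals of Key Point Analysis in this announcement, but the basic idea it describes, matching thousands of free-text viewer arguments against a short list of salient “key points,” can be illustrated with a toy sketch. The word-overlap (Jaccard) similarity below is my own simplification for illustration, not IBM’s actual method, and all names in it are hypothetical:

```python
# Toy illustration of key-point matching (NOT IBM's Key Point Analysis
# implementation): each submitted argument is assigned to the most
# lexically similar key point, or to None if nothing clears a threshold.

def tokenize(text):
    """Lowercase a string and return its set of words, stripped of punctuation."""
    return {w.strip(".,!?\"'").lower() for w in text.split()}

def jaccard(a, b):
    """Jaccard similarity between two sets: |intersection| / |union|."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def match_key_points(arguments, key_points, threshold=0.1):
    """Map each argument to its best-matching key point (or None)."""
    results = {}
    for arg in arguments:
        tokens = tokenize(arg)
        best, best_score = None, threshold
        for kp in key_points:
            score = jaccard(tokens, tokenize(kp))
            if score > best_score:
                best, best_score = kp, score
        results[arg] = best
    return results

key_points = [
    "A space race drives innovation",
    "A space race risks militarizing space",
]
arguments = [
    "Competition in space will drive technological innovation",
    "Racing to space risks militarizing orbit for decades",
]
for arg, kp in match_key_points(arguments, key_points).items():
    print(f"{arg!r} -> {kp!r}")
```

A production system would use trained language models rather than word overlap, but the aggregation step, collapsing many arguments into a handful of representative key points, is the same shape.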
The ArtSci Salon is getting quite active these days. Here’s the latest from an Oct. 22, 2020 ArtSci Salon announcement (received via email), which can also be viewed on their Kaleidoscope event page,
Performing togetherness in empty spaces
An experimental collaboration between the ArtSci Salon, the Digital Dramaturgy Lab_squared/ DDL2 and Sensorium: Centre for Digital Arts and Technology, York University (Toronto, Ontario, Canada)
Tuesday, October 27, 2020
7:30 pm [EDT]
Join our evening of live-streamed, multi-media performances, following a kaleidoscopic dramaturgy of complexity discourses as inspired by computational complexity theory gatherings.
We are presenting installations, site-specific artistic interventions and media experiments, featuring networked audio and video, dance and performances as we repopulate spaces – The Fields Institute and surroundings – forced to lie empty due to the pandemic. Respecting physical distance and new sanitation and safety rules can be challenging, but it can also open up new ideas and opportunities.
NOTE: DDL2 contributions to this event are sourced or inspired by their recent kaleidoscopic performance “Rattling the Curve – Paradoxical ECODATA performances of A/I (artistic intelligence), and facial recognition of humans and trees”
Virtual space/live streaming concept and design: DDL2 Antje Budde, Karyn McCallum and Don Sinclair
Virtual space and streaming pilot: Don Sinclair
Here are specific programme details (from the announcement),
Signing the Virus – Video (2 min.) Collaborators: DDL2 Antje Budde, Felipe Cervera, Grace Whiskin
Niimi II – Performance and outdoor video projection (15 min.) (Niimi means in Anishinaabemowin: s/he dances) Collaborators: DDL2 Candy Blair, Antje Budde, Jill Carter, Lars Crosby, Nina Czegledy, Dave Kemp
Oracle Jane (Scene 2) – A partial playreading on the politics of AI (30 min.) Playwright: DDL2 Oracle Collaborators: DDL2 Antje Budde, Frans Robinow, George Bwannika Seremba, Amy Wong and AI ethics consultant Vicki Zhang
Vriksha/Tree – Dance video and outdoor projection (8 min.) Collaborators: DDL2 Antje Budde, Lars Crosby, Astad Deboo, Dave Kemp, Amit Kumar
Facial Recognition – Performing a Plate Camera from a Distance (3 min.) Collaborators: DDL2 Antje Budde, Jill Carter, Felipe Cervera, Nina Czegledy, Karyn McCallum, Lars Crosby, Martin Kulinna, Montgomery C. Martin, George Bwanika Seremba, Don Sinclair, Heike Sommer
Cutting Edge – Growing Data (6 min.) DDL2 A performance by Antje Budde
“void * ambience” – Architectural and instrumental acoustics, projection mapping Concept: Sensorium: The Centre for Digital Art and Technology, York University Collaborators: Michael Palumbo, Ilze Briede [Kavi], Debashis Sinha, Joel Ong
This performance is part of a series (from the announcement),
These three performances are part of Boundary-Crossings: Multiscalar Entanglements in Art, Science and Society, a public Outreach program supported by the Fiends [sic] Institute for Research in Mathematical Science. Boundary Crossings is a series exploring how the notion of boundaries can be transcended and dissolved in the arts and the humanities, the biological and the mathematical sciences, as well as human geography and political economy. Boundaries are used to establish delimitations among disciplines; to discriminate between the human and the non-human (body and technologies, body and bacteria); and to indicate physical and/or artificial boundaries, separating geographical areas and nation states. Our goal is to cross these boundaries by proposing new narratives to show how the distinctions, and the barriers that science, technology, society and the state have created can in fact be re-interpreted as porous and woven together.
This event is curated and produced by ArtSci Salon; Digital Dramaturgy Lab_squared/ DDL2; Sensorium: Centre for Digital Arts and Technology, York University; and Ryerson University; it is supported by The Fields Institute for Research in Mathematical Sciences
Finally, the announcement includes biographical information about all of the ‘boundary-crossers’,
Candy Blair (Tkaron:to/Toronto) Candy Blair/Otsίkh:èta (they/them) is a mixed First Nations/European, 2-spirit interdisciplinary visual and performing artist from Tio’tía:ke – where the group split (“Montreal”) in Québec.
While continuing their work as an artist they also finished their Creative Arts, Literature, and Languages program at Marianopolis College (cégep), their 1st year in the Theatre program at York University, and their 3rd year Acting Conservatory Program at the Centre For Indigenous Theatre in Tsí Tkaròn:to – Where the trees stand in water (“Toronto”).
Some of Candy’s notable performances are Jill Carter’s Encounters at the Edge of the Woods, exploring a range of issues with colonization; Ange Loft’s project Talking Treaties, discussing the treaties of the “Toronto” purchase; Cheri Maracle’s The Story of Six Nations, exploring Six Nations’ origin story through dance/combat choreography, and several other performances, exploring various topics around Indigenous language, land, and cultural restoration through various mediums such as dance, modelling, painting, theatre, directing, song, etc. As an activist and soon to be entrepreneur, Candy also enjoys teaching workshops around promoting Indigenous resurgence such as Indigenous hand drumming, food sovereignty, beading, medicine knowledge, etc.
Working with their collectives like Weave and Mend, they were responsible for the design, land purification, and installation process of the four medicine plots and a community space with their 3 other members. Candy aspires to continue exploring ways of decolonization through healthy traditional practices from their mixed background and the arts in the hopes of eventually supporting Indigenous relations worldwide.
Antje Budde Antje Budde is a conceptual, queer-feminist, interdisciplinary experimental scholar-artist and an Associate Professor of Theatre Studies, Cultural Communication and Modern Chinese Studies at the Centre for Drama, Theatre and Performance Studies, University of Toronto. Antje has created multi-disciplinary artistic works in Germany, China and Canada and works tri-lingually in German, English and Mandarin. She is the founder of a number of queerly feminist performing art projects including most recently the (DDL)2 or (Digital Dramaturgy Lab)Squared – a platform for experimental explorations of digital culture, creative labor, integration of arts and science, and technology in performance. She is interested in the intersections of natural sciences, the arts, engineering and computer science.
Roberta Buiani Roberta Buiani (MA; PhD York University) is the Artistic Director of the ArtSci Salon at the Fields Institute for Research in Mathematical Sciences (Toronto). Her artistic work has travelled to art festivals (Transmediale; Hemispheric Institute Encuentro; Brazil), community centres and galleries (the Free Gallery Toronto; Immigrant Movement International, Queens; Myseum of Toronto), and science institutions (RPI; the Fields Institute). Her writing has appeared in Space and Culture, Cultural Studies, and The Canadian Journal of Communication, among others. With the ArtSci Salon she has launched a series of experiments in “squatting academia,” by re-populating abandoned spaces and cabinets across university campuses with SciArt installations.
Currently, she is a research associate at the Centre for Feminist Research and a Scholar in Residence at Sensorium: Centre for Digital Arts and Technology at York University [Toronto, Ontario, Canada].
Jill Carter (Tkaron:to/Toronto) Jill (Anishinaabe/Ashkenazi) is a theatre practitioner and researcher, currently cross appointed to the Centre for Drama, Theatre and Performance Studies; the Transitional Year Programme; and Indigenous Studies at the University of Toronto. She works with many members of Tkaron:to’s Indigenous theatre community to support the development of new works and to disseminate artistic objectives, process, and outcomes through community-driven research projects. Her scholarly research, creative projects, and activism are built upon ongoing relationships with Indigenous Elders, Artists and Activists, positioning her as witness to, participant in, and disseminator of oral histories that speak to the application of Indigenous aesthetic principles and traditional knowledge systems to contemporary performance. The research questions she pursues revolve around the mechanics of story creation, the processes of delivery and the manufacture of affect.
More recently, she has concentrated upon Indigenous pedagogical models for the rehearsal studio and the lecture hall; the application of Indigenous [insurgent] research methods within performance studies; the politics of land acknowledgements; and land-based dramaturgies/activations/interventions.
Jill also works as a researcher and tour guide with First Story Toronto; facilitates Land Acknowledgement, Devising, and Land-based Dramaturgy Workshops for theatre makers in this city; and performs with the Talking Treaties Collective (Jumblies Theatre, Toronto).
In September 2019, Jill directed Encounters at the Edge of the Woods. This was a devised show, featuring Indigenous and Settler voices, and it opened Hart House Theatre’s 100th season; it is the first instance of Indigenous presence on Hart House Theatre’s stage in its 100 years of existence as the cradle for Canadian theatre.
Nina Czegledy (Toronto) artist, curator, educator, works internationally on collaborative art, science & technology projects. The changing perception of the human body and its environment as well as paradigm shifts in the arts inform her projects. She has exhibited and published widely, won awards for her artwork and has initiated, led, and participated in workshops, forums and festivals worldwide at international events.
Astad Deboo (Mumbai, India) Astad Deboo is a contemporary dancer and choreographer who employs his training in the Indian classical dance forms of Kathak as well as Kathakali to create a dance form that is unique to him. He has become a pioneer of modern dance in India. Astad describes his style as “contemporary in vocabulary and traditional in restraints.” Throughout his long and illustrious career, he has worked with various prominent performers such as Pina Bausch, Alison Becker Chase and Pink Floyd and performed in many parts of the world. He has been awarded the Sangeet Natak Akademi Award (1996) and Padma Shri (2007), awarded by the Government of India. In January 2005, along with 12 young women with hearing impairment supported by the Astad Deboo Dance Foundation, he performed at the 20th Annual Deaf Olympics at Melbourne, Australia. Astad has a long record of working with disadvantaged youth.
Ilze Briede [Kavi] Ilze Briede [artist name: Kavi] is a Latvian/Canadian artist and researcher with broad and diverse interests. Her artistic practice, a hybrid of video, image and object making, investigates the phenomenon of perception and the constraints and boundaries between the senses and knowing. Kavi is currently pursuing a PhD degree in Digital Media at York University with a research focus on computational creativity and generative art. She sees computer-generated systems and algorithms as a potentiality for co-creation and collaboration between human and machine. Kavi has previously worked and exhibited with Fashion Art Toronto, Kensington Market Art Fair, Toronto Burlesque Festival, Nuit Blanche, Sidewalk Toronto and the Toronto Symphony Orchestra.
Dave Kemp Dave Kemp is a visual artist whose practice looks at the intersections and interactions between art, science and technology: particularly at how these fields shape our perception and understanding of the world. His artworks have been exhibited widely at venues such as at the McIntosh Gallery, The Agnes Etherington Art Centre, Art Gallery of Mississauga, The Ontario Science Centre, York Quay Gallery, Interaccess, Modern Fuel Artist-Run Centre, and as part of the Switch video festival in Nenagh, Ireland. His works are also included in the permanent collections of the Agnes Etherington Art Centre and the Canada Council Art Bank.
Stephen Morris Stephen Morris is Professor of experimental non-linear physics in the Department of Physics at the University of Toronto. He is the Scientific Director of the ArtSci Salon at the Fields Institute for Research in Mathematical Sciences. He often collaborates with artists and has himself performed and produced art involving his own scientific instruments and experiments in non-linear physics and pattern formation.
Michael Palumbo Michael Palumbo (MA, BFA) is an electroacoustic music improviser, coder, and researcher. His PhD research spans distributed creativity and version control systems, and is expressed through “git show”, a distributed electroacoustic music composition and design experiment, and “Mischmasch”, a collaborative modular synthesizer in virtual reality. He studies with Dr. Doug Van Nort as a researcher in the Distributed Performance and Sensorial Immersion Lab, and Dr. Graham Wakefield at the Alice Lab for Computational Worldmaking. His works have been presented internationally, including at ISEA, AES, NIME, Expo ’74, TIES, and the Network Music Festival. He performs regularly with a modular synthesizer, runs the Exit Points electroacoustic improvisation series, and is an enthusiastic gardener and yoga practitioner.
Joel Ong (PhD, Digital Arts and Experimental Media (DXARTS), University of Washington) Joel Ong is a media artist whose works connect scientific and artistic approaches to the environment, particularly with respect to sound and physical space. Professor Ong’s work explores the way objects and spaces can function as repositories of ‘frozen sound’, and in elucidating these, he is interested in creating what systems theorist Jack Burnham (1968) refers to as “art (that) does not reside in material entities, but in relations between people and between people and the components of their environment”.
A serial collaborator, Professor Ong is invested in the broader scope of Art-Science collaborations and is engaged constantly in the discourses and processes that facilitate viewing these two polemical disciplines on similar ground. His graduate interdisciplinary work in nanotechnology and sound was conducted at SymbioticA, the Center of Excellence for Biological Arts at the University of Western Australia and supervised by BioArt pioneers and TCA (The Tissue Culture and Art Project) artists Dr Ionat Zurr and Oron Catts.
George Bwanika Seremba George Bwanika Seremba is an actor, playwright and scholar. He was born in Uganda. George holds an M.Phil and a PhD in Theatre Studies from Trinity College Dublin. In 1980, having barely survived a botched execution by the Military Intelligence, he fled into exile, resettling in Canada (1983). He has performed in numerous plays including his own, “Come Good Rain,” which was awarded a Dora award (1993). In addition, he published a number of edited play collections including “Beyond the pale: dramatic writing from First Nations writers & writers of colour,” co-edited with Yvette Nolan and Betty Quan (1996).
George was nominated for the Irish Times’ Best Actor award at Dublin’s Calypso Theatre for his role in Athol Fugard’s “Master Harold and the boys”. In addition to theatre he performed in several movies and on television. His doctoral thesis (2008), entitled “Robert Serumaga and the Golden Age of Uganda’s Theatre (1968-1978): (Solipsism, Activism, Innovation),” will be published as a monograph by CSP (U.K.) in 2021.
Don Sinclair (Toronto) Don is Associate Professor in the Department of Computational Arts at York University. His creative research areas include interactive performance, projections for dance, sound art, web and data art, cycling art, sustainability, and choral singing, most often using code and programming. Don is particularly interested in processes of artistic creation that integrate digital creative coding-based practices with performance in dance and theatre. As well, he is an enthusiastic cyclist.
Debashis Sinha Driven by a deep commitment to the primacy of sound in creative expression, Debashis Sinha has realized projects in radiophonic art, music, sound art, audiovisual performance, theatre, dance, and music across Canada and internationally. Sound design and composition credits include numerous works for Peggy Baker Dance Projects and productions with Canada’s premiere theatre companies including The Stratford Festival, Soulpepper, Volcano Theatre, Young People’s Theatre, Project Humanity, The Theatre Centre, Nightwood Theatre, Why Not Theatre, MTC Warehouse and Necessary Angel. His live sound practice on the concert stage has led to appearances at MUTEK Montreal, MUTEK Japan, the Guelph Jazz Festival, the Banff Centre, The Music Gallery, and other venues. Sinha teaches sound design at York University and the National Theatre School, and is currently working on a multi-part audio/performance work incorporating machine learning and AI funded by the Canada Council for the Arts.
Vicki (Jingjing) Zhang (Toronto) Vicki Zhang is a faculty member at the University of Toronto’s statistics department. She is the author of Uncalculated Risks (Canadian Scholars’ Press, 2014). She is also a playwright, whose plays have been produced or stage read in various festivals and venues in Canada including Toronto’s New Ideas Festival, Winnipeg’s FemFest, Hamilton Fringe Festival, Ergo Pink Fest, InspiraTO festival, Toronto’s Festival of Original Theatre (FOOT), Asper Center for Theatre and Film, Canadian Museum for Human Rights, Cultural Pluralism in the Arts Movement Ontario (CPAMO), and the Canadian Play Thing. She has also written essays and short fiction for Rookie Magazine and Thread.
If you can’t attend this Oct. 27, 2020 event, there’s still the Oct. 29, 2020 Boundary-Crossings event: Beauty Kit (see my Oct. 12, 2020 posting for more).
As for Kaleidoscopic Imaginations, you can access the Streaming Link On Oct. 27, 2020 at 7:30 pm EDT (4 pm PDT).