Tag Archives: Alberta Machine Intelligence Institute (Amii)

STEM (science, technology, engineering and math) brings life to the global hit television series “The Walking Dead” and a Canadian AI initiative for women and diversity

I stumbled across this June 8, 2022 AMC Networks news release in the last place I was expecting to see a STEM (science, technology, engineering, and mathematics) announcement, i.e., a self-described global entertainment company’s website,

AMC NETWORKS CONTENT ROOM TEAMS WITH THE AD COUNCIL TO EMPOWER GIRLS IN STEM, FEATURING “THE WALKING DEAD”

AMC Networks Content Room and the Ad Council, a non-profit and leading producer of social impact campaigns for 80 years, announced today a series of new public service advertisements (PSAs) that will highlight the power of girls in STEM (science, technology, engineering and math) against the backdrop of the global hit series “The Walking Dead.”  In the spots, behind-the-scenes talent of the popular franchise, including Director Aisha Tyler, Costume Designer Vera Chow and Art Director Jasmine Garnet, showcase how STEM is used to bring the post-apocalyptic world of “The Walking Dead” to life on screen.  Created by AMC Networks Content Room, the PSAs are part of the Ad Council’s national She Can STEM campaign, which encourages girls, trans youth and non-binary youth around the country to get excited about and interested in STEM.

The new creative consists of TV spots and custom videos created specifically for TikTok and Instagram.  The spots also feature Gitanjali Rao, a 16-year-old scientist, inventor and activist, interviewing Tyler, Chow and Garnet discussing how they and their teams use STEM in the production of “The Walking Dead.”  Using before and after visuals, each piece highlights the unique and unexpected uses of STEM in the making of the series.  In addition to being part of the larger Ad Council campaign, the spots will be available on “The Walking Dead’s” social media platforms, including Facebook, Instagram, Twitter and YouTube pages, and across AMC Networks linear channels and digital platforms.

PSA:   https://youtu.be/V20HO-tUO18

Social: https://youtu.be/LnDwmZrx6lI

Said Kim Granito, EVP of AMC Networks Content Room: “We are thrilled to partner with the Ad Council to inspire young girls in STEM through the unexpected backdrop of ‘The Walking Dead.’  Over the last 11 years, this universe has been created by an array of insanely talented women that utilize STEM every day in their roles.  This campaign will broaden perceptions of STEM beyond the stereotypes of lab coats and beakers, and hopefully inspire the next generation of talented women in STEM.  Aisha Tyler, Vera Chow and Jasmine Garnet were a dream to work with and their shared enthusiasm for this mission is inspiring.”

“Careers in STEM are varied and can touch all aspects of our lives. We are proud to partner with AMC Networks Content Room on this latest work for the She Can STEM campaign. With it, we hope to inspire young girls, non-binary youth, and trans youth to recognize that their passion for STEM can impact countless industries – including the entertainment industry,” said Michelle Hillman, Chief Campaign Development Officer, Ad Council.

Women make up nearly half of the total college-educated workforce in the U.S., but they only constitute 27% of the STEM workforce, according to the U.S. Census Bureau. Research shows that many girls lose interest in STEM as early as middle school, and this path continues through high school and college, ultimately leading to an underrepresentation of women in STEM careers.  She Can STEM aims to dismantle the intimidating perceived barrier of STEM fields by showing girls, non-binary youth, and trans youth how fun, messy, diverse and accessible STEM can be, encouraging them to dive in, no matter where they are in their STEM journey.

Since the launch of She Can STEM in September 2018, the campaign has been supported by a variety of corporate, non-profit and media partners. The current funder of the campaign is IF/THEN, an initiative of Lyda Hill Philanthropies.  Non-profit partners include Black Girls Code, ChickTech, Girl Scouts of the USA, Girls Inc., Girls Who Code, National Center for Women & Information Technology, The New York Academy of Sciences and Society of Women Engineers.

About AMC Networks Inc.

AMC Networks (Nasdaq: AMCX) is a global entertainment company known for its popular and critically-acclaimed content. Its brands include targeted streaming services AMC+, Acorn TV, Shudder, Sundance Now, ALLBLK, and the newest addition to its targeted streaming portfolio, the anime-focused HIDIVE streaming service, in addition to AMC, BBC AMERICA (operated through a joint venture with BBC Studios), IFC, SundanceTV, WE tv and IFC Films. AMC Studios, the Company’s in-house studio, production and distribution operation, is behind some of the biggest titles and brands known to a global audience, including The Walking Dead, the Anne Rice catalog and the Agatha Christie library.  The Company also operates AMC Networks International, its international programming business, and 25/7 Media, its production services business.

About Content Room

Content Room is AMC Networks’ award-winning branded entertainment studio that collaborates with advertising partners to build brand stories and create bespoke experiences across an expanding range of digital, social, and linear platforms. Content Room enables brands to fully tap into the company’s premium programming, distinct IP, deep talent roster and filmmaking roots through an array of creative partnership opportunities, from premium branded content and integrations to franchise and gaming extensions.

Content Room is also home to an award-winning digital content studio that produces dozens of original series annually, expanding popular AMC Networks scripted programming for both fans and advertising partners by leveraging the massive built-in series and talent fandoms.

The Ad Council
The Ad Council is where creativity and causes converge. The non-profit organization brings together the most creative minds in advertising, media, technology and marketing to address many of the nation’s most important causes. The Ad Council has created many of the most iconic campaigns in advertising history. Friends Don’t Let Friends Drive Drunk. Smokey Bear. Love Has No Labels.

The Ad Council’s innovative social good campaigns raise awareness, inspire action and save lives. To learn more, visit AdCouncil.org, follow the Ad Council’s communities on Facebook and Twitter, and view the creative on YouTube.

You can find the ‘She Can STEM’ Ad Council initiative here.

Canadian women and the AI4Good Lab

A June 9, 2022 posting on the Borealis AI website describes an artificial intelligence (AI) initiative designed to encourage women to enter the field,

The AI4Good Lab is one of those programs that creates exponential opportunities. As the leading Canadian AI-training initiative for women-identified STEM students, the lab helps encourage diversity in the field of AI. Participants work together to use AI to solve a social problem, delivering untold benefits to their local communities. And they work shoulder-to-shoulder with other leaders in the field of AI, building their networks and expanding the ecosystem.

At this year’s [2022] AI4Good Lab Industry Night, program partners – like Borealis AI, RBC [Royal Bank of Canada], DeepMind, Ivado and Google – had an opportunity to (virtually) meet the nearly 90  participants of this year’s program. Many of the program’s alumni were also in attendance. So, too, were representatives from CIFAR [Canadian Institute for Advanced Research], one of Canada’s leading global research organizations.

Industry participants – including Dr. Eirene Seiradaki, Director of Research Partnerships at Borealis AI, Carey Mende-Gibson, RBC’s Location Intelligence ambassador, and Lucy Liu, Director of Data Science at RBC – talked with attendees about their experiences in the AI industry, discussed career opportunities and explored various career paths that the participants could take in the industry. For the entire two hours, our three tables  and our virtually cozy couches were filled to capacity. It was only after the end of the event that we had the chance to exchange visits to the tables of our partners from CIFAR and AMII [Alberta Machine Intelligence Institute]. Eirene did not miss the opportunity to catch up with our good friend, Warren Johnston, and hear first-hand the news from AMII’s recent AI Week 2022.

Borealis AI is funded by the Royal Bank of Canada. Somebody wrote this for the homepage (presumably tongue in cheek),

All you can bank on.

The AI4Good Lab can be found here,

The AI4Good Lab is a 7-week program that equips women and people of marginalized genders with the skills to build their own machine learning projects. We emphasize mentorship and curiosity-driven learning to prepare our participants for a career in AI.

The program is designed to open doors for those who have historically been underrepresented in the AI industry. Together, we are building a more inclusive and diverse tech culture in Canada while inspiring the next generation of leaders to use AI as a tool for social good.

The most recent programme ran from May 3 to June 21, 2022, in Montréal, Toronto, and Edmonton.

There are a number of AI for Good initiatives, including this one from the International Telecommunication Union (a United Nations agency).

For the curious, I have a May 10, 2018 post “The Royal Bank of Canada reports ‘Humans wanted’ and some thoughts on the future of work, robots, and artificial intelligence” where I ‘examine’ RBC and its AI initiatives.

Coming soon: Responsible AI at the 35th Canadian Conference on Artificial Intelligence (AI) from 30 May to 3 June, 2022

35 years? How have I not stumbled on this conference before? Anyway, I’m glad to have the news (even if I’m late to the party), from the 35th Canadian Conference on Artificial Intelligence homepage,

The 35th Canadian Conference on Artificial Intelligence will take place virtually in Toronto, Ontario, from 30 May to 3 June, 2022. All presentations and posters will be online, with in-person social events to be scheduled in Toronto for those who are able to attend in-person. Viewing rooms and isolated presentation facilities will be available for all visitors to the University of Toronto during the event.

The event is collocated with the Computer and Robot Vision conferences. These events (AI·CRV 2022) will bring together hundreds of leaders in research, industry, and government, as well as Canada’s most accomplished students. They showcase Canada’s ingenuity, innovation and leadership in intelligent systems and advanced information and communications technology. A single registration lets you attend any session in the two conferences, which are scheduled in parallel tracks.

The conference proceedings are published on PubPub, an open-source, privacy-respecting, and open access online platform. They are submitted to be indexed and abstracted in leading indexing services such as DBLP, ACM, Google Scholar.

You can view last year’s [2021] proceedings here: https://caiac.pubpub.org/ai2021.

The 2021 proceedings appear to be open access.

I can’t tell if ‘Responsible AI’ has been included as a specific topic in previous conferences but 2022 is definitely hosting a couple of sessions based on that theme, from the Responsible AI activities webpage,

Keynote speaker: Julia Stoyanovich

New York University

“Building Data Equity Systems”

Equity as a social concept — treating people differently depending on their endowments and needs to provide equality of outcome rather than equality of treatment — lends a unifying vision for ongoing work to operationalize ethical considerations across technology, law, and society.  In my talk I will present a vision for designing, developing, deploying, and overseeing data-intensive systems that consider equity as an essential objective.  I will discuss ongoing technical work, and will place this work into the broader context of policy, education, and public outreach.

Biography: Julia Stoyanovich is an Institute Associate Professor of Computer Science & Engineering at the Tandon School of Engineering, Associate Professor of Data Science at the Center for Data Science, and Director of the Center for Responsible AI at New York University (NYU).  Her research focuses on responsible data management and analysis: on operationalizing fairness, diversity, transparency, and data protection in all stages of the data science lifecycle.  She established the “Data, Responsibly” consortium and served on the New York City Automated Decision Systems Task Force, by appointment from Mayor de Blasio.  Julia developed and has been teaching courses on Responsible Data Science at NYU, and is a co-creator of an award-winning comic book series on this topic.  In addition to data ethics, Julia works on the management and analysis of preference and voting data, and on querying large evolving graphs. She holds M.S. and Ph.D. degrees in Computer Science from Columbia University, and a B.S. in Computer Science and in Mathematics & Statistics from the University of Massachusetts at Amherst.  She is a recipient of an NSF CAREER award and a Senior Member of the ACM.

Panel on ethical implications of AI

Panelists

Luke Stark, Faculty of Information and Media Studies, Western University

Luke Stark is an Assistant Professor in the Faculty of Information and Media Studies at Western University in London, ON. His work interrogating the historical, social, and ethical impacts of computing and AI technologies has appeared in journals including The Information Society, Social Studies of Science, and New Media & Society, and in popular venues like Slate, The Globe and Mail, and The Boston Globe. Luke was previously a Postdoctoral Researcher in AI ethics at Microsoft Research, and a Postdoctoral Fellow in Sociology at Dartmouth College; he holds a PhD from the Department of Media, Culture, and Communication at New York University, and a BA and MA from the University of Toronto.

Nidhi Hegde, Associate Professor in Computer Science and Amii [Alberta Machine Intelligence Institute] Fellow at the University of Alberta

Nidhi is a Fellow and Canada CIFAR [Canadian Institute for Advanced Research] AI Chair at Amii and an Associate Professor in the Department of Computing Science at the University of Alberta. Before joining UAlberta, she spent many years in industry research labs. Most recently, she was a Research team lead at Borealis AI (a research institute at Royal Bank of Canada), where her team worked on privacy-preserving methods for machine learning models and other applied problems for RBC. Prior to that, she spent many years in research labs in Europe working on a variety of interesting and impactful problems. She was a researcher at Bell Labs, Nokia, in France from January 2015 to March 2018, where she led a new team focussed on Maths and Algorithms for Machine Learning in Networks and Systems, in the Maths and Algorithms group of Bell Labs. She also spent a few years at the Technicolor Paris Research Lab working on social network analysis, smart grids, privacy, and recommendations. Nidhi is an associate editor of the IEEE/ACM Transactions on Networking, and an editor of the Elsevier Performance Evaluation Journal.

Karina Vold, Assistant Professor, Institute for the History and Philosophy of Science and Technology, University of Toronto

Dr. Karina Vold is an Assistant Professor at the Institute for the History and Philosophy of Science and Technology at the University of Toronto. She is also a Faculty Affiliate at the U of T Schwartz Reisman Institute for Technology and Society, a Faculty Associate at the U of T Centre for Ethics, and an Associate Fellow at the University of Cambridge’s Leverhulme Centre for the Future of Intelligence. Vold specialises in Philosophy of Cognitive Science and Philosophy of Artificial Intelligence, and her recent research has focused on human autonomy, cognitive enhancement, extended cognition, and the risks and ethics of AI.

Elissa Strome, Executive Director, Pan-Canadian Artificial Intelligence Strategy at CIFAR

Elissa is Executive Director, Pan-Canadian Artificial Intelligence Strategy at CIFAR, working with research leaders across the country to implement Canada’s national research strategy in AI.  Elissa completed her PhD in Neuroscience from the University of British Columbia in 2006. Following a post-doc at Lund University, in Sweden, she decided to pursue a career in research strategy, policy and leadership. In 2008, she joined the University of Toronto’s Office of the Vice-President, Research and Innovation and was Director of Strategic Initiatives from 2011 to 2015. In that role, she led a small team dedicated to advancing the University’s strategic research priorities, including international institutional research partnerships, the institutional strategy for prestigious national and international research awards, and the establishment of the SOSCIP [Southern Ontario Smart Computing Innovation Platform] research consortium in 2012. From 2015 to 2017, Elissa was Executive Director of SOSCIP, leading the 17-member industry-academic consortium through a major period of growth and expansion, and establishing SOSCIP as Ontario’s leading platform for collaborative research and development in data science and advanced computing.

Tutorial on AI and the Law

Prof. Maura R. Grossman, University of Waterloo, and

Hon. Paul W. Grimm, United States District Court for the District of Maryland

AI applications are becoming more and more ubiquitous in almost every field of endeavor, and the same is true as to the legal industry. This panel, consisting of an experienced lawyer and computer scientist, and a U.S. federal trial court judge, will discuss how AI is currently being used in the legal profession, what adoption has been like since the introduction of AI to law in about 2009, what legal and ethical issues AI applications have raised in the legal system, and how a sitting trial court judge approaches AI evidence, in particular, the determination of whether to admit that AI evidence or not, when they are a non-expert.

How is AI being used in the legal industry today?

What has the legal industry’s reaction been to legal AI applications?

What are some of the biggest legal and ethical issues implicated by legal and other AI applications?

How does a sitting trial court judge evaluate AI evidence when making a determination of whether to admit that AI evidence or not?

What considerations go into the trial judge’s decision?

What happens if the judge is not an expert in AI?  Do they recuse?

You may recognize the name Julia Stoyanovich; she was mentioned here in my March 23, 2022 posting, “The ‘We are AI’ series gives citizens a primer on AI,” about a series of peer-to-peer workshops aimed at introducing the basics of AI to the public. There’s also a comic book series associated with it, and all of the materials are available for free. It’s all there in the posting.

Getting back to the Responsible AI activities webpage, there’s one more activity, and this one seems a little less focused on experts,

Virtual Meet and Greet on Responsible AI across Canada

Given the many activities that are fortunately happening around the responsible and ethical aspects of AI here in Canada, we are organizing an event in conjunction with Canadian AI 2022 this year to become familiar with what everyone is doing and what activities they are engaged in.

It would be wonderful to have a unified community here in Canada around responsible AI so we can support each other and find ways to more effectively collaborate and synergize. We are aiming for a casual, discussion-oriented event rather than talks or formal presentations.

The meet and greet will be hosted by Ebrahim Bagheri, Eleni Stroulia and Graham Taylor. If you are interested in participating, please email Ebrahim Bagheri (bagheri@ryerson.ca).

Thank you to the co-chairs for getting the word out about the Responsible AI topic at the conference,

Responsible AI Co-chairs

Ebrahim Bagheri
Professor
Electrical, Computer, and Biomedical Engineering, Ryerson University
Website

Eleni Stroulia
Professor, Department of Computing Science
Acting Vice Dean, Faculty of Science
Director, AI4Society Signature Area
University of Alberta
Website

The organization that hosts these conferences has an almost palindromic abbreviation: CAIAC, for Canadian Artificial Intelligence Association (CAIA) or Association Intelligence Artificiel Canadien (AIAC). Yes, you do have to read it in both English and French, and the C at one end or the other gets knocked off depending on which language you’re using, which is why it’s only almost a palindrome.

The CAIAC is almost 50 years old (under various previous names) and has its website here.

*April 22, 2022 at 1400 hours PT removed ‘the’ from this section of the headline: “… from 30 May to 3 June, 2022.” and removed period from the end.

A newsletter from the Pan-Canadian AI strategy folks

The AICan (Artificial Intelligence Canada) Bulletin is published by CIFAR (Canadian Institute for Advanced Research) and is the official newsletter for the Pan-Canadian AI Strategy. It is a joint production from CIFAR, Amii (Alberta Machine Intelligence Institute), Mila (Quebec’s artificial intelligence research institute) and the Vector Institute for Artificial Intelligence (Toronto, Ontario).

For anyone curious about the Pan-Canadian Artificial Intelligence Strategy, first announced in the 2017 federal budget, I have a March 31, 2017 post which focuses heavily on the, then new, Vector Institute but it also contains information about the artificial intelligence scene in Canada at the time, which is at least in part still relevant today.

The AICan Bulletin October 2021 issue number 16 (The Energy and Environment Issue) is available for viewing here and includes these articles,

Equity, diversity and inclusion in AI climate change research

The effects of climate change significantly impact our most vulnerable populations. Canada CIFAR AI Chair David Rolnick (Mila) and Tami Vasanthakumaran (Girls Belong Here) share their insights and call to action for the AI research community.

Predicting the perfect storm

Canada CIFAR AI Chair Samira Kahou (Mila) is using AI to detect and predict extreme weather events to aid in disaster management and raise awareness for the climate crisis.

AI in biodiversity is crucial to our survival

Graham Taylor, a Canada CIFAR AI Chair at the Vector Institute, is using machine learning to build an inventory of life on Earth with DNA barcoding.

ISL Adapt uses ML to make water treatment cleaner & greener

Amii, the University of Alberta, and ISL Engineering explore how machine learning can make water treatment more environmentally friendly and cost-effective with the support of Amii Fellows and Canada CIFAR AI Chairs — Adam White, Martha White and Csaba Szepesvári.

This climate does not exist: Picturing impacts of the climate crisis with AI, one address at a time

Immerse yourself into this AI-driven virtual experience based on empathy to visualize the impacts of climate change on places you hold dear with Mila.

The bulletin also features AI stories from Canada and the US, as well as events and job postings.

I found two different pages where you can subscribe. First, there’s this subscription page (which is at the bottom of the October 2021 bulletin), and then there’s this page, which requires more details from you.

I’ve taken a look at the CIFAR website and can’t find any of the previous bulletins on it, which would seem to make subscription the only means of access.

TRIUMF (Canada’s national particle accelerator centre) welcomes Nigel Smith as its new Chief Executive Officer (CEO) on May 17, 2021, and some Hollywood news

I have two bits of news as noted in the headline. There’s news about TRIUMF, located on the University of British Columbia (UBC) endowment lands, and news about Dr. Suzanne Simard (UBC Forestry) and her memoir, Finding the Mother Tree: Discovering the Wisdom of the Forest.

Nigel Smith and TRIUMF (Canada’s national particle accelerator centre)

As soon as I saw his first name, Nigel, I bet myself he’d be from the UK (more about that later in this posting). This is TRIUMF’s third CEO since I started science blogging in May 2008. When I first started it was called TRIUMF (Canada’s National Laboratory for Particle and Nuclear Physics) but these days it’s TRIUMF (Canada’s national particle accelerator centre).

As for the organization’s latest CEO, here’s more from a TRIUMF February 12, 2021 announcement page (the text is identical to TRIUMF’s February 12, 2021 press release),

Dr. Nigel Smith, Executive Director of SNOLAB, has been selected to serve as the next Director of TRIUMF.  

Succeeding Dr. Jonathan Bagger, who departed TRIUMF in January 2021 to become CEO of the American Physical Society, Dr. Smith’s appointment comes as the result of a highly competitive, six-month international search. Dr. Smith will begin his 5-year term as TRIUMF Director on May 17, 2021. 

“I am truly honoured to have been selected as the next Director of TRIUMF”, said Dr. Smith. “I have long been engaged with TRIUMF’s vibrant community and have been really impressed with the excellence of its science, capabilities and people. TRIUMF plays a unique and vital role in Canada’s research ecosystem and I look forward to help continue the legacy of excellence upheld by Dr. Jonathan Bagger and the previous TRIUMF Directors”.  

Describing what interested him in the position, Smith spoke to the breadth and impact of TRIUMF’s diverse science programs, stating “TRIUMF has an amazing portfolio of research covering fundamental and applied science that also delivers tangible societal impact through its range of medical and commercialisation initiatives. I am extremely excited to have the opportunity to lead a laboratory with such a broad and world-leading science program.” 

“Nigel brings all the necessary skills and background to the role of Director,” said Dr. Digvir Jayas, Interim Director of TRIUMF, Chair of the TRIUMF Board of Management, and Vice-President, Research and International at the University of Manitoba. “As Executive Director of SNOLAB, Dr. Smith is both a renowned researcher and experienced laboratory leader who offers a tremendous track record of success spanning the local, national, and international spheres. The Board of Management is thrilled to bring Nigel’s expertise to TRIUMF so he may help guide the laboratory through many of the exciting developments on the horizon.”

Dr. Smith joins TRIUMF at an important period in the laboratory’s history, moving into the second year of our current Five-Year Plan (2020-2025) and preparing to usher in a new era of science and innovation that will include the completion of the Advanced Rare Isotope Laboratory (ARIEL) and the Institute for Advanced Medical Isotopes (IAMI) [not to be confused with Amii {Alberta Machine Intelligence Institute}]. This new infrastructure, alongside TRIUMF’s existing facilities and world-class research programs, will solidify Canada’s position as a global leader in both fundamental and applied research. 

Dr. Smith expressed his optimism for TRIUMF, saying “I am delighted to have this opportunity, and it will be a pleasure to lead the laboratory through this next exciting phase of our growth and evolution.” 

Smith is leaving what is probably one of the more unusual laboratories: at a depth of 2 km, SNOLAB is the deepest, cleanest laboratory in the world. (More information is available either at SNOLAB or its Wikipedia entry.)

Is Smith from the UK? Some clues

I found my subsequent clues on SNOLAB’s ‘bio’ page for Dr. Nigel Smith,

Nigel Smith joined SNOLAB as Director during July 2009. He currently holds a full Professorship at Laurentian University, adjunct Professor status at Queen’s University, and a visiting Professorial chair at Imperial College, London. He received his Bachelor of Science in physics from Leeds University in the U.K. in 1985 and his Ph. D. in astrophysics from Leeds in 1991. He has served as a lecturer at Leeds University, a research associate at Imperial College London, group leader (dark matter) and deputy division head at the STFC Rutherford Appleton Laboratory, before relocating to Canada to oversee the SNOLAB deep underground facility.

The answer would seem to be yes, Nigel James Telfer Smith is originally from the UK.

I don’t know if this is going to be a trend but this is the second ‘Nigel’ to lead TRIUMF. (The Nigels are now tied with the Johns and the Alans. Of course, the letter ‘J’ seems the most popular with four names: John, John, Jack, and Jonathan.) Here’s a list of TRIUMF’s previous CEOs (from the TRIUMF Wikipedia entry),

Since its inception, TRIUMF has had eight directors [now nine] overseeing its operations.

The first Nigel (Lockyer) is described as an American in his Wikipedia entry. He was born in Scotland and raised in Canada. However, he has spent the majority of his adult life in the US, other than the five or six years at TRIUMF. So, the previous Nigel also started life in the UK.

Good luck to the new Nigel.

UBC forestry professor Suzanne Simard’s memoir going to the movies?

Given that Simard’s memoir, Finding the Mother Tree: Discovering the Wisdom of the Forest, was published last week on May 4, 2021, this is very heady news. From a May 12, 2021 article by Cassandra Gill for the Daily Hive (Note: Links have been removed),

Jake Gyllenhaal is bringing the story of a UBC professor to the big screen.

The Oscar nominee’s production company, Nine Stories, is producing a film based on Suzanne Simard’s memoir, Finding the Mother Tree.

Amy Adams is set to play Simard, who is a forest ecology expert renowned for her research on plants and fungi.

Adams is also co-producing the film with Gyllenhaal through her own company, Bond Group Entertainment.

The BC native [Simard] developed an interest in trees and the outdoors through her close relationship with her grandfather, who was a horse logger.

Her 30 year career and early life is documented in the memoir, which was released last week on May 4 [2021]. Simard explores how trees have evolved, have memories, and are the foundation of our planet’s ecosystem — along with her own personal experiences with grief.

The scientists’ [sic] influence has had influence in popular culture, notably in James Cameron’s 2009 film Avatar. The giant willow-like “Tree of Souls” was specifically inspired by Simard’s work.

No mention of a script and no mention of financing, so, it could be a while before we see the movie on Netflix, Apple+, HBO, or maybe a movie house (if they’re open by then).

I think the script may prove to be the more challenging aspect of this project. Here’s the description of Simard’s memoir (from the Finding the Mother Tree webpage on suzannesimard.com),

From the world’s leading forest ecologist who forever changed how people view trees and their connections to one another and to other living things in the forest–a moving, deeply personal journey of discovery.

About the Book

In her first book, Simard brings us into her world, the intimate world of the trees, in which she brilliantly illuminates the fascinating and vital truths – that trees are not simply the source of timber or pulp, but are a complex, interdependent circle of life; that forests are social, cooperative creatures connected through underground networks by which trees communicate their vitality and vulnerabilities with communal lives not that different from our own.

Simard writes – in inspiring, illuminating, and accessible ways – how trees, living side by side for hundreds of years, have evolved, how they perceive one another, learn and adapt their behaviors, recognize neighbors, and remember the past; how they have agency about the future; elicit warnings and mount defenses, compete and cooperate with one another with sophistication, characteristics ascribed to human intelligence, traits that are the essence of civil societies – and at the center of it all, the Mother Trees: the mysterious, powerful forces that connect and sustain the others that surround them.

How does Simard’s process of understanding trees and conceptualizing a ‘mother tree’ get put into a script for a movie that’s not a documentary or an animation?

Movies are moving pictures, yes? How do you introduce movement and action in a script heavily focused on trees, which operate on a timescale vastly different from our own?

It’s an interesting problem and I look forward to seeing how it’s resolved. I wish them good luck.

Council of Canadian Academies and its expert panel for the AI for Science and Engineering project

There seems to be an explosion (metaphorically and only by Canadian standards) of interest in public perceptions/engagement/awareness of artificial intelligence (see my March 29, 2021 posting “Canada launches its AI dialogues”; those dialogues run until April 30, 2021. There’s also this April 6, 2021 posting “UNESCO’s Call for Proposals to highlight blind spots in AI Development open ’til May 2, 2021,” which was launched in cooperation with Mila-Québec Artificial Intelligence Institute).

Now there’s this: in a March 31, 2020 Council of Canadian Academies (CCA) news release, four new projects were announced. (Admittedly, these are not ‘public engagement’ exercises as such, but the reports are publicly available and used by policymakers.) These are the two projects of most interest to me,

Public Safety in the Digital Age

Information and communications technologies have profoundly changed almost every aspect of life and business in the last two decades. While the digital revolution has brought about many positive changes, it has also created opportunities for criminal organizations and malicious actors to target individuals, businesses, and systems.

This assessment will examine promising practices that could help to address threats to public safety related to the use of digital technologies while respecting human rights and privacy.

Sponsor: Public Safety Canada

AI for Science and Engineering

The use of artificial intelligence (AI) and machine learning in science and engineering has the potential to radically transform the nature of scientific inquiry and discovery and produce a wide range of social and economic benefits for Canadians. But, the adoption of these technologies also presents a number of potential challenges and risks.

This assessment will examine the legal/regulatory, ethical, policy and social challenges related to the use of AI technologies in scientific research and discovery.

Sponsor: National Research Council Canada [NRC] (co-sponsors: CIFAR [Canadian Institute for Advanced Research], CIHR [Canadian Institutes of Health Research], NSERC [Natural Sciences and Engineering Research Council], and SSHRC [Social Sciences and Humanities Research Council])

For today’s posting the focus will be on the AI project, specifically, the April 19, 2021 CCA news release announcing the project’s expert panel,

The Council of Canadian Academies (CCA) has formed an Expert Panel to examine a broad range of factors related to the use of artificial intelligence (AI) technologies in scientific research and discovery in Canada. Teresa Scassa, SJD, Canada Research Chair in Information Law and Policy at the University of Ottawa, will serve as Chair of the Panel.  

“AI and machine learning may drastically change the fields of science and engineering by accelerating research and discovery,” said Dr. Scassa. “But these technologies also present challenges and risks. A better understanding of the implications of the use of AI in scientific research will help to inform decision-making in this area and I look forward to undertaking this assessment with my colleagues.”

As Chair, Dr. Scassa will lead a multidisciplinary group with extensive expertise in law, policy, ethics, philosophy, sociology, and AI technology. The Panel will answer the following question:

What are the legal/regulatory, ethical, policy and social challenges associated with deploying AI technologies to enable scientific/engineering research design and discovery in Canada?

“We’re delighted that Dr. Scassa, with her extensive experience in AI, the law and data governance, has taken on the role of Chair,” said Eric M. Meslin, PhD, FRSC, FCAHS, President and CEO of the CCA. “I anticipate the work of this outstanding panel will inform policy decisions about the development, regulation and adoption of AI technologies in scientific research, to the benefit of Canada.”

The CCA was asked by the National Research Council of Canada (NRC), along with co-sponsors CIFAR, CIHR, NSERC, and SSHRC, to address the question. More information can be found here.

The Expert Panel on AI for Science and Engineering:

Teresa Scassa (Chair), SJD, Canada Research Chair in Information Law and Policy, University of Ottawa, Faculty of Law (Ottawa, ON)

Julien Billot, CEO, Scale AI (Montreal, QC)

Wendy Hui Kyong Chun, Canada 150 Research Chair in New Media and Professor of Communication, Simon Fraser University (Burnaby, BC)

Marc Antoine Dilhac, Professor (Philosophy), University of Montreal; Director of Ethics and Politics, Centre for Ethics (Montréal, QC)

B. Courtney Doagoo, AI and Society Fellow, Centre for Law, Technology and Society, University of Ottawa; Senior Manager, Risk Consulting Practice, KPMG Canada (Ottawa, ON)

Abhishek Gupta, Founder and Principal Researcher, Montreal AI Ethics Institute (Montréal, QC)

Richard Isnor, Associate Vice President, Research and Graduate Studies, St. Francis Xavier University (Antigonish, NS)

Ross D. King, Professor, Chalmers University of Technology (Göteborg, Sweden)

Sabina Leonelli, Professor of Philosophy and History of Science, University of Exeter (Exeter, United Kingdom)

Raymond J. Spiteri, Professor, Department of Computer Science, University of Saskatchewan (Saskatoon, SK)

Who is the expert panel?

Putting together a Canadian panel is an interesting problem, especially when you’re trying to find people with the right expertise who can also represent various viewpoints, both professionally and regionally. Then there are gender, racial, linguistic, urban/rural, and ethnic considerations.

Statistics

Eight of the panelists could be said to be representing various regions of Canada. Five of those eight panelists are based in central Canada, specifically, Ontario (Ottawa) or Québec (Montréal). The sixth panelist is based in Atlantic Canada (Nova Scotia), the seventh panelist is based in the Prairies (Saskatchewan), and the eighth panelist is based in western Canada (Vancouver, British Columbia).

The two panelists bringing an international perspective to this project are both based in Europe, specifically, Sweden and the UK.

(sigh) It would be good to have representation from another part of the world. Asia springs to mind as researchers in that region are very advanced in their AI research and applications meaning that their experts and ethicists are likely to have valuable insights.

Four of the ten panelists are women, which is closer to equal representation than some of the other CCA panels I’ve looked at.

As for Indigenous and BIPOC representation, unless one or more of the panelists chooses to self-identify in that fashion, I cannot make any comments. It should be noted that more than one expert panelist focuses on social justice and/or bias in algorithms.

Network of relationships

As you can see, the CCA descriptions for the individual members of the expert panel are a little brief. So, I did a little digging, and in my searches I noticed what seems to be a pattern of relationships among some of these experts. In particular, take note of the Canadian Institute for Advanced Research (CIFAR) and the AI Advisory Council of the Government of Canada.

Individual panelists

Teresa Scassa (Ontario), whose SJD designation signifies a research doctorate in law, chairs this panel. Offhand, of the 10 or so panels I’ve reviewed, I can recall only one or two others being chaired by women. In addition to her profile page at the University of Ottawa, she hosts her own blog featuring posts such as “How Might Bill C-11 Affect the Outcome of a Clearview AI-type Complaint?” She writes clearly (I didn’t see any jargon) for an audience that is somewhat informed on the topic.

Along with Dilhac, Teresa Scassa is a member of the AI Advisory Council of the Government of Canada. More about that group when you read Dilhac’s description.

Julien Billot (Québec) has provided a profile on LinkedIn and you can augment your view of M. Billot with this profile from the CreativeDestructionLab (CDL),

Mr. Billot is a member of the faculty at HEC Montréal [graduate business school of the Université de Montréal] as an adjunct professor of management and the lead for the CreativeDestructionLab (CDL) and NextAi program in Montreal.

Julien Billot has been President and Chief Executive Officer of Yellow Pages Group Corporation (Y.TO) in Montreal, Quebec. Previously, he was Executive Vice President, Head of Media and Member of the Executive Committee of Solocal Group (formerly PagesJaunes Groupe), the publicly traded and incumbent local search business in France. Earlier experience includes serving as CEO of the digital and new business group of Lagardère Active, a multimedia branch of Lagardère Group and 13 years in senior management positions at France Telecom, notably as Chief Marketing Officer for Orange, the company’s mobile subsidiary.

Mr. Billot is a graduate of École Polytechnique (Paris) and from Telecom Paris Tech. He holds a postgraduate diploma (DEA) in Industrial Economics from the University of Paris-Dauphine.

Wendy Hui Kyong Chun (British Columbia) has a profile on the Simon Fraser University (SFU) website, which provided one of the more interesting (to me personally) biographies,

Wendy Hui Kyong Chun is the Canada 150 Research Chair in New Media at Simon Fraser University, and leads the Digital Democracies Institute which was launched in 2019. The Institute aims to integrate research in the humanities and data sciences to address questions of equality and social justice in order to combat the proliferation of online “echo chambers,” abusive language, discriminatory algorithms and mis/disinformation by fostering critical and creative user practices and alternative paradigms for connection. It has four distinct research streams all led by Dr. Chun: Beyond Verification which looks at authenticity and the spread of disinformation; From Hate to Agonism, focusing on fostering democratic exchange online; Desegregating Network Neighbourhoods, combatting homophily across platforms; and Discriminating Data: Neighbourhoods, Individuals and Proxies, investigating the centrality of race, gender, class and sexuality [emphasis mine] to big data and network analytics.

I’m glad to see someone who has focused on ” … the centrality of race, gender, class and sexuality to big data and network analytics.” Even more interesting to me was this from her CV (curriculum vitae),

Professor, Department of Modern Culture and Media, Brown University, July 2010-June 2018

•Affiliated Faculty, Multimedia & Electronic Music Experiments (MEME), Department of Music, 2017.

•Affiliated Faculty, History of Art and Architecture, March 2012-

•Graduate Field Faculty, Theatre Arts and Performance Studies, Sept 2008-

….

[all emphases mine]

And these are some of her credentials,

Ph.D., English, Princeton University, 1999.
•Certificate, School of Criticism and Theory, Dartmouth College, Summer 1995.

M.A., English, Princeton University, 1994.

B.A.Sc., Systems Design Engineering and English, University of Waterloo, Canada, 1992.
•First class honours and a Senate Commendation for Excellence for being the first student to graduate from the School of Engineering with a double major

It’s about time the CCA started integrating some kind of arts perspective into its projects. (Although I can’t help wondering if this was by accident rather than by design.)

Marc Antoine Dilhac is an associate professor at l’Université de Montréal; like Billot, he graduated from a French university, in his case, the Sorbonne. Here’s more from Dilhac’s profile on the Mila website,

Marc-Antoine Dilhac (Ph.D., Paris 1 Panthéon-Sorbonne) is a professor of ethics and political philosophy at the Université de Montréal and an associate member of Mila – Quebec Artificial Intelligence Institute. He currently holds a CIFAR [Canadian Institute for Advanced Research] Chair in AI ethics (2019-2024), and was previously Canada Research Chair in Public Ethics and Political Theory 2014-2019. He specialized in theories of democracy and social justice, as well as in questions of applied ethics. He published two books on the politics of toleration and inclusion (2013, 2014). His current research focuses on the ethical and social impacts of AI and issues of governance and institutional design, with a particular emphasis on how new technologies are changing public relations and political structures.

In 2017, he instigated the project of the Montreal Declaration for a Responsible Development of AI and chaired its scientific committee. In 2020, as director of Algora Lab, he led an international deliberation process as part of UNESCO’s consultation on its recommendation on the ethics of AI.

In 2019, he founded Algora Lab, an interdisciplinary laboratory advancing research on the ethics of AI and developing a deliberative approach to the governance of AI and digital technologies. He is co-director of Deliberation at the Observatory on the social impacts of AI and digital technologies (OBVIA), and contributes to the OECD Policy Observatory (OECD.AI) as a member of its expert network ONE.AI.

He sits on the AI Advisory Council of the Government of Canada and co-chairs its Working Group on Public Awareness.

Formerly known simply as Mila, the Mila – Quebec Artificial Intelligence Institute is a beneficiary of the Pan-Canadian Artificial Intelligence Strategy, established in the 2017 Canadian federal budget. The strategy named CIFAR as the hub, distributing funds for artificial intelligence research to (mainly) three agencies: Mila in Montréal, the Vector Institute in Toronto, and the Alberta Machine Intelligence Institute (Amii) in Edmonton.

Consequently, Dilhac’s involvement with CIFAR is not unexpected but when added to his presence on the AI Advisory Council of the Government of Canada and his role as co-chair of its Working Group on Public Awareness, one of the co-sponsors for this future CCA report, you get a sense of just how small the Canadian AI ethics and public awareness community is.

Add in CIFAR’s Open Dialogue: AI in Canada series (ongoing until April 30, 2021) which is being held in partnership with the AI Advisory Council of the Government of Canada (see my March 29, 2021 posting for more details about the dialogues) amongst other familiar parties and you see a web of relations so tightly interwoven that if you could produce masks from it you’d have superior COVID-19 protection to N95 masks.

These kinds of connections are understandable and I have more to say about them in my final comments.

B. Courtney Doagoo has a profile page at the University of Ottawa, which fills in a few information gaps,

As a Fellow, Dr. Doagoo develops her research on the social, economic and cultural implications of AI with a particular focus on the role of laws, norms and policies [emphasis mine]. She also notably advises Dr. Florian Martin-Bariteau, CLTS Director, in the development of a new research initiative on those topical issues, and Dr. Jason Millar in the development of the Canadian Robotics and Artificial Intelligence Ethical Design Lab (CRAiEDL).

Dr. Doagoo completed her Ph.D. in Law at the University of Ottawa in 2017. In her interdisciplinary research, she used empirical methods to learn about and describe the use of intellectual property law and norms in creative communities. Following her doctoral research, she joined the World Intellectual Property Organization’s Coordination Office in New York as a legal intern and contributed to developing the joint initiative on gender and innovation in collaboration with UNESCO and UN Women. She later joined the International Law Research Program at the Centre for International Governance Innovation as a Post-Doctoral Fellow, where she conducted research in technology and law focusing on intellectual property law, artificial intelligence and data governance.

Dr. Doagoo completed her LL.L. at the University of Ottawa, and LL.M. in Intellectual Property Law at the Benjamin N. Cardozo School of Law [a law school at Yeshiva University in New York City].  In between her academic pursuits, Dr. Doagoo has been involved with different technology start-ups, including the one she is currently leading aimed at facilitating access to legal services. She’s also an avid lover of the arts and designed a course on Arts and Cultural Heritage Law taught during her doctoral studies at the University of Ottawa, Faculty of Law.

It’s probably because I don’t know enough, but this “the role of laws, norms and policies” seems bland to the point of meaninglessness. The rest is more informative and brings things back to the arts, echoing Wendy Hui Kyong Chun at SFU.

Doagoo’s LinkedIn profile offers an unexpected link to this expert panel’s chairperson, Teresa Scassa (in addition to both being lawyers whose specialties are in related fields and on faculty or fellow at the University of Ottawa),

Soft-funded Research Bursary

Dr. Teresa Scassa

2014

I’m not suggesting any conspiracies; it’s simply that this is a very small community with much of it located in central and eastern Canada and possible links into the US. For example, Wendy Hui Kyong Chun, prior to her SFU appointment in December 2018, worked and studied in the eastern US for over 25 years after starting her academic career at the University of Waterloo (Ontario).

Abhishek Gupta provided me with a challenging search. His LinkedIn profile yielded some details (I’m not convinced the man sleeps). Note: I have made some formatting changes and removed the location ‘Montréal area’ from some descriptions.

Experience

Software Engineer II – Machine Learning
Microsoft

Jul 2018 – Present – 2 years 10 months

Machine Learning – Commercial Software Engineering team

Serves on the CSE Responsible AI Board

Founder and Principal Researcher
Montreal AI Ethics Institute

May 2018 – Present – 3 years

Institute creating tangible and practical research in the ethical, safe and inclusive development of AI. For more information, please visit https://montrealethics.ai

Visiting AI Ethics Researcher, Future of Work, International Visitor Leadership Program
U.S. Department of State

Aug 2019 – Present – 1 year 9 months

Selected to represent Canada on the future of work

Responsible AI Lead, Data Advisory Council
Northwest Commission on Colleges and Universities

Jun 2020 – Present – 11 months

Faculty Associate, Frankfurt Big Data Lab
Goethe University

Mar 2020 – Present – 1 year 2 months

Advisor for the Z-inspection project

Associate Member
LF AI Foundation

May 2020 – Present – 1 year

Author
MIT Technology Review

Sep 2020 – Present – 8 months

Founding Editorial Board Member, AI and Ethics Journal
Springer Nature

Jul 2020 – Present – 10 months

Education

McGill University, Bachelor of Science (BS), Computer Science

2012 – 2015

Exhausting, eh? He also has an eponymous website, and the Montreal AI Ethics Institute can be found here, where Gupta and his colleagues are “Democratizing AI ethics literacy.” My hat’s off to Gupta; getting on an expert panel for the CCA is quite an achievement for someone without the usual academic and/or industry trappings.

Richard Isnor, based in Nova Scotia and associate vice president of research & graduate studies at St. Francis Xavier University (StFX), seems to have some connection to northern Canada (see the reference to Nunavut Research Institute below); he’s certainly well connected to various federal government agencies according to his profile page,

Prior to joining StFX, he was Manager of the Atlantic Regional Office for the Natural Sciences and Engineering Research Council of Canada (NSERC), based in Moncton, NB.  Previously, he was Director of Innovation Policy and Science at the International Development Research Centre in Ottawa and also worked for three years with the National Research Council of Canada [NRC] managing Biotechnology Research Initiatives and the NRC Genomics and Health Initiative.

Richard holds a D. Phil. in Science and Technology Policy Studies from the University of Sussex, UK; a Master’s in Environmental Studies from Dalhousie University [Nova Scotia]; and a B. Sc. (Hons) in Biochemistry from Mount Allison University [New Brunswick].  His primary interest is in science policy and the public administration of research; he has worked in science and technology policy or research administrative positions for Environment Canada, Natural Resources Canada, the Privy Council Office, as well as the Nunavut Research Institute. [emphasis mine]

I don’t know what Dr. Isnor’s work is like but I’m hopeful he (along with Spiteri) will be able to provide a less ‘big city’ perspective to the proceedings.

(For those unfamiliar with Canadian cities: Montreal [three expert panelists] is the second largest city in the country; Ottawa [two expert panelists], as the capital, has an outsize view of itself; and Vancouver [one expert panelist] is the third or fourth largest city in the country, for a total of six big city representatives out of eight Canadian expert panelists.)

Ross D. King, professor of machine intelligence at Sweden’s Chalmers University of Technology, might be best known for Adam, also known as, Robot Scientist. Here’s more about King, from his Wikipedia entry (Note: Links have been removed),

King completed a Bachelor of Science degree in Microbiology at the University of Aberdeen in 1983 and went on to study for a Master of Science degree in Computer Science at the University of Newcastle in 1985. Following this, he completed a PhD at The Turing Institute [emphasis mine] at the University of Strathclyde in 1989[3] for work on developing machine learning methods for protein structure prediction.[7]

King’s research interests are in the automation of science, drug design, AI, machine learning and synthetic biology.[8][9] He is probably best known for the Robot Scientist[4][10][11][12][13][14][15][16][17] project which has created a robot that can:

hypothesize to explain observations

devise experiments to test these hypotheses

physically run the experiments using laboratory robotics

interpret the results from the experiments

repeat the cycle as required

The Robot Scientist Wikipedia entry has this to add,

… a laboratory robot created and developed by a group of scientists including Ross King, Kenneth Whelan, Ffion Jones, Philip Reiser, Christopher Bryant, Stephen Muggleton, Douglas Kell and Steve Oliver.[2][6][7][8][9][10]

… Adam became the first machine in history to have discovered new scientific knowledge independently of its human creators.[5][17][18]

Sabina Leonelli, professor of philosophy and history of science at the University of Exeter, is the only person for whom I found a Twitter feed (@SabinaLeonelli). Here’s a bit more from her Wikipedia entry (Note: Links have been removed),

Originally from Italy, Leonelli moved to the UK for a BSc degree in History, Philosophy and Social Studies of Science at University College London and a MSc degree in History and Philosophy of Science at the London School of Economics. Her doctoral research was carried out in the Netherlands at the Vrije Universiteit Amsterdam with Henk W. de Regt and Hans Radder. Before joining the Exeter faculty, she was a research officer under Mary S. Morgan at the Department of Economic History of the London School of Economics.

Leonelli is the Co-Director of the Exeter Centre for the Study of the Life Sciences (Egenis)[3] and a Turing Fellow at the Alan Turing Institute [emphases mine] in London.[4] She is also Editor-in-Chief of the international journal History and Philosophy of the Life Sciences[5] and Associate Editor for the Harvard Data Science Review.[6] She serves as External Faculty for the Konrad Lorenz Institute for Evolution and Cognition Research.[7]

Notice that Ross King and Sabina Leonelli both have links to The Alan Turing Institute (“We believe data science and artificial intelligence will change the world”), although the institute’s link to the University of Strathclyde (Scotland) where King studied seems a bit tenuous.

Do check out Leonelli’s profile at the University of Exeter as it’s comprehensive.

Raymond J. Spiteri, professor and director of the Centre for High Performance Computing, Department of Computer Science at the University of Saskatchewan, has a profile page at the university the likes of which I haven’t seen in several years, perhaps due to its 2013 origins. His other university profile page can best be described as minimalist.

His Canadian Applied and Industrial Mathematics Society (CAIMS) biography page could be described as less charming (to me) than the 2013 profile but it is easier to read,

Raymond Spiteri is a Professor in the Department of Computer Science at the University of Saskatchewan. He performed his graduate work as a member of the Institute for Applied Mathematics at the University of British Columbia. He was a post-doctoral fellow at McGill University and held faculty positions at Acadia University and Dalhousie University before joining USask in 2004. He serves on the Executive Committee of the WestGrid High-Performance Computing Consortium with Compute/Calcul Canada. He was a MITACS Project Leader from 2004-2012 and served in the role of Mitacs Regional Scientific Director for the Prairie Provinces between 2008 and 2011.

Spiteri’s areas of research are numerical analysis, scientific computing, and high-performance computing. His area of specialization is the analysis and implementation of efficient time-stepping methods for differential equations. He actively collaborates with scientists, engineers, and medical experts of all flavours. He also has a long record of industry collaboration with companies such as IBM and Boeing.

Spiteri has been lifetime member of CAIMS/SCMAI since 2000. He helped co-organize the 2004 Annual Meeting at Dalhousie and served on the Cecil Graham Doctoral Dissertation Award Committee from 2005 to 2009, acting as chair from 2007. He has been an active participant in CAIMS, serving several times on the Scientific Committee for the Annual Meeting, as well as frequently attending and organizing mini-symposia. Spiteri believes it is important for applied mathematics to play a major role in the efforts to meet Canada’s most pressing societal challenges, including the sustainability of our healthcare system, our natural resources, and the environment.

A last look at Spiteri’s 2013 profile gave me this (Note: Links have been removed),

Another biographical note: I obtained my B.Sc. degree in Applied Mathematics from the University of Western Ontario [also known as, Western University] in 1990. My advisor was Dr. M.A.H. (Paddy) Nerenberg, after whom the Nerenberg Lecture Series is named. Here is an excerpt from the description, put there in his honour, as a model for the rest of us:

The Nerenberg Lecture Series is first and foremost about people and ideas. Knowledge is the true treasure of humanity, accrued and passed down through the generations. Some of it, particularly science and its language, mathematics, is closed in practice to many because of technical barriers that can only be overcome at a high price. These technical barriers form part of the remarkable fractures that have formed in our legacy of knowledge. We are so used to those fractures that they have become almost invisible to us, but they are a source of profound confusion about what is known.

The Nerenberg Lecture is named after the late Morton (Paddy) Nerenberg, a much-loved professor and researcher born on 17 March– hence his nickname. He was a Professor at Western for more than a quarter century, and a founding member of the Department of Applied Mathematics there. A successful researcher and accomplished teacher, he believed in the unity of knowledge, that scientific and mathematical ideas belong to everyone, and that they are of human importance. He regretted that they had become inaccessible to so many, and anticipated serious consequences from it. [emphases mine] The series honors his appreciation for the democracy of ideas. He died in 1993 at the age of 57.

So, we have the expert panel.

Thoughts about the panel and the report

As I’ve noted previously here and elsewhere, assembling any panel, whether for a single event or for a longer-term project such as producing a report, is no easy task. Looking at the panel, there’s some arts representation, smaller urban centres are also represented, and some of the members have experience in more than one region of Canada. I was also much encouraged by Spiteri’s acknowledgement of his advisor Morton (Paddy) Nerenberg’s passionate commitment to the idea that “scientific and mathematical ideas belong to everyone.”

Kudos to the Council of Canadian Academies (CCA) organizers.

That said, this looks like an exceptionally Eurocentric panel. Unusually, there’s no representation from the US unless you count Chun who has spent the majority of her career in the US with only a little over two years at Simon Fraser University on Canada’s West Coast.

There’s a weakness to a strategy (none of the ten or so CCA reports I’ve reviewed here deviates from this pattern) that seems to favour international participants from Europe and/or the US (also, sometimes, Australia/New Zealand). This leaves out giant chunks of the international community and brings us dangerously close to an echo chamber.

The same problem exists regionally and with various Canadian communities, which are acknowledged more in spirit than in actuality, e.g., the North, rural, indigenous, arts, etc.

Getting back to the ‘big city’ emphasis noted earlier: with two people from Ottawa and three from Montreal, half of the expert panel lives within a two-hour train ride of each other. (For those who don’t know, that’s close by Canadian standards. For comparison, a train ride from Vancouver to Seattle [US] is about four hours, a short trip when compared to a 24-hour train trip to the closest large Canadian cities.)

I appreciate that it’s not a simple problem, but my concern is that it’s never acknowledged by the CCA. Perhaps they could include a section in the report acknowledging the issues and how the expert panel attempted to address them, in other words, transparency. Coincidentally, transparency and the related issue of trust have both been identified as big issues with artificial intelligence.

As for solutions: these reports get sent to external reviewers and, before the report is written, outside experts are sometimes brought in as the panel readies itself. Those would be two opportunities afforded by their current processes.

Anyway, good luck with the report and I look forward to seeing it.

Summer (2019) Institute on AI (artificial intelligence) Societal Impacts, Governance, and Ethics in Alberta, Canada

The deadline for applications is April 7, 2019. As for whether or not you might like to attend, here’s more from a joint March 11, 2019 Alberta Machine Intelligence Institute (Amii)/Canadian Institute for Advanced Research (CIFAR)/University of California at Los Angeles (UCLA) Law School news release (also on globalnewswire.com),

What will Artificial Intelligence (AI) mean for society? That’s the question scholars from a variety of disciplines will explore during the inaugural Summer Institute on AI Societal Impacts, Governance, and Ethics. Summer Institute, co-hosted by the Alberta Machine Intelligence Institute (Amii) and CIFAR, with support from UCLA School of Law, takes place July 22-24, 2019 in Edmonton, Canada.

“Recent advances in AI have brought a surge of attention to the field – both excitement and concern,” says co-organizer and UCLA professor, Edward Parson. “From algorithmic bias to autonomous vehicles, personal privacy to automation replacing jobs. Summer Institute will bring together exceptional people to talk about how humanity can receive the benefits and not get the worst harms from these rapid changes.”

Summer Institute brings together experts, grad students and researchers from multiple backgrounds to explore the societal, governmental, and ethical implications of AI. A combination of lectures, panels, and participatory problem-solving, this comprehensive interdisciplinary event aims to build understanding and action around these high-stakes issues.

“Machine intelligence is opening transformative opportunities across the world,” says John Shillington, CEO of Amii, “and Amii is excited to bring together our own world-leading researchers with experts from areas such as law, philosophy and ethics for this important discussion. Interdisciplinary perspectives will be essential to the ongoing development of machine intelligence and for ensuring these opportunities have the broadest reach possible.”

Over the three-day program, 30 graduate-level students and early-career researchers will engage with leading experts and researchers including event co-organizers: Western University’s Daniel Lizotte, Amii’s Alona Fyshe and UCLA’s Edward Parson. Participants will also have a chance to shape the curriculum throughout this uniquely interactive event.

Summer Institute takes place prior to Deep Learning and Reinforcement Learning Summer School, and includes a combined event on July 24th [2019] for both Summer Institute and Summer School participants.

Visit dlrlsummerschool.ca/the-summer-institute to apply; applications close April 7, 2019.

View our Summer Institute Biographies & Boilerplates for more information on confirmed faculty members and co-hosting organizations. Follow the conversation through social media channels using the hashtag #SI2019.

Media Contact: Spencer Murray, Director of Communications & Public Relations, Amii
t: 587.415.6100 | c: 780.991.7136 | e: spencer.murray@amii.ca

There’s a bit more information on The Summer Institute on AI and Society webpage (on the Deep Learning and Reinforcement Learning Summer School 2019 website) such as this more complete list of speakers,

Confirmed speakers at Summer Institute include:

Alona Fyshe, University of Alberta/Amii (SI co-organizer)
Edward Parson, UCLA (SI co-organizer)
Daniel Lizotte, Western University (SI co-organizer)
Geoffrey Rockwell, University of Alberta
Graham Taylor, University of Guelph/Vector Institute
Rob Lempert, Rand Corporation
Gary Marchant, Arizona State University
Richard Re, UCLA
Evan Selinger, Rochester Institute of Technology
Elana Zeide, UCLA

Two questions: why are all the summer school faculty either Canada- or US-based? And what about South American, Asian, Middle Eastern, etc. thinkers?

One last thought, I wonder if this ‘AI & ethics summer institute’ has anything to do with the Pan-Canadian Artificial Intelligence Strategy, which CIFAR administers and where both the University of Alberta and Vector Institute are members.