Tag Archives: Mila (Quebec's artificial intelligence research institute)

Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27)

Bill C-27 (Digital Charter Implementation Act, 2022) is what I believe is called an omnibus bill as it includes three different pieces of proposed legislation (the Consumer Privacy Protection Act [CPPA], the Artificial Intelligence and Data Act [AIDA], and the Personal Information and Data Protection Tribunal Act [PIDPTA]). You can read the Innovation, Science and Economic Development (ISED) Canada summary here or a detailed series of descriptions of the act here on the ISED’s Canada’s Digital Charter webpage.

Months after the first reading in June 2022, Bill C-27 was mentioned here in a September 15, 2022 posting about a Canadian Science Policy Centre (CSPC) event featuring a panel discussion about the proposed legislation, artificial intelligence in particular. I dug down and found commentaries and additional information about the proposed bill with special attention to AIDA.

It seems discussion has been reactivated since the second reading was completed on April 24, 2023 and the bill was referred to committee for further discussion. (A report and third reading are still to be had in the House of Commons and then, there are three readings in the Senate before this legislation can be passed.)

Christian Paas-Lang has written an April 24, 2023 article for CBC (Canadian Broadcasting Corporation) news online that highlights concerns centred on AI from three cross-party Members of Parliament (MPs),

Once the domain of a relatively select group of tech workers, academics and science fiction enthusiasts, the debate over the future of artificial intelligence has been thrust into the mainstream. And a group of cross-party MPs say Canada isn’t yet ready to take on the challenge.

The popularization of AI as a subject of concern has been accelerated by the introduction of ChatGPT, an AI chatbot produced by OpenAI that is capable of generating a broad array of text, code and other content. ChatGPT relies on content published on the internet as well as training from its users to improve its responses.

ChatGPT has prompted such a fervour, said Katrina Ingram, founder of the group Ethically Aligned AI, because of its novelty and effectiveness. 

“I would argue that we’ve had AI enabled infrastructure or technologies around for quite a while now, but we haven’t really necessarily been confronted with them, you know, face to face,” she told CBC Radio’s The House [radio segment embedded in article] in an interview that aired Saturday [April 22, 2023].

Ingram said the technology has prompted a series of concerns: about the livelihoods of professionals like artists and writers, about privacy, data collection and surveillance and about whether chatbots like ChatGPT can be used as tools for disinformation.

With the popularization of AI as an issue has come a similar increase in concern about regulation, and Ingram says governments must act now.

“We are contending with these technologies right now. So it’s really imperative that governments are able to pick up the pace,” she told host Catherine Cullen.

That sentiment — the need for speed — is one shared by three MPs from across party lines who are watching the development of the AI issue. Conservative MP Michelle Rempel Garner, NDP MP Brian Masse and Nathaniel Erskine-Smith of the Liberals also joined The House for an interview that aired Saturday.

“This is huge. This is the new oil,” said Masse, the NDP’s industry critic, referring to how oil had fundamentally shifted economic and geopolitical relationships, leading to a great deal of good but also disasters — and AI could do the same.

Issues of both speed and substance

The three MPs are closely watching Bill C-27, a piece of legislation currently being debated in the House of Commons that includes Canada’s first federal regulations on AI.

But each MP expressed concern that the bill may not be ready in time and changes would be needed [emphasis mine].

“This legislation was tabled in June of last year [2022], six months before ChatGPT was released and it’s like it’s obsolete. It’s like putting in place a framework to regulate scribes four months after the printing press came out,” Rempel Garner said. She added that it was wrongheaded to move the discussion of AI away from Parliament and segment it off to a regulatory body.

Am I the only person who sees a problem with the “bill may not be ready in time and changes would be needed?” I don’t understand the rush (or how these people get elected). The point of a bill is to examine the ideas and make changes to it before it becomes legislation. Given how fluid the situation appears to be, a strong argument can be made for the current process, which is three readings in the House of Commons, along with a committee report, and three readings in the Senate before a bill, if successful, is passed into legislation.

Of course, the fluidity of the situation could also be an argument for starting over as Michael Geist’s (Canada Research Chair in Internet and E-Commerce Law at the University of Ottawa and member of the Centre for Law, Technology and Society) April 19, 2023 post on his eponymous blog suggests, Note: Links have been removed,

As anyone who has tried ChatGPT will know, at the bottom of each response is an option to ask the AI system to “regenerate response”. Despite increasing pressure on the government to move ahead with Bill C-27’s Artificial Intelligence and Data Act (AIDA), the right response would be to hit the regenerate button and start over. AIDA may be well-meaning and the issue of AI regulation critically important, but the bill is limited in principles and severely lacking in detail, leaving virtually all of the heavy lifting to a regulation-making process that will take years to unfold. While no one should doubt the importance of AI regulation, Canadians deserve better than virtue signalling on the issue with a bill that never received a full public consultation.

What prompts this post is a public letter based out of MILA that calls on the government to urgently move ahead with the bill signed by some of Canada’s leading AI experts. The letter states: …

When the signatories to the letter suggest that there is prospect of moving AIDA forward before the summer, it feels like a ChatGPT error. There are a maximum of 43 days left on the House of Commons calendar until the summer. In all likelihood, it will be less than that. Bill C-27 is really three bills in one: major privacy reform, the creation of a new privacy tribunal, and AI regulation. I’ve watched the progress of enough bills to know that this just isn’t enough time to conduct extensive hearings on the bill, conduct a full clause-by-clause review, debate and vote in the House, and then conduct another review in the Senate. At best, Bill C-27 could make some headway at committee, but getting it passed with a proper review is unrealistic.

Moreover, I am deeply concerned about a Parliamentary process that could lump together these three bills in an expedited process. …

For anyone unfamiliar with MILA, it is also known as Quebec’s Artificial Intelligence Institute. (They seem to have replaced institute with ecosystem since the last time I checked.) You can see the document and list of signatories here.

Geist has a number of posts and podcasts focused on the bill and the easiest way to find them is to use the search term ‘Bill C-27’.

Maggie Arai at the University of Toronto’s Schwartz Reisman Institute for Technology and Society provides a brief overview titled, Five things to know about Bill C-27, in her April 18, 2022 commentary,

On June 16, 2022, the Canadian federal government introduced Bill C-27, the Digital Charter Implementation Act 2022, in the House of Commons. Bill C-27 is not entirely new, following in the footsteps of Bill C-11 (the Digital Charter Implementation Act 2020). Bill C-11 failed to pass, dying on the Order Paper when the Governor General dissolved Parliament to hold the 2021 federal election. While some aspects of C-27 will likely be familiar to those who followed the progress of Bill C-11, there are several key differences.

After noting the differences, Arai had this to say, from her April 18, 2022 commentary,

The tabling of Bill C-27 represents an exciting step forward for Canada as it attempts to forge a path towards regulating AI that will promote innovation of this advanced technology, while simultaneously offering consumers assurance and protection from the unique risks this new technology poses. This second attempt towards the CPPA and PIDPTA is similarly positive, and addresses the need for updated and increased consumer protection, privacy, and data legislation.

However, as the saying goes, the devil is in the details. As we have outlined, several aspects of how Bill C-27 will be implemented are yet to be defined, and how the legislation will interact with existing social, economic, and legal dynamics also remains to be seen.

There are also sections of C-27 that could be improved, including areas where policymakers could benefit from the insights of researchers with domain expertise in areas such as data privacy, trusted computing, platform governance, and the social impacts of new technologies. In the coming weeks, the Schwartz Reisman Institute will present additional commentaries from our community that explore the implications of C-27 for Canadians when it comes to privacy, protection against harms, and technological governance.

Bryan Short’s September 14, 2022 posting (The Absolute Bare Minimum: Privacy and the New Bill C-27) on the Open Media website critiques two of the three bills included in Bill C-27, Note: Links have been removed,

The Canadian government has taken the first step towards creating new privacy rights for people in Canada. After a failed attempt in 2020 and three years of inaction since the proposal of the digital charter, the government has tabled another piece of legislation aimed at giving people in Canada the privacy rights they deserve.

In this post, we’ll explore how Bill C-27 compares to Canada’s current privacy legislation, how it stacks up against our international peers, and what it means for you. This post considers two of the three acts being proposed in Bill C-27, the Consumer Privacy Protection Act (CPPA) and the Personal Information and Data Tribunal Act (PIDTA), and doesn’t discuss the Artificial Intelligence and Data Act [emphasis mine]. The latter Act’s engagement with very new and complex issues means we think it deserves its own consideration separate from existing privacy proposals, and will handle it as such.

If we were to give Bill C-27’s CPPA and PIDTA a grade, it’d be a D. This is legislation that does the absolute bare minimum for privacy protections in Canada, and in some cases it will make things actually worse. If they were proposed and passed a decade ago, we might have rated it higher. However, looking ahead at predictable movement in data practices over the next ten – or even twenty – years, these laws will be out of date the moment they are passed, and leave people in Canada vulnerable to a wide range of predatory data practices. For detailed analysis, read on – but if you’re ready to raise your voice, go check out our action calling for positive change before C-27 passes!

Taking this all into account, Bill C-27 isn’t yet the step forward for privacy in Canada that we need. While it’s an improvement upon the last privacy bill that the government put forward, it misses so many areas that are critical for improvement, like failing to put people in Canada above the commercial interests of companies.

If Open Media has followed up with an AIDA critique, I have not been able to find it on their website.

AI & creativity events for August and September 2022 (mostly)

This information about these events and papers comes courtesy of the Metacreation Lab for Creative AI (artificial intelligence) at Simon Fraser University and, as usual for the lab, the emphasis is on music.

Music + AI Reading Group @ Mila x Vector Institute

Philippe Pasquier, Metacreation Lab director and professor, is giving a presentation on Friday, August 12, 2022 at 11 am PST (2 pm EST). Here’s more from the August 10, 2022 Metacreation Lab announcement (received via email),

Metacreation Lab director Philippe Pasquier and PhD researcher Jeff Enns will be presenting next week [tomorrow, on August 12, 2022] at the Music + AI Reading Group hosted by Mila. The presentation will be available as a Zoom meeting.

Mila is a community of more than 900 researchers specializing in machine learning and dedicated to scientific excellence and innovation. The institute is recognized for its expertise and significant contributions in areas such as modelling language, machine translation, object recognition and generative models.

I believe it’s also possible to view the presentation from the Music + AI Reading Group at MILA: presentation by Dr. Philippe Pasquier webpage on the Simon Fraser University website.

For anyone curious about Mila – Québec Artificial Intelligence Institute (based in Montréal) and the Vector Institute for Artificial Intelligence (based in Toronto), both are part of the Pan-Canadian Artificial Intelligence Strategy (a Canadian federal government funding initiative).

Getting back to the Music + AI Reading Group @ Mila x Vector Institute, there is an invitation to join the group which meets every Friday at 2 pm EST, from the Google group page,

Feb 24, 2022, to Community Announcements: 🎹🧠🚨 Online Music + AI Reading Group @ Mila x Vector Institute 🎹🧠🚨

Dear members of the ISMIR [International Society for Music Information Retrieval] Community,

Together with fellow researchers at Mila (the Québec AI Institute) in Montréal, canada [sic], we have the pleasure of inviting you to join the Music + AI Reading Group @ Mila x Vector Institute. Our reading group gathers every Friday at 2pm Eastern Time. Our purpose is to build an interdisciplinary forum of researchers, students and professors alike, across industry and academia, working at the intersection of Music and Machine Learning. 

During each meeting, a speaker presents a research paper of their choice during 45’, leaving 15 minutes for questions and discussion. The purpose of the reading group is to :
– Gather a group of Music+AI/HCI [human-computer interface]/others people to share their research, build collaborations, and meet peer students. We are not constrained to any specific research directions, and all people are welcome to contribute.
– People share research ideas and brainstorm with others.
– Researchers not actively working on music-related topics but interested in the field can join and keep up with the latest research in the area, sharing their thoughts and bringing in their own backgrounds.

Our topics of interest cover (beware : the list is not exhaustive !) :
🎹 Music Generation
🧠 Music Understanding
📇 Music Recommendation
🗣  Source Separation and Instrument Recognition
🎛  Acoustics
🗿 Digital Humanities …
🙌  … and more (we are waiting for you :]) !


If you wish to attend one of our upcoming meetings, simply join our Google Group : https://groups.google.com/g/music_reading_group. You will automatically subscribe to our weekly mailing list and be able to contact other members of the group.

Here is the link to our Youtube Channel where you’ll find recordings of our past meetings : https://www.youtube.com/channel/UCdrzCFRsIFGw2fiItAk5_Og.
Here are general information about the reading group (presentation slides) : https://docs.google.com/presentation/d/1zkqooIksXDuD4rI2wVXiXZQmXXiAedtsAqcicgiNYLY/edit?usp=sharing.

Finally, if you would like to contribute and give a talk about your own research, feel free to fill in the following spreadsheet in the slot of your choice ! —> https://docs.google.com/spreadsheets/d/1skb83P8I30XHmjnmyEbPAboy3Lrtavt_jHrD-9Q5U44/edit?usp=sharing

Bravo to the two student organizers for putting this together!

Calliope Composition Environment for music makers

From the August 10, 2022 Metacreation Lab announcement,

Calling all music makers! We’d like to share some exciting news on one of the latest music creation tools from its creators.

Calliope is an interactive environment based on MMM for symbolic music generation in computer-assisted composition. Using this environment, the user can generate or regenerate symbolic music from a “seed” MIDI file by using a practical and easy-to-use graphical user interface (GUI). Through MIDI streaming, the system can interface with your favourite DAW (Digital Audio Workstation) such as Ableton Live, allowing creators to combine the possibilities of generative composition with their preferred virtual instrument and sound design environments.

The project has now entered an open beta-testing phase and is inviting music creators to try the compositional system on their own! Head to the Metacreation website to learn more and register for the beta testing.

Learn More About Calliope Here

You can also listen to a Calliope piece “the synthrider,” an Italo-disco fantasy of a machine, by Philippe Pasquier and Renaud Bougueng Tchemeube for the 2022 AI Song Contest.
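
For anyone who’d like a feel for what “generating from a seed MIDI file” involves, here’s a minimal sketch. It is not Calliope or MMM code; it simply reads a seed file with the mido library and writes out a naive variation by resampling pitches from the seed (real systems use trained models rather than random substitution), and the file names are placeholders.

```python
# Toy "regenerate from a seed MIDI file" sketch; NOT Calliope/MMM code.
# Assumes a file named seed.mid exists; mido is a common Python MIDI library.
import random
import mido

seed = mido.MidiFile("seed.mid")
pitches = [msg.note for track in seed.tracks
           for msg in track
           if msg.type == "note_on" and msg.velocity > 0]

out = mido.MidiFile(ticks_per_beat=seed.ticks_per_beat)
for track in seed.tracks:
    new_track = mido.MidiTrack()
    active = {}  # remembers substituted pitches so each note_off matches its note_on
    for msg in track:
        if msg.type == "note_on" and msg.velocity > 0:
            new_pitch = random.choice(pitches)        # naive stand-in for a trained model
            active[(msg.channel, msg.note)] = new_pitch
            new_track.append(msg.copy(note=new_pitch))
        elif msg.type in ("note_off", "note_on"):     # note_on with velocity 0 ends a note
            new_pitch = active.pop((msg.channel, msg.note), msg.note)
            new_track.append(msg.copy(note=new_pitch))
        else:
            new_track.append(msg.copy())
    out.tracks.append(new_track)

out.save("regenerated.mid")
```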

3rd Conference on AI Music Creativity (AIMC 2022)

This is an online conference and it’s free, but you do have to register. From the August 10, 2022 Metacreation Lab announcement,

Registration has opened  for the 3rd Conference on AI Music Creativity (AIMC 2022), which will be held 13-15 September, 2022. The conference features 22 accepted papers, 14 music works, and 2 workshops. Registered participants will get full access to the scientific and artistic program, as well as conference workshops and virtual social events. 

The full conference program is now available online

Registration, free but mandatory, is available here:

Free Registration for AIMC 2022 

The conference theme is “The Sound of Future Past — Colliding AI with Music Tradition” and I noticed that a number of the organizers are based in Japan. Often, the organizers’ home country gets some extra time in the spotlight, which is what makes these international conferences so interesting and valuable.

Autolume Live

This concerns generative adversarial networks (GANs) and a paper proposing “… Autolume-Live, the first GAN-based live VJing-system for controllable video generation.”

Here’s more from the August 10, 2022 Metacreation Lab announcement,

Jonas Kraasch & Philippe Pasquier recently presented their latest work on the Autolume system at xCoAx, the 10th annual Conference on Computation, Communication, Aesthetics & X. Their paper is an in-depth exploration of the ways that creative artificial intelligence is increasingly used to generate static and animated visuals.

While there are a host of systems to generate images, videos and music videos, there is a lack of real-time video synthesisers for live music performances. To address this gap, Kraasch and Pasquier propose Autolume-Live, the first GAN-based live VJing-system for controllable video generation.

Autolume Live on xCoAx proceedings  

As these things go, the paper is readable even by nonexperts (assuming you have some tolerance for being out of your depth from time to time). Here’s an example of the text and an installation (in Kelowna, BC) from the paper, Autolume-Live: Turning GANs into a Live VJing tool,

Due to the 2020-2022 situation surrounding COVID-19, we were unable to use our system to accompany live performances. We have used different iterations of Autolume-Live to create two installations. We recorded some curated sessions and displayed them at the Distopya sound art festival in Istanbul 2021 (Dystopia Sound and Art Festival 2021) and Light-Up Kelowna 2022 (ARTSCO 2022) [emphasis mine]. In both iterations, we let the audio mapping automatically generate the video without using any of the additional image manipulations. These installations show that the system on its own is already able to generate interesting and responsive visuals for a musical piece.

For the installation at the Distopya sound art festival we trained a StyleGAN2(-ada) model on abstract paintings and rendered a video using the described Latent Space Traversal mapping. For this particular piece we ran a super-resolution model on the final video as the original video output was in 512×512 and the wanted resolution was 4k. For our piece at Light-Up Kelowna [emphasis mine] we ran Autolume-Live with the Latent Space Interpolation mapping. The display included three urban screens, which allowed us to showcase three renders at the same time. We composed a video triptych using a dataset of figure drawings, a dataset of medical sketches and to tie the two videos together a model trained on a mixture of both datasets.
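
For the curious, the “audio mapping” and “Latent Space Interpolation mapping” mentioned above can be sketched in a few lines. The snippet below is a simplified illustration, not the Autolume-Live code: it assumes a hypothetical generate_frame(z) wrapper around a pretrained GAN generator and uses the audio’s loudness to decide how quickly to move between two random points in latent space.

```python
# Simplified audio-driven latent interpolation; NOT the Autolume-Live implementation.
# generate_frame(z) is a hypothetical wrapper around a pretrained GAN generator.
import numpy as np

def rms_per_frame(audio, sr, fps=30):
    """Root-mean-square loudness of the audio for each video frame."""
    hop = sr // fps
    frames = [audio[i:i + hop] for i in range(0, len(audio) - hop, hop)]
    return np.array([np.sqrt(np.mean(f ** 2)) for f in frames])

def latent_path(audio, sr, dim=512, fps=30, seed=0):
    """Walk from one random latent vector to another, moving faster when the music is louder."""
    rng = np.random.default_rng(seed)
    z_start = rng.standard_normal(dim)
    z_end = rng.standard_normal(dim)
    energy = rms_per_frame(audio, sr, fps)
    t = np.cumsum(energy)
    t = t / t[-1]                      # normalized progress, 0 -> 1 over the piece
    return [(1 - ti) * z_start + ti * z_end for ti in t]

# Stand-in audio: two seconds of a 440 Hz tone that gets louder, for demonstration only.
sr = 22050
time_axis = np.linspace(0, 2, 2 * sr, endpoint=False)
audio = np.sin(2 * np.pi * 440 * time_axis) * np.linspace(0.1, 1.0, time_axis.size)

for z in latent_path(audio, sr):
    pass  # frame = generate_frame(z)  # hypothetical generator call, one image per video frame
```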

I found some additional information about the installation in Kelowna (from a February 7, 2022 article in The Daily Courier),

The artwork is called ‘Autolume Acedia’.

“(It) is a hallucinatory meditation on the ancient emotion called acedia. Acedia describes a mixture of contemplative apathy, nervous nostalgia, and paralyzed angst,” the release states. “Greek monks first described this emotion two millennia ago, and it captures the paradoxical state of being simultaneously bored and anxious.”

Algorithms created the set-to-music artwork but a team of humans associated with Simon Fraser University, including Jonas Kraasch and Philippe Pasquier, was behind the project.

These are among the artistic images generated by a form of artificial intelligence now showing nightly on the exterior of the Rotary Centre for the Arts in downtown Kelowna. [downloaded from https://www.kelownadailycourier.ca/news/article_6f3cefea-886c-11ec-b239-db72e804c7d6.html]

You can find the videos used in the installation and more information on the Metacreation Lab’s Autolume Acedia webpage.

Movement and the Metacreation Lab

Here’s a walk down memory lane: Tom Calvert, a professor at Simon Fraser University (SFU) who died on September 28, 2021, laid the groundwork for SFU’s School of Interactive Arts & Technology (SIAT) and, in particular, for studies in movement. From SFU’s In memory of Tom Calvert webpage,

As a researcher, Tom was most interested in computer-based tools for user interaction with multimedia systems, human figure animation, software for dance, and human-computer interaction. He made significant contributions to research in these areas resulting in the Life Forms system for human figure animation and the DanceForms system for dance choreography. These are now developed and marketed by Credo Interactive Inc., a software company of which he was CEO.

While the Metacreation Lab is largely focused on music, other fields of creativity are also studied, from the August 10, 2022 Metacreation Lab announcement,

MITACS Accelerate award – partnership with Kinetyx

We are excited to announce that the Metacreation Lab researchers will be expanding their work on motion capture and movement data thanks to a new MITACS Accelerate research award. 

The project will focus on ​​body pose estimation using Motion Capture data acquisition through a partnership with Kinetyx, a Calgary-based innovative technology firm that develops in-shoe sensor-based solutions for a broad range of sports and performance applications.

Movement Database – MoDa

On the subject of motion data and its many uses in conjunction with machine learning and AI, we invite you to check out the extensive Movement Database (MoDa), led by transdisciplinary artist and scholar Shannon Cuykendall and AI researcher Omid Alemi.

Spanning a wide range of categories such as dance, affect-expressive movements, gestures, eye movements, and more, this database offers a wealth of experiments and captured data available in a variety of formats.

Explore the MoDa Database

MITACS (originally a federal government mathematics-focused Network of Centres of Excellence) is now a funding agency (most of the funds they distribute come from the federal government) for innovation.

As for the Calgary-based company (in the province of Alberta for those unfamiliar with Canadian geography), here they are in their own words (from the Kinetyx About webpage),

Kinetyx® is a diverse group of talented engineers, designers, scientists, biomechanists, communicators, and creators, along with an energy trader, and a medical doctor that all bring a unique perspective to our team. A love of movement and the science within is the norm for the team, and we’re encouraged to put our sensory insoles to good use. We work closely together to make movement mean something.

We’re working towards a future where movement is imperceptibly quantified and indispensably communicated with insights that inspire action. We’re developing sensory insoles that collect high-fidelity data where the foot and ground intersect. Capturing laboratory quality data, out in the real world, unlocking entirely new ways to train, study, compete, and play. The insights we provide will unlock unparalleled performance, increase athletic longevity, and provide a clear path to return from injury. We transform lives by empowering our growing community to remain moved.

We believe that high quality data is essential for us to have a meaningful place in the Movement Metaverse [1]. Our team of engineers, sport scientists, and developers work incredibly hard to ensure that our insoles and the insights we gather from them will meet or exceed customer expectations. The forces that are created and experienced while standing, walking, running, and jumping are inferred by many wearables, but our sensory insoles allow us to measure, in real-time, what’s happening at the foot-ground intersection. Measurements of force and power in addition to other traditional gait metrics, will provide a clear picture of a part of the Kinesome [2] that has been inaccessible for too long. Our user interface will distill enormous amounts of data into meaningful insights that will lead to positive behavioral change. 

[1] The Movement Metaverse is the collection of ever-evolving immersive experiences that seamlessly span both the physical and virtual worlds with unprecedented interoperability.

[2] Kinesome is the dynamic characterization and quantification encoded in an individual’s movement and activity. Broadly; an individual’s unique and dynamic movement profile. View the kinesome nft. [Note: Was not able to successfully open link as of August 11, 2022]

“… make movement mean something … .” Really?

The reference to “… energy trader …” had me puzzled but an August 11, 2022 Google search at 11:53 am PST unearthed this,

An energy trader is a finance professional who manages the sales of valuable energy resources like gas, oil, or petroleum. An energy trader is expected to handle energy production and financial matters in such a fast-paced workplace.May 16, 2022

Perhaps a new meaning for the term is emerging?

AI and visual art show in Vancouver (Canada)

The Vancouver Art Gallery’s (VAG) latest exhibition, “The Imitation Game: Visual Culture in the Age of Artificial Intelligence” is running March 5, 2022 – October 23, 2022. Should you be interested in an exhaustive examination of the exhibit and more, I have a two-part commentary: Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (1 of 2): The Objects and Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (2 of 2): Meditations.

Enjoy the show and/or the commentary, as well as, any other of the events and opportunities listed in this post.

True love with AI (artificial intelligence): The Nature of Things explores emotional and creative AI (long read)

The Canadian Broadcasting Corporation’s (CBC) science television series, The Nature of Things, which has been broadcast since November 1960, explored the world of emotional, empathic and creative artificial intelligence (AI) in a Friday, November 19, 2021 telecast titled, The Machine That Feels,

The Machine That Feels explores how artificial intelligence (AI) is catching up to us in ways once thought to be uniquely human: empathy, emotional intelligence and creativity.

As AI moves closer to replicating humans, it has the potential to reshape every aspect of our world – but most of us are unaware of what looms on the horizon.

Scientists see AI technology as an opportunity to address inequities and make a better, more connected world. But it also has the capacity to do the opposite: to stoke division and inequality and disconnect us from fellow humans. The Machine That Feels, from The Nature of Things, shows viewers what they need to know about a field that is advancing at a dizzying pace, often away from the public eye.

What does it mean when AI makes art? Can AI interpret and understand human emotions? How is it possible that AI creates sophisticated neural networks that mimic the human brain? The Machine That Feels investigates these questions, and more.

In Vienna, composer Walter Werzowa has — with the help of AI — completed Beethoven’s previously unfinished 10th symphony. By feeding data about Beethoven, his music, his style and the original scribbles on the 10th symphony into an algorithm, AI has created an entirely new piece of art.

In Atlanta, Dr. Ayanna Howard and her robotics lab at Georgia Tech are teaching robots how to interpret human emotions. Where others see problems, Howard sees opportunity: how AI can help fill gaps in education and health care systems. She believes we need a fundamental shift in how we perceive robots: let’s get humans and robots to work together to help others.

At Tufts University in Boston, a new type of biological robot has been created: the xenobot. The size of a grain of sand, xenobots are grown from frog heart and skin cells, and combined with the “mind” of a computer. Programmed with a specific task, they can move together to complete it. In the future, they could be used for environmental cleanup, digesting microplastics and targeted drug delivery (like releasing chemotherapy compounds directly into tumours).

The film includes interviews with global leaders, commentators and innovators from the AI field, including Geoff Hinton, Yoshua Bengio, Ray Kurzweil and Douglas Coupland, who highlight some of the innovative and cutting-edge AI technologies that are changing our world.

The Machine That Feels focuses on one central question: in the flourishing age of artificial intelligence, what does it mean to be human?

I’ll get back to that last bit, “… what does it mean to be human?” later.

There’s a lot to appreciate in this 44 min. programme. As you’d expect, there was a significant chunk of time devoted to research being done in the US, but Poland and Japan also featured, and the Canadian content was substantive. A number of tricky topics were covered and the transitions from one topic to the next were smooth.

In the end credits, I counted over 40 source materials from Getty Images, Google Canada, Gatebox, amongst others. It would have been interesting to find out which segments were produced by CBC.

David Suzuki’s (programme host) script was well written and his narration was enjoyable, engaging, and non-intrusive. That last quality is not always true of CBC hosts who can fall into the trap of overdramatizing the text.

Drilling down

I have followed artificial intelligence stories in a passive way (i.e., I don’t seek them out) for many years. Even so, there was a lot of material in the programme that was new to me.

For example, there was this love story (from the ‘I love her and see her as a real woman.’ Meet a man who ‘married’ an artificial intelligence hologram webpage on the CBC),

In the The Machine That Feels, a documentary from The Nature of Things, we meet Kondo Akihiko, a Tokyo resident who “married” a hologram of virtual pop singer Hatsune Miku using a certificate issued by Gatebox (the marriage isn’t recognized by the state, and Gatebox acknowledges the union goes “beyond dimensions”).

I found Akihiko to be quite moving when he described his relationship, which is not unique. It seems some 4,000 men have ‘wed’ their digital companions; you can read about that and more on the ‘I love her and see her as a real woman.’ Meet a man who ‘married’ an artificial intelligence hologram webpage.

What does it mean to be human?

Overall, this Nature of Things episode embraces certainty, which means the question of what it means to be human is referenced rather than seriously discussed. It is an unanswerable philosophical question, and the programme is ill-equipped to address it, especially since none of the commentators are philosophers or seem inclined to philosophize.

The programme presents AI as a juggernaut. Briefly mentioned is the notion that we need to make some decisions about how our juggernaut is developed and utilized. No one discusses how we go about making changes to systems that are already making critical decisions for us. (For more about AI and decision-making, see my February 28, 2017 posting and scroll down to the ‘Algorithms and big data’ subhead for Cathy O’Neil’s description of how important decisions that affect us are being made by AI systems. She is the author of the 2016 book, ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’; still a timely read.)

In fact, the programme’s tone is mostly one of breathless excitement. A few misgivings are expressed, e.g., one woman who has an artificial ‘texting friend’ (Replika; a chatbot app) noted that it can ‘get into your head’: she had a chat where her ‘friend’ told her that all of a woman’s worth is based on her body; she pushed back but intimated that someone more vulnerable could find that messaging difficult to deal with.

The sequence featuring Akihiko and his hologram ‘wife’ is followed by one suggesting that people might become more isolated and emotionally stunted as they interact with artificial friends. It should be noted, Akihiko’s wife is described as ‘perfect’. I gather perfection means that you are always understanding and have no needs of your own. She also seems to be about 18″ high.

Akihiko has obviously been asked about his ‘wife’ before as his answers are ready. They boil down to “there are many types of relationships” and there’s nothing wrong with that. It’s an intriguing thought which is not explored.

Also unexplored, these relationships could be said to resemble slavery. After all, you pay for these friends over which you have control. But perhaps that’s alright since AI friends don’t have consciousness. Or do they? In addition to not being able to answer the question, “what is it to be human?” we still can’t answer the question, “what is consciousness?”

AI and creativity

The Nature of Things team works fast. ‘Beethoven X – The AI Project’ had its first performance on October 9, 2021. (See my October 1, 2021 post ‘Finishing Beethoven’s unfinished 10th Symphony’ for more information from Ahmed Elgammal’s (Director of the Art & AI Lab at Rutgers University) technical perspective on the project.

Briefly, Beethoven died before completing his 10th symphony and a number of computer scientists, musicologists, AI experts, and musicians collaborated to finish the symphony.)

The one listener (Felix Mayer, music professor at the Technical University of Munich) in the hall during a performance doesn’t consider the work to be a piece of music. He does have a point. Beethoven left some notes, but this ’10th’ is at least partly mathematical guesswork: a set of probabilities where an algorithm chooses which note comes next based on probability.
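
To make that last point concrete, here is a toy illustration (in no way the Beethoven X system) of choosing “which note comes next based on probability”: a first-order Markov chain counts which notes follow which in a short seed melody, then samples a continuation from those counts.

```python
# Toy next-note sampler; NOT the Beethoven X system, just the underlying idea.
import random
from collections import defaultdict, Counter

melody = ["E", "E", "F", "G", "G", "F", "E", "D", "C", "C", "D", "E", "E", "D", "D"]

# Count how often each note follows each other note in the seed melody.
transitions = defaultdict(Counter)
for current, following in zip(melody, melody[1:]):
    transitions[current][following] += 1

def next_note(note):
    """Sample the next note in proportion to how often it followed `note` in the seed."""
    counts = transitions[note]
    notes, weights = zip(*counts.items())
    return random.choices(notes, weights=weights)[0]

note = melody[-1]
continuation = []
for _ in range(8):
    note = next_note(note)
    continuation.append(note)

print("continuation:", continuation)
```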

There was another artist also represented in the programme. Puzzlingly, it was the still living Douglas Coupland. In my opinion, he’s better known as a visual artist than a writer (his Wikipedia entry lists him as a novelist first) but he has succeeded greatly in both fields.

What makes his inclusion in the Nature of Things ‘The Machine That Feels’ programme puzzling, is that it’s not clear how he worked with artificial intelligence in a collaborative fashion. Here’s a description of Coupland’s ‘AI’ project from a June 29, 2021 posting by Chris Henry on the Google Outreach blog (Note: Links have been removed),

… when the opportunity presented itself to explore how artificial intelligence (AI) inspires artistic expression — with the help of internationally renowned Canadian artist Douglas Coupland — the Google Research team jumped on it. This collaboration, with the support of Google Arts & Culture, culminated in a project called Slogans for the Class of 2030, which spotlights the experiences of the first generation of young people whose lives are fully intertwined with the existence of AI. 

This collaboration was brought to life by first introducing Coupland’s written work to a machine learning language model. Machine learning is a form of AI that provides computer systems the ability to automatically learn from data. In this case, Google research scientists tuned a machine learning algorithm with Coupland’s 30-year body of written work — more than a million words — so it would familiarize itself with the author’s unique style of writing. From there, curated general-public social media posts on selected topics were added to teach the algorithm how to craft short-form, topical statements. [emphases mine]

Once the algorithm was trained, the next step was to process and reassemble suggestions of text for Coupland to use as inspiration to create twenty-five Slogans for the Class of 2030. [emphasis mine]

“I would comb through ‘data dumps’ where characters from one novel were speaking with those in other novels in ways that they might actually do. It felt like I was encountering a parallel universe Doug,” Coupland says. “And from these outputs, the statements you see here in this project appeared like gems. Did I write them? Yes. No. Could they have existed without me? No.” [emphases mine]

So, the algorithms crunched through Coupland’s words and social media texts to produce slogans, which Coupland then ‘combed through’ to pick out 25 slogans for the ‘Slogans For The Class of 2030’ project. (Note: In the programme, he says that he started a sentence and then the AI system completed that sentence with material gleaned from his own writings, which brings to mind Exquisite Corpse, a collaborative game for writers originated by the Surrealists, possibly as early as 1918.)
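
As a rough sketch of that workflow, the snippet below prompts an off-the-shelf GPT-2 model to complete an opening phrase several times. Coupland’s project used a model tuned on his own writing plus curated social media posts, which is not publicly available, so both the model and the prompt here are stand-ins.

```python
# Sketch of "start a sentence, let the model complete it"; the model and prompt
# are stand-ins, not the tuned model used for Slogans for the Class of 2030.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The class of 2030 will"   # hypothetical opening phrase
completions = generator(prompt, max_length=30, num_return_sequences=3, do_sample=True)

# The human then combs through the outputs and keeps the "gems."
for completion in completions:
    print(completion["generated_text"])
```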

The ‘slogans’ project also reminds me of William S. Burroughs and the cut-up technique used in his work. From the William S. Burroughs Cut-up technique webpage on the Language is a Virus website (Thank you to Lake Rain Vajra for a very interesting website),

The cutup is a mechanical method of juxtaposition in which Burroughs literally cuts up passages of prose by himself and other writers and then pastes them back together at random. This literary version of the collage technique is also supplemented by literary use of other media. Burroughs transcribes taped cutups (several tapes spliced into each other), film cutups (montage), and mixed media experiments (results of combining tapes with television, movies, or actual events). Thus Burroughs’s use of cutups develops his juxtaposition technique to its logical conclusion as an experimental prose method, and he also makes use of all contemporary media, expanding his use of popular culture.

[Burroughs says] “All writing is in fact cut-ups. A collage of words read heard overheard. What else? Use of scissors renders the process explicit and subject to extension and variation. Clear classical prose can be composed entirely of rearranged cut-ups. Cutting and rearranging a page of written words introduces a new dimension into writing enabling the writer to turn images in cinematic variation. Images shift sense under the scissors smell images to sound sight to sound to kinesthetic. This is where Rimbaud was going with his color of vowels. And his “systematic derangement of the senses.” The place of mescaline hallucination: seeing colors tasting sounds smelling forms.

“The cut-ups can be applied to other fields than writing. Dr Neumann [emphasis mine] in his Theory of Games and Economic behavior introduces the cut-up method of random action into game and military strategy: assume that the worst has happened and act accordingly. … The cut-up method could be used to advantage in processing scientific data. [emphasis mine] How many discoveries have been made by accident? We cannot produce accidents to order. The cut-ups could add new dimension to films. Cut gambling scene in with a thousand gambling scenes all times and places. Cut back. Cut streets of the world. Cut and rearrange the word and image in films. There is no reason to accept a second-rate product when you can have the best. And the best is there for all. Poetry is for everyone . . .”

First, John von Neumann (1902 – 57) is a very important figure in the history of computing. From a February 25, 2017 John von Neumann and Modern Computer Architecture essay on the ncLab website, “… he invented the computer architecture that we use today.”

Here’s Burroughs on the history of writers and cutups (thank you to QUEDEAR for posting this clip),

You can hear Burroughs talk about the technique and how he started using it in 1959.

There is no explanation from Coupland as to how his project differs substantively from Burroughs’ cut-ups or a session of Exquisite Corpse. The use of a computer programme to crunch through data and give output doesn’t seem all that exciting. *(More about computers and chatbots at end of posting).* It’s hard to know if this was an interview situation where he wasn’t asked the question or if the editors decided against including it.
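
For comparison, the cut-up itself reduces to a few mechanical lines. The sketch below (the passages are placeholders drawn from text quoted elsewhere in this post) cuts two texts into four-word fragments and pastes a random selection back together, which is roughly what Burroughs did with scissors.

```python
# The cut-up technique as a toy program; the passages are placeholder text.
import random

def cut_up(*passages, fragment_len=4, n_fragments=12, seed=None):
    """Cut the passages into fragment_len-word pieces and reassemble a random selection."""
    rng = random.Random(seed)
    words = [word for passage in passages for word in passage.split()]
    fragments = [words[i:i + fragment_len] for i in range(0, len(words), fragment_len)]
    rng.shuffle(fragments)
    chosen = fragments[:n_fragments]
    return " ".join(word for fragment in chosen for word in fragment)

passage_a = "All writing is in fact cut-ups a collage of words read heard overheard"
passage_b = "The debate over the future of artificial intelligence has been thrust into the mainstream"
print(cut_up(passage_a, passage_b, seed=1))
```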

Kazuo Ishiguro?

Given that Ishiguro’s 2021 book (Klara and the Sun) is focused on an artificial friend and raises the question of ‘what does it mean to be human’, as well as the related question, ‘what is the nature of consciousness’, it would have been interesting to hear from him. He spent a fair amount of time looking into research on machine learning in preparation for his book. Maybe he was too busy?

AI and emotions

The work being done by Georgia Tech’s Dr. Ayanna Howard and her robotics lab is fascinating. They are teaching robots how to interpret human emotions. The segment which features researchers teaching and interacting with robots, Pepper and Salt, also touches on AI and bias.

Watching two African American researchers talk about the ways in which AI is unable to read emotions on ‘black’ faces as accurately as ‘white’ faces is quite compelling. It also reinforces the uneasiness you might feel after the ‘Replika’ segment where an artificial friend informs a woman that her only worth is her body.

(Interestingly, Pepper and Salt are produced by Softbank Robotics, part of Softbank, a multinational Japanese conglomerate, [see a June 28, 2021 article by Ian Carlos Campbell for The Verge] whose entire management team is male according to their About page.)

While Howard is very hopeful about the possibilities of a machine that can read emotions, she doesn’t explore (on camera) any means for pushing back against bias other than training AI by using more black faces to help them learn. Perhaps more representative management and coding teams in technology companies?

While the programme largely focused on AI as an algorithm on a computer, robots can be enabled by AI (as can be seen in the segment with Dr. Howard).

My February 14, 2019 posting features research with a completely different approach to emotions and machines,

“I’ve always felt that robots shouldn’t just be modeled after humans [emphasis mine] or be copies of humans,” he [Guy Hoffman, assistant professor at Cornell University] said. “We have a lot of interesting relationships with other species. Robots could be thought of as one of those ‘other species,’ not trying to copy what we do but interacting with us with their own language, tapping into our own instincts.”

[from a July 16, 2018 Cornell University news release on EurekAlert]

This brings the question back to, what is consciousness?

What scientists aren’t taught

Dr. Howard notes that scientists are not taught to consider the implications of their work. Her comment reminded me of a question I was asked many years ago after a presentation; it concerned whether or not science had any morality. (I said, no.)

My reply angered an audience member (a visual artist who was working with scientists at the time) as she took it personally and started defending scientists as good people who care and have morals and values. She failed to understand that the way in which we teach science conforms to a notion that somewhere there are scientific facts which are neutral and objective. Society and its values are irrelevant in the face of the larger ‘scientific truth’ and, as a consequence, you don’t need to teach or discuss how your values or morals affect that truth or what the social implications of your work might be.

Science is practiced without much if any thought to values. By contrast, there is the medical injunction, “Do no harm,” which suggests to me that someone recognized competing values. E.g., If your important and worthwhile research is harming people, you should ‘do no harm’.

The experts, the connections, and the Canadian content

It’s been a while since I’ve seen Ray Kurzweil mentioned but he seems to be getting more attention these days. (See this November 16, 2021 posting by Jonny Thomson titled, “The Singularity: When will we all become super-humans? Are we really only a moment away from “The Singularity,” a technological epoch that will usher in a new era in human evolution?” on The Big Think for more). Note: I will have a little more about evolution later in this post.

Interestingly, Kurzweil is employed by Google these days (see his Wikipedia entry, the column to the right). So is Geoffrey Hinton, another one of the experts in the programme (see Hinton’s Wikipedia entry, the column to the right, under Institutions).

I’m not sure about Yoshua Bengio’s relationship with Google but he’s a professor at the Université de Montréal, and he’s the Scientific Director for Mila (Quebec’s Artificial Intelligence research institute) & IVADO (Institut de valorisation des données). Note: IVADO is not particularly relevant to what’s being discussed in this post.

As for Mila, the Canada Google blog in a November 21, 2016 posting notes a $4.5M grant to the institution,

Google invests $4.5 Million in Montreal AI Research

A new grant from Google for the Montreal Institute for Learning Algorithms (MILA) will fund seven faculty across a number of Montreal institutions and will help tackle some of the biggest challenges in machine learning and AI, including applications in the realm of systems that can understand and generate natural language. In other words, better understand a fan’s enthusiasm for Les Canadien [sic].

Google is expanding its academic support of deep learning at MILA, renewing Yoshua Bengio’s Focused Research Award and offering Focused Research Awards to MILA faculty at University of Montreal and McGill University:

Google reaffirmed their commitment to Mila in 2020 with a grant worth almost $4M (from a November 13, 2020 posting on the Mila website, Note: A link has been removed),

Google Canada announced today [November 13, 2020] that it will be renewing its funding of Mila – Quebec Artificial Intelligence Institute, with a generous pledge of nearly $4M over a three-year period. Google previously invested $4.5M US in 2016, enabling Mila to grow from 25 to 519 researchers.

In a piece written for Google’s Official Canada Blog, Yoshua Bengio, Mila Scientific Director, says that this year marked a “watershed moment for the Canadian AI community,” as the COVID-19 pandemic created unprecedented challenges that demanded rapid innovation and increased interdisciplinary collaboration between researchers in Canada and around the world.

COVID-19 has changed the world forever and many industries, from healthcare to retail, will need to adapt to thrive in our ‘new normal.’ As we look to the future and how priorities will shift, it is clear that AI is no longer an emerging technology but a useful tool that can serve to solve world problems. Google Canada recognizes not only this opportunity but the important task at hand and I’m thrilled they have reconfirmed their support of Mila with an additional $3.95 million funding grant until 22.

– Yoshua Bengio, for Google’s Official Canada Blog

Interesting, eh? Of course, Douglas Coupland is working with Google, presumably for money, and that would connect over 50% of the Canadian content (Douglas Coupland, Yoshua Bengio, and Geoffrey Hinton; Kurzweil is an American) in the programme to Google.

My hat’s off to Google’s marketing communications and public relations teams.

Anthony Morgan of Science Everywhere also provided some Canadian content. His LinkedIn profile indicates that he’s working on a PhD in molecular science, which is described this way, “My work explores the characteristics of learning environments, that support critical thinking and the relationship between critical thinking and wisdom.”

Morgan is also the founder and creative director of Science Everywhere, from his LinkedIn profile, “An events & media company supporting knowledge mobilization, community engagement, entrepreneurship and critical thinking. We build social tools for better thinking.”

There is this from his LinkedIn profile,

I develop, create and host engaging live experiences & media to foster critical thinking.

I’ve spent my 15+ years studying and working in psychology and science communication, thinking deeply about the most common individual and societal barriers to critical thinking. As an entrepreneur, I lead a team to create, develop and deploy cultural tools designed to address those barriers. As a researcher I study what we can do to reduce polarization around science.

There’s a lot more to Morgan (do look him up; he has connections to the CBC and other media outlets). The difficulty is: why was he chosen to talk about artificial intelligence and emotions and creativity when he doesn’t seem to know much about the topic? He does mention GPT-3, a large language model. He seems to be acting as an advocate for AI although he offers this bit of almost cautionary wisdom, “… algorithms are sets of instructions.” (You can find out more about GPT-3 in my April 27, 2021 posting. There’s also this November 26, 2021 posting [The Inherent Limitations of GPT-3] by Andrey Kurenkov, a PhD student with the Stanford [University] Vision and Learning Lab.)

Most of the cautionary commentary comes from Luke Stark, assistant professor at Western [Ontario] University’s Faculty of Information and Media Studies. He’s the one who mentions stunted emotional growth.

Before moving on, there is another set of connections through the Pan-Canadian Artificial Intelligence Strategy, a Canadian government science funding initiative announced in the 2017 federal budget. The funds allocated to the strategy are administered by the Canadian Institute for Advanced Research (CIFAR). Yoshua Bengio through Mila is associated with the strategy and CIFAR, as is Geoffrey Hinton through his position as Chief Scientific Advisor for the Vector Institute.

Evolution

Getting back to “The Singularity: When will we all become super-humans? Are we really only a moment away from “The Singularity,” a technological epoch that will usher in a new era in human evolution?” Xenobots point in a disconcerting (for some of us) evolutionary direction.

I featured the work, which is being done at Tufts University in the US, in my June 21, 2021 posting, which includes an embedded video,

From a March 31, 2021 news item on ScienceDaily,

Last year, a team of biologists and computer scientists from Tufts University and the University of Vermont (UVM) created novel, tiny self-healing biological machines from frog cells called “Xenobots” that could move around, push a payload, and even exhibit collective behavior in the presence of a swarm of other Xenobots.

Get ready for Xenobots 2.0.

Also from an excerpt in the posting, the team has “created life forms that self-assemble a body from single cells, do not require muscle cells to move, and even demonstrate the capability of recordable memory.”

Memory is key to intelligence and this work introduces the notion of ‘living’ robots which leads to questioning what constitutes life. ‘The Machine That Feels’ is already grappling with far too many questions to address this development but introducing the research here might have laid the groundwork for the next episode, The New Human, telecast on November 26, 2021,

While no one can be certain what will happen, evolutionary biologists and statisticians are observing trends that could mean our future feet only have four toes (so long, pinky toe) or our faces may have new combinations of features. The new humans might be much taller than their parents or grandparents, or have darker hair and eyes.

And while evolution takes a lot of time, we might not have to wait too long for a new version of ourselves.

Technology is redesigning the way we look and function — at a much faster pace than evolution. We are merging with technology more than ever before: our bodies may now have implanted chips, smart limbs, exoskeletons and 3D-printed organs. A revolutionary gene editing technique has given us the power to take evolution into our own hands and alter our own DNA. How long will it be before we are designing our children?

As the story about the xenobots doesn’t say, we could also take the evolution of another species into our hands.

David Suzuki, where are you?

Our programme host, David Suzuki, surprised me. I thought that as an environmentalist he’d point out that the huge amounts of computing power needed for artificial intelligence, as mentioned in the programme, constitute an environmental issue. I also would have expected a geneticist like Suzuki might have some concerns with regard to xenobots but perhaps that’s being saved for the next episode (The New Human) of the Nature of Things.

Artificial stupidity

Thanks to Will Knight for introducing me to the term ‘artificial stupidity’. Knight, a senior writer, covers artificial intelligence for WIRED magazine. According to the term’s Wikipedia entry,

Artificial stupidity is commonly used as a humorous opposite of the term artificial intelligence (AI), often as a derogatory reference to the inability of AI technology to adequately perform its tasks.[1] However, within the field of computer science, artificial stupidity is also used to refer to a technique of “dumbing down” computer programs in order to deliberately introduce errors in their responses.

Knight was using the term in its humorous, derogatory form.
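
The computer-science sense of the term, quoted above, is easy to picture in code. Here is a minimal, hypothetical sketch of “dumbing down” a game-playing program: most of the time it plays the highest-valued move, but some fraction of the time it deliberately plays a random one so a human opponent stands a chance.

```python
# Deliberately introducing errors ("artificial stupidity"), sketched for a toy game AI.
import random

def choose_move(scored_moves, blunder_rate=0.3, rng=random):
    """scored_moves maps each legal move to an estimated value (higher is better)."""
    if rng.random() < blunder_rate:
        return rng.choice(list(scored_moves))        # deliberate error
    return max(scored_moves, key=scored_moves.get)   # best available move

moves = {"centre": 0.9, "corner": 0.6, "edge": 0.2}
print(choose_move(moves))
```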

Finally

The episode certainly got me thinking, if not quite in the way the producers might have hoped. ‘The Machine That Feels’ is a glossy, pretty well researched piece of infotainment.

To be blunt, I like and have no problems with infotainment but it can be seductive. I found it easier to remember the artificial friends, wife, xenobots, and symphony than the critiques and concerns.

Hopefully, ‘The Machine That Feels’ stimulates more interest in some very important topics. If you missed the telecast, you can catch the episode here.

For anyone curious about predictive policing, which was mentioned in the Ayanna Howard segment, see my November 23, 2017 posting about Vancouver’s plunge into AI and car theft.

*ETA December 6, 2021: One of the first ‘chatterbots’ was ELIZA, a computer programme developed from 1964 to 1966. The most famous ELIZA script was DOCTOR, where the programme simulated a therapist. Many early users believed ELIZA understood and could respond as a human would despite Joseph Weizenbaum’s (creator of the programme) insistence otherwise.
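
To give a sense of how little machinery was involved, here is a few-line sketch in the spirit of the DOCTOR script (not Weizenbaum’s actual code): canned patterns, pronoun reflection, and no understanding at all.

```python
# A tiny DOCTOR-style responder; a sketch in the spirit of ELIZA, not the original code.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),
]

def reflect(text):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def doctor(utterance):
    for pattern, template in RULES:
        match = re.match(pattern, utterance.lower())
        if match:
            return template.format(*(reflect(group) for group in match.groups()))

print(doctor("I feel that my work is pointless"))
# -> Why do you feel that your work is pointless?
```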

A newsletter from the Pan-Canadian AI strategy folks

The AICan (Artificial Intelligence Canada) Bulletin is published by CIFAR (Canadian Institute For Advanced Research) and it is the official newsletter for the Pan-Canadian AI Strategy. This is a joint production from CIFAR, Amii (Alberta Machine Intelligence Institute), Mila (Quebec’s Artificial Intelligence research institute) and the Vector Institute for Artificial Intelligence (Toronto, Ontario).

For anyone curious about the Pan-Canadian Artificial Intelligence Strategy, first announced in the 2017 federal budget, I have a March 31, 2017 post which focuses heavily on the, then new, Vector Institute but it also contains information about the artificial intelligence scene in Canada at the time, which is at least in part still relevant today.

The AICan Bulletin October 2021 issue number 16 (The Energy and Environment Issue) is available for viewing here and includes these articles,

Equity, diversity and inclusion in AI climate change research

The effects of climate change significantly impact our most vulnerable populations. Canada CIFAR AI Chair David Rolnick (Mila) and Tami Vasanthakumaran (Girls Belong Here) share their insights and call to action for the AI research community.

Predicting the perfect storm

Canada CIFAR AI Chair Samira Kahou (Mila) is using AI to detect and predict extreme weather events to aid in disaster management and raise awareness for the climate crisis.

AI in biodiversity is crucial to our survival

Graham Taylor, a Canada CIFAR AI Chair at the Vector Institute, is using machine learning to build an inventory of life on Earth with DNA barcoding.

ISL Adapt uses ML to make water treatment cleaner & greener

Amii, the University of Alberta, and ISL Engineering explore how machine learning can make water treatment more environmentally friendly and cost-effective with the support of Amii Fellows and Canada CIFAR AI Chairs — Adam White, Martha White and Csaba Szepesvári.

This climate does not exist: Picturing impacts of the climate crisis with AI, one address at a time

Immerse yourself in this AI-driven virtual experience based on empathy to visualize the impacts of climate change on places you hold dear with Mila.

The bulletin also features AI stories from Canada and the US, as well as, events and job postings.

I found two different pages where you can subscribe. First, there’s this subscription page (which is at the bottom of the October 2021 bulletin) and then there’s this page, which requires more details from you.

I’ve taken a look at the CIFAR website and can’t find any of the previous bulletins on it, which would seem to make subscription the only means of access.