Category Archives: Technology

AI & creativity events for August and September 2022 (mostly)

The information about these events and papers comes courtesy of the Metacreation Lab for Creative AI (artificial intelligence) at Simon Fraser University and, as usual for the lab, the emphasis is on music.

Music + AI Reading Group @ Mila x Vector Institute

Philippe Pasquier, Metacreation Lab director and professor, is giving a presentation on Friday, August 12, 2022 at 11 am PDT (2 pm EDT). Here’s more from the August 10, 2022 Metacreation Lab announcement (received via email),

Metacreation Lab director Philippe Pasquier and PhD researcher Jeff Enns will be presenting next week [tomorrow, on August 12, 2022] at the Music + AI Reading Group hosted by Mila. The presentation will be available as a Zoom meeting.

Mila is a community of more than 900 researchers specializing in machine learning and dedicated to scientific excellence and innovation. The institute is recognized for its expertise and significant contributions in areas such as language modelling, machine translation, object recognition and generative models.

I believe it’s also possible to view the presentation from the “Music + AI Reading Group at MILA: presentation by Dr. Philippe Pasquier” webpage on the Simon Fraser University website.

For anyone curious about Mila – Québec Artificial Intelligence Institute (based in Montréal) and the Vector Institute for Artificial Intelligence (based in Toronto), both are part of the Pan-Canadian Artificial Intelligence Strategy (a Canadian federal government funding initiative).

Getting back to the Music + AI Reading Group @ Mila x Vector Institute, there is an invitation to join the group which meets every Friday at 2 pm EST, from the Google group page,

Feb 24, 2022, to Community Announcements: 🎹🧠🚨 Online Music + AI Reading Group @ Mila x Vector Institute 🎹🧠🚨

Dear members of the ISMIR [International Society for Music Information Retrieval] Community,

Together with fellow researchers at Mila (the Québec AI Institute) in Montréal, canada [sic], we have the pleasure of inviting you to join the Music + AI Reading Group @ Mila x Vector Institute. Our reading group gathers every Friday at 2pm Eastern Time. Our purpose is to build an interdisciplinary forum of researchers, students and professors alike, across industry and academia, working at the intersection of Music and Machine Learning. 

During each meeting, a speaker presents a research paper of their choice for 45 minutes, leaving 15 minutes for questions and discussion. The purpose of the reading group is to:
– Gather a group of Music+AI/HCI [human-computer interaction]/other people to share their research, build collaborations, and meet peer students. We are not constrained to any specific research directions, and all people are welcome to contribute.
– Let people share research ideas and brainstorm with others.
– Allow researchers not actively working on music-related topics but interested in the field to join and keep up with the latest research in the area, sharing their thoughts and bringing in their own backgrounds.

Our topics of interest cover (beware: the list is not exhaustive!):
🎹 Music Generation
🧠 Music Understanding
📇 Music Recommendation
🗣  Source Separation and Instrument Recognition
🎛  Acoustics
🗿 Digital Humanities …
🙌  … and more (we are waiting for you :]) !


If you wish to attend one of our upcoming meetings, simply join our Google Group: https://groups.google.com/g/music_reading_group. You will automatically subscribe to our weekly mailing list and be able to contact other members of the group.

Here is the link to our YouTube Channel, where you’ll find recordings of our past meetings: https://www.youtube.com/channel/UCdrzCFRsIFGw2fiItAk5_Og.
Here is general information about the reading group (presentation slides): https://docs.google.com/presentation/d/1zkqooIksXDuD4rI2wVXiXZQmXXiAedtsAqcicgiNYLY/edit?usp=sharing.

Finally, if you would like to contribute and give a talk about your own research, feel free to fill in the following spreadsheet in the slot of your choice! → https://docs.google.com/spreadsheets/d/1skb83P8I30XHmjnmyEbPAboy3Lrtavt_jHrD-9Q5U44/edit?usp=sharing

Bravo to the two student organizers for putting this together!

Calliope Composition Environment for music makers

From the August 10, 2022 Metacreation Lab announcement,

Calling all music makers! We’d like to share some exciting news about one of the latest music creation tools from its creators.

Calliope is an interactive environment based on MMM for symbolic music generation in computer-assisted composition. Using this environment, the user can generate or regenerate symbolic music from a “seed” MIDI file by using a practical and easy-to-use graphical user interface (GUI). Through MIDI streaming, the system can interface with your favourite DAW (Digital Audio Workstation), such as Ableton Live, allowing creators to combine the possibilities of generative composition with their preferred virtual instruments and sound design environments.

The project has now entered an open beta-testing phase, and the lab is inviting music creators to try the compositional system for themselves! Head to the Metacreation website to learn more and register for the beta testing.

Learn More About Calliope Here

You can also listen to a Calliope piece “the synthrider,” an Italo-disco fantasy of a machine, by Philippe Pasquier and Renaud Bougueng Tchemeube for the 2022 AI Song Contest.

3rd Conference on AI Music Creativity (AIMC 2022)

This is an online conference and it’s free, but you do have to register. From the August 10, 2022 Metacreation Lab announcement,

Registration has opened for the 3rd Conference on AI Music Creativity (AIMC 2022), which will be held 13-15 September, 2022. The conference features 22 accepted papers, 14 music works, and 2 workshops. Registered participants will get full access to the scientific and artistic program, as well as conference workshops and virtual social events.

The full conference program is now available online

Registration, free but mandatory, is available here:

Free Registration for AIMC 2022 

The conference theme is “The Sound of Future Past — Colliding AI with Music Tradition” and I noticed that a number of the organizers are based in Japan. Often, the organizers’ home country gets some extra time in the spotlight, which is what makes these international conferences so interesting and valuable.

Autolume Live

This concerns generative adversarial networks (GANs) and a paper proposing “… Autolume-Live, the first GAN-based live VJing-system for controllable video generation.”

Here’s more from the August 10, 2022 Metacreation Lab announcement,

Jonas Kraasch & Phiippe Pasquier recently presented their latest work on the Autolume system at xCoAx, the 10th annual Conference on Computation, Communication, Aesthetics & X. Their paper is an in-depth exploration of the ways that creative artificial intelligence is increasingly used to generate static and animated visuals. 

While there are a host of systems to generate images, videos and music videos, there is a lack of real-time video synthesisers for live music performances. To address this gap, Kraasch and Pasquier propose Autolume-Live, the first GAN-based live VJing-system for controllable video generation.

Autolume Live on xCoAx proceedings  

As these things go, the paper is readable even by nonexperts (assuming you have some tolerance for being out of your depth from time to time). Here’s an example of the text, along with an installation (in Kelowna, BC), from the paper, Autolume-Live: Turning GANs into a Live VJing tool,

Due to the 2020-2022 situation surrounding COVID-19, we were unable to use our system to accompany live performances. We have used different iterations of Autolume-Live to create two installations. We recorded some curated sessions and displayed them at the Distopya sound art festival in Istanbul 2021 (Dystopia Sound and Art Festival 2021) and Light-Up Kelowna 2022 (ARTSCO 2022) [emphasis mine]. In both iterations, we let the audio mapping automatically generate the video without using any of the additional image manipulations. These installations show that the system on its own is already able to generate interesting and responsive visuals for a musical piece.

For the installation at the Distopya sound art festival we trained a StyleGAN2(-ada) model on abstract paintings and rendered a video using the described Latent Space Traversal mapping. For this particular piece we ran a super-resolution model on the final video, as the original video output was in 512×512 and the wanted resolution was 4K. For our piece at Light-Up Kelowna [emphasis mine] we ran Autolume-Live with the Latent Space Interpolation mapping. The display included three urban screens, which allowed us to showcase three renders at the same time. We composed a video triptych using a dataset of figure drawings, a dataset of medical sketches and, to tie the two videos together, a model trained on a mixture of both datasets.
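For the curious, the ‘Latent Space Interpolation mapping’ the authors describe rests on a simple idea: a GAN generator turns a latent vector (a list of numbers) into an image, so gliding between two latent vectors and rendering each intermediate point produces a smoothly morphing sequence of frames. Here’s a minimal sketch of that idea; note that the `generate` function below is a toy stand-in I’ve made up for illustration, not the trained StyleGAN2 model or the Autolume-Live code,

```python
import numpy as np

def interpolate_latents(z_start, z_end, steps):
    """Linear interpolation between two latent vectors; feeding each
    intermediate vector to a GAN generator yields one video frame,
    morphing smoothly from the first image to the second."""
    ts = np.linspace(0.0, 1.0, steps)
    return [(1 - t) * z_start + t * z_end for t in ts]

# Toy stand-in for a trained generator (a real system would call a
# StyleGAN2 model here): maps a 512-dim latent to a tiny 8x8 "image".
rng = np.random.default_rng(1)
weights = rng.normal(size=(512, 64))

def generate(z):
    return np.tanh(z @ weights).reshape(8, 8)

z_a, z_b = rng.normal(size=512), rng.normal(size=512)
frames = [generate(z) for z in interpolate_latents(z_a, z_b, steps=30)]
print(len(frames), frames[0].shape)  # 30 frames, each 8x8
```

In the live-VJing setting, the audio analysis would continually supply new target latent vectors, so the visuals keep morphing in response to the music.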

I found some additional information about the installation in Kelowna (from a February 7, 2022 article in The Daily Courier),

The artwork is called ‘Autolume Acedia’.

“(It) is a hallucinatory meditation on the ancient emotion called acedia. Acedia describes a mixture of contemplative apathy, nervous nostalgia, and paralyzed angst,” the release states. “Greek monks first described this emotion two millennia ago, and it captures the paradoxical state of being simultaneously bored and anxious.”

Algorithms created the set-to-music artwork but a team of humans associated with Simon Fraser University, including Jonas Kraasch and Philippe Pasquier, was behind the project.

These are among the artistic images generated by a form of artificial intelligence now showing nightly on the exterior of the Rotary Centre for the Arts in downtown Kelowna. [downloaded from https://www.kelownadailycourier.ca/news/article_6f3cefea-886c-11ec-b239-db72e804c7d6.html]

You can find the videos used in the installation and more information on the Metacreation Lab’s Autolume Acedia webpage.

Movement and the Metacreation Lab

Here’s a walk down memory lane: Tom Calvert, a professor at Simon Fraser University (SFU) who died on September 28, 2021, laid the groundwork for SFU’s School of Interactive Arts & Technology (SIAT) and, in particular, for studies in movement. From SFU’s In memory of Tom Calvert webpage,

As a researcher, Tom was most interested in computer-based tools for user interaction with multimedia systems, human figure animation, software for dance, and human-computer interaction. He made significant contributions to research in these areas resulting in the Life Forms system for human figure animation and the DanceForms system for dance choreography. These are now developed and marketed by Credo Interactive Inc., a software company of which he was CEO.

While the Metacreation Lab is largely focused on music, other fields of creativity are also studied, from the August 10, 2022 Metacreation Lab announcement,

MITACS Accelerate award – partnership with Kinetyx

We are excited to announce that the Metacreation Lab researchers will be expanding their work on motion capture and movement data thanks to a new MITACS Accelerate research award. 

The project will focus on body pose estimation using Motion Capture data acquisition through a partnership with Kinetyx, a Calgary-based innovative technology firm that develops in-shoe sensor-based solutions for a broad range of sports and performance applications.

Movement Database – MoDa

On the subject of motion data and its many uses in conjunction with machine learning and AI, we invite you to check out the extensive Movement Database (MoDa), led by transdisciplinary artist and scholar Shannon Cuykendall, and AI researcher Omid Alemi.

Spanning a wide range of categories such as dance, affect-expressive movements, gestures, eye movements, and more, this database offers a wealth of experiments and captured data available in a variety of formats.

Explore the MoDa Database

MITACS (originally a federal government mathematics-focused Network of Centres of Excellence) is now a funding agency (most of the funds they distribute come from the federal government) for innovation.

As for the Calgary-based company (in the province of Alberta for those unfamiliar with Canadian geography), here they are in their own words (from the Kinetyx About webpage),

Kinetyx® is a diverse group of talented engineers, designers, scientists, biomechanists, communicators, and creators, along with an energy trader, and a medical doctor that all bring a unique perspective to our team. A love of movement and the science within is the norm for the team, and we’re encouraged to put our sensory insoles to good use. We work closely together to make movement mean something.

We’re working towards a future where movement is imperceptibly quantified and indispensably communicated with insights that inspire action. We’re developing sensory insoles that collect high-fidelity data where the foot and ground intersect. Capturing laboratory quality data, out in the real world, unlocking entirely new ways to train, study, compete, and play. The insights we provide will unlock unparalleled performance, increase athletic longevity, and provide a clear path to return from injury. We transform lives by empowering our growing community to remain moved.

We believe that high quality data is essential for us to have a meaningful place in the Movement Metaverse [1]. Our team of engineers, sport scientists, and developers work incredibly hard to ensure that our insoles and the insights we gather from them will meet or exceed customer expectations. The forces that are created and experienced while standing, walking, running, and jumping are inferred by many wearables, but our sensory insoles allow us to measure, in real-time, what’s happening at the foot-ground intersection. Measurements of force and power in addition to other traditional gait metrics, will provide a clear picture of a part of the Kinesome [2] that has been inaccessible for too long. Our user interface will distill enormous amounts of data into meaningful insights that will lead to positive behavioral change. 

[1] The Movement Metaverse is the collection of ever-evolving immersive experiences that seamlessly span both the physical and virtual worlds with unprecedented interoperability.

[2] Kinesome is the dynamic characterization and quantification encoded in an individual’s movement and activity. Broadly; an individual’s unique and dynamic movement profile. View the kinesome nft. [Note: I was not able to successfully open the link as of August 11, 2022.]

“… make movement mean something … .” Really?

The reference to “… energy trader …” had me puzzled but an August 11, 2022 Google search at 11:53 am PST unearthed this,

An energy trader is a finance professional who manages the sales of valuable energy resources like gas, oil, or petroleum. An energy trader is expected to handle energy production and financial matters in such a fast-paced workplace. (May 16, 2022)

Perhaps a new meaning for the term is emerging?

AI and visual art show in Vancouver (Canada)

The Vancouver Art Gallery’s (VAG) latest exhibition, “The Imitation Game: Visual Culture in the Age of Artificial Intelligence” is running March 5, 2022 – October 23, 2022. Should you be interested in an exhaustive examination of the exhibit and more, I have a two-part commentary: Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (1 of 2): The Objects and Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (2 of 2): Meditations.

Enjoy the show and/or the commentary, as well as, any other of the events and opportunities listed in this post.

Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (2 of 2): Meditations

Dear friend,

I thought it best to break this up a bit. There are a couple of ‘objects’ still to be discussed but this is mostly the commentary part of this letter to you. (Here’s a link for anyone who stumbled here but missed Part 1.)

Ethics, the natural world, social justice, eeek, and AI

Dorothy Woodend, in her March 10, 2022 review for The Tyee, suggests some ethical issues in her critique of the ‘bee/AI collaboration’ and she’s not the only one with concerns. UNESCO (United Nations Educational, Scientific and Cultural Organization) has produced global recommendations for ethical AI (see my March 18, 2022 posting). More recently, there’s “Racist and sexist robots have flawed AI,” a June 23, 2022 posting, where researchers prepared a conference presentation and paper about deeply flawed AI still being used in robots.

Ultimately, the focus is always on humans and Woodend has extended the ethical AI conversation to include insects and the natural world. In short, something less human-centric.

My friend, this reference to the de Young exhibit may seem off topic but I promise it isn’t, in more ways than one. The de Young Museum in San Francisco (February 22, 2020 – June 27, 2021) also held an AI and art show called “Uncanny Valley: Being Human in the Age of AI.” From the exhibitions page,

In today’s AI-driven world, increasingly organized and shaped by algorithms that track, collect, and evaluate our data, the question of what it means to be human [emphasis mine] has shifted. Uncanny Valley is the first major exhibition to unpack this question through a lens of contemporary art and propose new ways of thinking about intelligence, nature, and artifice. [emphasis mine]

Courtesy: de Young Museum [downloaded from https://deyoung.famsf.org/exhibitions/uncanny-valley]

As you can see, it hinted (perhaps?) at an attempt to see beyond human-centric AI. (BTW, I featured this ‘Uncanny Valley’ show in my February 25, 2020 posting where I mentioned Stephanie Dinkins [featured below] and other artists.)

Social justice

While the VAG show doesn’t see much past humans and AI, it does touch on social justice. In particular there’s Pod 15 featuring the Algorithmic Justice League (AJL). The group “combine[s] art and research to illuminate the social implications and harms of AI” as per their website’s homepage.

In Pod 9, Stephanie Dinkins’ video work with a robot (Bina48), which was also part of the de Young Museum ‘Uncanny Valley’ show, addresses some of the same issues.

Still of Stephanie Dinkins, “Conversations with Bina48,” 2014–present. Courtesy of the artist [downloaded from https://deyoung.famsf.org/stephanie-dinkins-conversations-bina48-0]

From the de Young Museum’s Stephanie Dinkins “Conversations with Bina48” April 23, 2020 article by Janna Keegan (Dinkins submitted the same work you see at the VAG show). Note: Links have been removed,

Transdisciplinary artist and educator Stephanie Dinkins is concerned with fostering AI literacy. The central thesis of her social practice is that AI, the internet, and other data-based technologies disproportionately impact people of color, LGBTQ+ people, women, and disabled and economically disadvantaged communities—groups rarely given a voice in tech’s creation. Dinkins strives to forge a more equitable techno-future by generating AI that includes the voices of multiple constituencies …

The artist’s ongoing Conversations with Bina48 takes the form of a series of interactions with the social robot Bina48 (Breakthrough Intelligence via Neural Architecture, 48 exaflops per second). The machine is the brainchild of Martine Rothblatt, an entrepreneur in the field of biopharmaceuticals who, with her wife, Bina, cofounded the Terasem Movement, an organization that seeks to extend human life through cybernetic means. In 2007 Martine commissioned Hanson Robotics to create a robot whose appearance and consciousness simulate Bina’s. The robot was released in 2010, and Dinkins began her work with it in 2014.

Part psychoanalytical discourse, part Turing test, Conversations with Bina48 also participates in a larger dialogue regarding bias and representation in technology. Although Bina Rothblatt is a Black woman, Bina48 was not programmed with an understanding of its Black female identity or with knowledge of Black history. Dinkins’s work situates this omission amid the larger tech industry’s lack of diversity, drawing attention to the problems that arise when a roughly homogenous population creates technologies deployed globally. When this occurs, writes art critic Tess Thackara, “the unconscious biases of white developers proliferate on the internet, mapping our social structures and behaviors onto code and repeating imbalances and injustices that exist in the real world.” One of the most appalling and public of these instances occurred when a Google Photos image-recognition algorithm mislabeled the faces of Black people as “gorillas.”

Eeek

You will find, as you go through the ‘imitation game’, a pod with a screen showing your movements through the rooms in real time. The installation is called “Creepers” (2021-22). The student team from Vancouver’s Centre for Digital Media (CDM) describes their project this way, from the CDM’s AI-driven Installation Piece for the Vancouver Art Gallery webpage,

Project Description

Kaleidoscope [team name] is designing an installation piece that harnesses AI to collect and visualize exhibit visitor behaviours, and interactions with art, in an impactful and thought-provoking way.

There’s no warning that you’re being tracked, and you can see they’ve used facial recognition software to track your movements through the show. It’s claimed on the pod’s signage that the data is deleted once you’ve left.

‘Creepers’ is an interesting approach to the ethics of AI. The name suggests that even the student designers were aware it was problematic.

For the curious, there’s a description of the other VAG ‘imitation game’ installations provided by CDM students on the ‘Master of Digital Media Students Develop Revolutionary Installations for Vancouver Art Gallery AI Exhibition‘ webpage.

In recovery from an existential crisis (meditations)

There’s something greatly ambitious about “The Imitation Game: Visual Culture in the Age of Artificial Intelligence” and walking up the VAG’s grand staircase affirms that ambition. Bravo to the two curators, Grenville and Entis, for an exhibition that presents a survey (or overview) of artificial intelligence and its use in and impact on creative visual culture.

I’ve already enthused over the history (specifically Turing, Lovelace, Ovid), admitted to being mesmerized by Scott Eaton’s sculpture/AI videos, and confessed to a fascination (and mild repulsion) regarding Oxman’s honeycombs.

It’s hard to remember all of the ‘objects’ as the curators have offered a jumble of work, almost all of them on screens. Already noted, there’s Norbert Wiener’s The Moth (1949) and there are also a number of other computer-based artworks from the 1960s and 1970s. Plus, you’ll find works utilizing a GAN (generative adversarial network), an AI agent that is explained in the exhibit.

It’s worth going more than once to the show as there is so much to experience.

Why did they do that?

Dear friend, I’ve already commented on the poor flow through the show, and it’s hard to tell if the curators intended the experience to be disorienting, but it is, to the point of chaos, especially when the exhibition is crowded.

I’ve seen Grenville’s shows before. In particular there was “MashUp: The Birth of Modern Culture, a massive survey documenting the emergence of a mode of creativity that materialized in the late 1800s and has grown to become the dominant model of cultural production in the 21st century” and there was “KRAZY! The Delirious World of Anime + Manga + Video Games + Art.” As you can see from the description, he pulls together disparate works and ideas into a show for you to ‘make sense’ of them.

One of the differences between those shows and “The Imitation Game: …” is that most of us have some familiarity, whether we like it or not, with modern art/culture and anime/manga/etc., and can try to ‘make sense’ of it.

By contrast, artificial intelligence (which even experts have difficulty defining) occupies an entirely different set of categories; all of them associated with science/technology. This makes for a different kind of show so the curators cannot rely on the audience’s understanding of basics. It’s effectively an art/sci or art/tech show and, I believe, the first of its kind at the Vancouver Art Gallery. Unfortunately, the curators don’t seem to have changed their approach to accommodate that difference.

AI is also at the centre of a current panic over job loss, loss of personal agency, automated racism and sexism, etc. which makes the experience of viewing the show a little tense. In this context, their decision to commission and use ‘Creepers’ seems odd.

Where were Ai-Da and Dall-E-2 and the others?

Oh friend, I was hoping for a robot. Those roomba paintbots didn’t do much for me. All they did was lie there on the floor.

To be blunt, I wanted some fun and perhaps a bit of wonder and maybe a little vitality. I wasn’t necessarily expecting Ai-Da, an artistic robot, but something three-dimensional and fun in this very flat, screen-oriented show would have been nice.

Ai-Da was at the Glastonbury Festival in the UK from June 23-26, 2022. Here’s Ai-Da and her Billie Eilish (one of the Glastonbury 2022 headliners) portrait. [downloaded from https://www.ai-darobot.com/exhibition]

Ai-Da was first featured here in a December 17, 2021 posting about performing poetry that she had written in honour of the 700th anniversary of poet Dante Alighieri’s death.

Named in honour of Ada Lovelace, Ai-Da visited the 2022 Venice Biennale as Leah Henrickson and Simone Natale describe in their May 12, 2022 article for Fast Company (Note: Links have been removed),

Ai-Da sits behind a desk, paintbrush in hand. She looks up at the person posing for her, and then back down as she dabs another blob of paint onto the canvas. A lifelike portrait is taking shape. If you didn’t know a robot produced it, this portrait could pass as the work of a human artist.

Ai-Da is touted as the “first robot to paint like an artist,” and an exhibition of her work, called Leaping into the Metaverse, opened at the Venice Biennale.

Ai-Da produces portraits of sitting subjects using a robotic hand attached to her lifelike feminine figure. She’s also able to talk, giving detailed answers to questions about her artistic process and attitudes toward technology. She even gave a TEDx talk about “The Intersection of Art and AI” in Oxford a few years ago. While the words she speaks are programmed, Ai-Da’s creators have also been experimenting with having her write and perform her own poetry.

She has her own website.

If not Ai-Da, what about Dall-E-2? Aaron Hertzmann’s June 20, 2022 commentary, “Give this AI a few words of description and it produces a stunning image – but is it art?” investigates for Salon (Note: Links have been removed),

DALL-E 2 is a new neural network [AI] algorithm that creates a picture from a short phrase or sentence that you provide. The program, which was announced by the artificial intelligence research laboratory OpenAI in April 2022, hasn’t been released to the public. But a small and growing number of people – myself included – have been given access to experiment with it.

As a researcher studying the nexus of technology and art, I was keen to see how well the program worked. After hours of experimentation, it’s clear that DALL-E – while not without shortcomings – is leaps and bounds ahead of existing image generation technology. It raises immediate questions about how these technologies will change how art is made and consumed. It also raises questions about what it means to be creative when DALL-E 2 seems to automate so much of the creative process itself.

A July 4, 2022 article “DALL-E, Make Me Another Picasso, Please” by Laura Lane for The New Yorker has a rebuttal to Ada Lovelace’s contention that creativity is uniquely human (Note: A link has been removed),

“There was this belief that creativity is this deeply special, only-human thing,” Sam Altman, OpenAI’s C.E.O., explained the other day. Maybe not so true anymore, he said. Altman, who wore a gray sweater and had tousled brown hair, was videoconferencing from the company’s headquarters, in San Francisco. DALL-E is still in a testing phase. So far, OpenAI has granted access to a select group of people—researchers, artists, developers—who have used it to produce a wide array of images: photorealistic animals, bizarre mashups, punny collages. Asked by a user to generate “a plate of various alien fruits from another planet photograph,” DALL-E returned something kind of like rambutans. “The rest of mona lisa” is, according to DALL-E, mostly just one big cliff. Altman described DALL-E as “an extension of your own creativity.”

There are other AI artists; in my August 16, 2019 posting, I had this,

AI artists first hit my radar in August 2018 when Christie’s Auction House advertised an art auction of a ‘painting’ by an algorithm (artificial intelligence). There’s more in my August 31, 2018 posting but, briefly, a French art collective, Obvious, submitted a painting, “Portrait of Edmond de Belamy,” that was created by an artificial intelligence agent and expected to sell for an estimated $7,000 – $10,000. They weren’t even close. According to Ian Bogost’s March 6, 2019 article for The Atlantic, the painting sold for $432,500 in October 2018.

That posting also included AI artist, AICAN. Both artist-AI agents (Obvious and AICAN) are based on GANs (generative adversarial networks) for learning and eventual output. Both artist-AI agents work independently or with human collaborators on art works that are available for purchase.

As might be expected not everyone is excited about AI and visual art. Sonja Drimmer, Professor of Medieval Art, University of Massachusetts at Amherst, provides another perspective on AI, visual art, and, her specialty, art history in her November 1, 2021 essay for The Conversation (Note: Links have been removed),

Over the past year alone, I’ve come across articles highlighting how artificial intelligence recovered a “secret” painting of a “lost lover” of Italian painter Modigliani, “brought to life” a “hidden Picasso nude”, “resurrected” Austrian painter Gustav Klimt’s destroyed works and “restored” portions of Rembrandt’s 1642 painting “The Night Watch.” The list goes on.

As an art historian, I’ve become increasingly concerned about the coverage and circulation of these projects.

They have not, in actuality, revealed one secret or solved a single mystery.

What they have done is generate feel-good stories about AI.

Take the reports about the Modigliani and Picasso paintings.

These were projects executed by the same company, Oxia Palus, which was founded not by art historians but by doctoral students in machine learning.

In both cases, Oxia Palus relied upon traditional X-rays, X-ray fluorescence and infrared imaging that had already been carried out and published years prior – work that had revealed preliminary paintings beneath the visible layer on the artists’ canvases.

The company edited these X-rays and reconstituted them as new works of art by applying a technique called “neural style transfer.” This is a sophisticated-sounding term for a program that breaks works of art down into extremely small units, extrapolates a style from them and then promises to recreate images of other content in that same style.
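Drimmer’s summary is apt: in neural style transfer, ‘style’ is typically captured as channel-to-channel correlation statistics (Gram matrices) of a network’s feature maps, and an image is optimized until its statistics match the reference artwork’s. Here’s a minimal sketch of that style-loss computation; the random arrays below are made-up stand-ins for real CNN features, which in practice would come from a pretrained network such as VGG,

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height*width) feature map:
    channel-to-channel correlations, which capture 'style'
    independently of spatial layout."""
    c, n = features.shape
    return features @ features.T / n

def style_loss(feat_style, feat_gen):
    """Mean squared difference between Gram matrices; minimizing this
    (by gradient descent on the generated image) pushes the generated
    image toward the style image's texture statistics."""
    g_s, g_g = gram_matrix(feat_style), gram_matrix(feat_gen)
    return float(np.mean((g_s - g_g) ** 2))

# Toy stand-in for CNN feature maps: 8 channels over a 16x16 image.
rng = np.random.default_rng(0)
style_feats = rng.normal(size=(8, 256))
gen_feats = rng.normal(size=(8, 256))

print(style_loss(style_feats, gen_feats))    # positive: styles differ
print(style_loss(style_feats, style_feats))  # 0.0 for identical features
```

The point, relevant to Drimmer’s critique, is that the technique only redraws content in borrowed texture statistics; it recovers nothing that wasn’t already in the underlying X-ray images.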

As you can ‘see’ my friend, the topic of AI and visual art is a juicy one. In fact, I have another example in my June 27, 2022 posting, which is titled, “Art appraised by algorithm.” So, Grenville’s and Entis’ decision to focus on AI and its impact on visual culture is quite timely.

Visual culture: seeing into the future

The VAG Imitation Game webpage lists these categories of visual culture “animation, architecture, art, fashion, graphic design, urban design and video games …” as being represented in the show. Movies and visual art, though not mentioned in the write-up, are represented, while theatre and the other performing arts are neither mentioned nor represented. That’s not a surprise.

In addition to an area of science/technology that’s not well understood even by experts, the curators took on the truly amorphous (and overwhelming) topic of visual culture. Given that even writing this commentary has been a challenge, I imagine pulling the show together was quite the task.

Grenville often grounds his shows in a history of the subject and, this time, it seems especially striking. You’re in a building that is effectively a 19th century construct and in galleries that reflect a 20th century ‘white cube’ aesthetic, while looking for clues into the 21st century future of visual culture employing technology that has its roots in the 19th century and, to some extent, began to flower in the mid-20th century.

Chung’s collaboration is one of the only ‘optimistic’ notes about the future and, as noted earlier, it bears a resemblance to Wiener’s 1949 ‘Moth’.

Overall, it seems we are being cautioned about the future. For example, Oxman’s work seems bleak (bees with no flowers to pollinate and living in an eternal spring). Adding in ‘Creepers’ and surveillance along with issues of bias and social injustice reflects hesitation and concern about what we will see, who sees it, and how it will be represented visually.

Learning about robots, automatons, artificial intelligence, and more

I wish the Vancouver Art Gallery (and Vancouver’s other art galleries) would invest a little more in audience education. A couple of tours during the week, by someone who may or may not know what they’re talking about, do not suffice. The extra material about Stephanie Dinkins and her work (“Conversations with Bina48,” 2014–present) came from the de Young Museum’s website. In my July 26, 2021 commentary on North Vancouver’s Polygon Gallery 2021 show “Interior Infinite,” I found background information for artist Zanele Muholi on the Tate Modern’s website. There is nothing on the VAG website that helps you to gain some perspective on the artists’ works.

It seems to me that if the VAG wants to be considered world class, it should conduct itself accordingly; beefing up its website with background information about its current shows would be a good place to start.

Robots, automata, and artificial intelligence

Prior to 1921, robots were known exclusively as automatons. These days, the word ‘automaton’ (or ‘automata’ in the plural) seems to be used to describe the purely mechanical representations of humans from over 100 years ago, whereas the word ‘robot’ can be either ‘humanlike’ or purely machine, e.g., a mechanical arm that performs the same function over and over. There’s a good February 24, 2017 essay on automatons by Miguel Barral for OpenMind BBVA*, which provides some insight into the matter,

The concept of robot is relatively recent. The idea was introduced in 1921 by the Czech writer Karel Capek in his work R.U.R to designate a machine that performs tasks in place of man. But their predecessors, the automatons (from the Greek automata, or “mechanical device that works by itself”), have been the object of desire and fascination since antiquity. Some of the greatest inventors in history, such as Leonardo Da Vinci, have contributed to our fascination with these fabulous creations:

The Al-Jazari automatons

The earliest examples of known automatons appeared in the Islamic world in the 12th and 13th centuries. In 1206, the Arab polymath Al-Jazari, whose creations were known for their sophistication, described some of his most notable automatons: an automatic wine dispenser, a soap and towels dispenser and an orchestra-automaton that operated by the force of water. This latter invention was meant to liven up parties and banquets with music while floating on a pond, lake or fountain.

As the water flowed, it started a rotating drum with pegs that, in turn, moved levers whose movement produced different sounds and movements. As the pegs responsible for the musical notes could be exchanged for different ones in order to interpret another melody, it is considered one of the first programmable machines in history.
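The exchangeable pegs are what make the drum ‘programmable’ in the modern sense: the peg layout is the program, and the levers are the instrument. A minimal sketch (the note names and peg patterns below are my own invention, purely illustrative):

```python
def play(pegs, notes):
    """As the drum rotates, each position holding a peg trips a lever
    and sounds that lever's note; empty positions stay silent."""
    return [notes[i % len(notes)] for i, peg in enumerate(pegs) if peg]

notes = ["do", "re", "mi", "fa"]
melody_one = play([1, 0, 1, 1], notes)  # → ['do', 'mi', 'fa']
# Exchanging pegs (re-programming the drum) yields a different melody.
melody_two = play([0, 1, 0, 1], notes)  # → ['re', 'fa']
```

Change the peg list and the same machine plays a different tune, which is exactly what earns Al-Jazari’s orchestra its claim as an early programmable machine.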

If you’re curious about automata, my friend, I found this Sept. 26, 2016 ABC News Radio item about singer Roger Daltrey and his wife Heather’s auction of their collection of 19th century French automata (there’s an embedded video showcasing these extraordinary works of art). For more about automata, robots, and androids, there’s an excellent May 4, 2022 article by James Vincent, ‘A visit to the human factory; How to build the world’s most realistic robot‘ for The Verge; Vincent’s article is about Engineered Arts, the UK-based company that built Ai-Da.

AI is often used interchangeably with ‘robot’ but they aren’t the same. Not all robots have AI integrated into their processes. At its simplest, AI is an algorithm or set of algorithms, which may ‘live’ in a CPU and be effectively invisible, or ‘live’ in or make use of some kind of machine and/or humanlike body. As the experts have noted, artificial intelligence is a slippery concept.

*OpenMind BBVA is a Spanish multinational financial services company, Banco Bilbao Vizcaya Argentaria (BBVA), which runs the non-profit project, OpenMind (About us page) to disseminate information on robotics and so much more.*

You can’t always get what you want

My friend,

I expect many of the show’s shortcomings (as perceived by me) are due to money and/or scheduling issues. For example, Ai-Da was at the Venice Biennale and if there was a choice between the VAG and Biennale, I know where I’d be.

Even with those caveats in mind, it is a bit surprising that there were no examples of wearable technology. For example, Toronto’s Tapestry Opera recently performed R.U.R. A Torrent of Light (based on the word ‘robot’ from Karel Čapek’s play, R.U.R., ‘Rossumovi Univerzální Roboti’), from my May 24, 2022 posting,

I have more about tickets prices, dates, and location later in this post but first, here’s more about the opera and the people who’ve created it from the Tapestry Opera’s ‘R.U.R. A Torrent of Light’ performance webpage,

“This stunning new opera combines dance, beautiful multimedia design, a chamber orchestra including 100 instruments creating a unique electronica-classical sound, and wearable technology [emphasis mine] created with OCAD University’s Social Body Lab, to create an immersive and unforgettable science-fiction experience.”

And, from later in my posting,

“Despite current stereotypes, opera was historically a launchpad for all kinds of applied design technologies. [emphasis mine] Having the opportunity to collaborate with OCAD U faculty is an invigorating way to reconnect to that tradition and foster connections between art, music and design, [emphasis mine]” comments the production’s Director Michael Hidetoshi Mori, who is also Tapestry Opera’s Artistic Director. 

That last quote brings me back to my comment about theatre and the performing arts not being part of the show. Of course, the curators couldn’t do it all, but a website with my hoped-for background and additional information could have helped to solve the problem.

The absence of the theatrical and performing arts in the VAG’s ‘Imitation Game’ is a bit surprising as the Council of Canadian Academies (CCA) in their third assessment, “Competing in a Global Innovation Economy: The Current State of R&D in Canada” released in 2018 noted this (from my April 12, 2018 posting),

Canada, relative to the world, specializes in subjects generally referred to as the humanities and social sciences (plus health and the environment), and does not specialize as much as others in areas traditionally referred to as the physical sciences and engineering. Specifically, Canada has comparatively high levels of research output in Psychology and Cognitive Sciences, Public Health and Health Services, Philosophy and Theology, Earth and Environmental Sciences, and Visual and Performing Arts. [emphasis mine] It accounts for more than 5% of world research in these fields. Conversely, Canada has lower research output than expected in Chemistry, Physics and Astronomy, Enabling and Strategic Technologies, Engineering, and Mathematics and Statistics. The comparatively low research output in core areas of the natural sciences and engineering is concerning, and could impair the flexibility of Canada’s research base, preventing research institutions and researchers from being able to pivot to tomorrow’s emerging research areas. [p. xix Print; p. 21 PDF]

US-centric

My friend,

I was a little surprised that the show was so centered on work from the US given that Grenville has curated at least one show with significant input from artists based in Asia. Both Japan and Korea are very active with regard to artificial intelligence and it’s hard to believe that their artists haven’t kept pace. (I’m not as familiar with China and its AI efforts, other than in the field of facial recognition, but it’s hard to believe their artists aren’t experimenting.)

The Americans, of course, are very important developers in the field of AI, but they are not alone, and it would have been nice to have seen something from Asia, Africa, or one of the other Americas; in fact, anything that takes us out of the same old, same old. (Luba Elliott wrote an essay (2019/2020/2021?), “Artificial Intelligence Art from Africa and Black Communities Worldwide,” on Aya Data if you want to get a sense of some of the activity on the African continent. Elliott does seem to conflate Africa and Black communities; for some clarity, you may want to check out the Wikipedia entry on Africanfuturism, which contrasts with this August 12, 2020 essay by Donald Maloba, “What is Afrofuturism? A Beginner’s Guide.” Maloba also conflates the two.)

As it turns out, Luba Elliott presented at the 2019 Montréal Digital Spring event, which brings me to Canada’s artificial intelligence and arts scene.

I promise I haven’t turned into a flag-waving zealot, my friend. It’s just odd there isn’t a bit more given that machine learning was pioneered at the University of Toronto. Here’s more about that (from the Wikipedia entry for Geoffrey Hinton),

Geoffrey Everest Hinton CC FRS FRSC[11] (born 6 December 1947) is a British-Canadian cognitive psychologist and computer scientist, most noted for his work on artificial neural networks.

Hinton received the 2018 Turing Award, together with Yoshua Bengio [Canadian scientist] and Yann LeCun, for their work on deep learning.[24] They are sometimes referred to as the “Godfathers of AI” and “Godfathers of Deep Learning“,[25][26] and have continued to give public talks together.[27][28]

Some of Hinton’s work was started in the US but since 1987, he has pursued his interests at the University of Toronto. He wasn’t proven right until 2012. Katrina Onstad’s February 29, 2018 article (Mr. Robot) for Toronto Life is a gripping read about Hinton and his work on neural networks. BTW, Yoshua Bengio (co-Godfather) is a Canadian scientist at the Université de Montréal and Yann LeCun (co-Godfather) is a French scientist at New York University.

Then there’s another contribution: our government was the first in the world to develop a national artificial intelligence strategy. Adding those developments to the CCA ‘State of Science’ report findings about the visual and performing arts, is there another word besides ‘odd’ to describe the lack of Canadian voices?

You’re going to point out the installation by Ben Bogart (a member of Simon Fraser University’s Metacreation Lab for Creative AI and an instructor at the Emily Carr University of Art + Design [ECU]) but it’s based on the iconic US sci-fi film, 2001: A Space Odyssey. As for the other Canadian, Sougwen Chung, she left Canada pretty quickly to get her undergraduate degree in the US and has since moved to the UK. (You could describe hers as the quintessential success story, i.e., moving from Canada only to get noticed here after success elsewhere.)

Of course, there are the CDM student projects but the projects seem less like an exploration of visual culture than an exploration of technology and industry requirements, from the ‘Master of Digital Media Students Develop Revolutionary Installations for Vancouver Art Gallery AI Exhibition‘ webpage, Note: A link has been removed,

In 2019, Bruce Grenville, Senior Curator at Vancouver Art Gallery, approached [the] Centre for Digital Media to collaborate on several industry projects for the forthcoming exhibition. Four student teams tackled the project briefs over the course of the next two years and produced award-winning installations that are on display until October 23 [2022].

Basically, my friend, it would have been nice to see other voices or, at the least, an attempt at representing other voices and visual cultures informed by AI. As for Canadian contributions, maybe put something on the VAG website?

Playing well with others

It’s always a mystery to me why the Vancouver cultural scene seems to consist of a set of silos, or closely guarded kingdoms. Reaching out to the public library and other institutions, such as Science World, might have cost time but could have enhanced the show.

For example, one of the branches of the New York Public Library ran a programme called, “We are AI” in March 2022 (see my March 23, 2022 posting about the five-week course, which was run as a learning circle). The course materials are available for free (We are AI webpage) and I imagine that adding a ‘visual culture module’ wouldn’t be that difficult.

There is one (rare) example of some Vancouver cultural institutions getting together to offer an art/science programme: in 2017, the Morris and Helen Belkin Gallery (at the University of British Columbia; UBC) hosted an exhibition of Santiago Ramon y Cajal’s work (see my Sept. 11, 2017 posting about the gallery show). Alongside that show, the folks at Café Scientifique held an ancillary event at Science World featuring a panel of professionals from UBC’s Faculty of Medicine and Dept. of Psychology discussing Cajal’s work.

In fact, where were the science and technology communities for this show?

On a related note, the 2022 ACM SIGGRAPH conference (August 7 – 11, 2022) is being held in Vancouver. (ACM is the Association for Computing Machinery; SIGGRAPH is for Special Interest Group on Computer Graphics and Interactive Techniques.) SIGGRAPH has been holding conferences in Vancouver every few years since at least 2011.

At this year’s conference, they have at least two sessions that indicate interests similar to the VAG’s. First, there’s Immersive Visualization for Research, Science and Art which includes AI and machine learning along with other related topics. There’s also, Frontiers Talk: Art in the Age of AI: Can Computers Create Art?

This is both an international conference and an exhibition (of art) and the whole thing seems to have kicked off on July 25, 2022. If you’re interested, the programme can be found here and registration here.

Last time SIGGRAPH was here the organizers seemed interested in outreach and they offered some free events.

In the end

It was good to see the show. The curators brought together some exciting material. As is always the case, there were some missed opportunities and a few blind spots. But all is not lost.

On July 27, 2022, the VAG held a virtual event with an artist,

Gwenyth Chao to learn more about what happened to the honeybees and hives in Oxman’s Synthetic Apiary project. As a transdisciplinary artist herself, Chao will also discuss the relationship between art, science, technology and design. She will then guide participants to create a space (of any scale, from insect to human) inspired by patterns found in nature.

Hopefully there will be more events inspired by specific ‘objects’. Meanwhile, on August 12, 2022, the VAG is hosting,

… in partnership with the Canadian Music Centre BC, New Music at the Gallery is a live concert series hosted by the Vancouver Art Gallery that features an array of musicians and composers who draw on contemporary art themes.

Highlighting a selection of twentieth- and twenty-first-century music compositions, this second concert, inspired by the exhibition The Imitation Game: Visual Culture in the Age of Artificial Intelligence, will spotlight The Iliac Suite (1957), the first piece ever written using only a computer, and Kaija Saariaho’s Terra Memoria (2006), which is in a large part dependent on a computer-generated musical process.

It would be lovely if they could include an Ada Lovelace Day event. This is an international celebration held on October 11, 2022.

Do go. Do enjoy, my friend.

Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (1 of 2): The Objects

To my imaginary AI friend

Dear friend,

I thought you might be amused by these Roomba-like* paintbots at the Vancouver Art Gallery’s (VAG) latest exhibition, “The Imitation Game: Visual Culture in the Age of Artificial Intelligence” (March 5, 2022 – October 23, 2022).

Sougwen Chung, Omnia per Omnia, 2018, video (excerpt), Courtesy of the Artist

*A Roomba is a robot vacuum cleaner produced and sold by iRobot.

As far as I know, this is the Vancouver Art Gallery’s first art/science or art/technology exhibit and it is an alternately fascinating, exciting, and frustrating take on artificial intelligence and its impact on the visual arts. Curated by Bruce Grenville, VAG Senior Curator, and Glenn Entis, Guest Curator, the show features 20 ‘objects’ designed to both introduce viewers to the ‘imitation game’ and to challenge them. From the VAG Imitation Game webpage,

The Imitation Game surveys the extraordinary uses (and abuses) of artificial intelligence (AI) in the production of modern and contemporary visual culture around the world. The exhibition follows a chronological narrative that first examines the development of artificial intelligence, from the 1950s to the present [emphasis mine], through a precise historical lens. Building on this foundation, it emphasizes the explosive growth of AI across disciplines, including animation, architecture, art, fashion, graphic design, urban design and video games, over the past decade. Revolving around the important roles of machine learning and computer vision in AI research and experimentation, The Imitation Game reveals the complex nature of this new tool and demonstrates its importance for cultural production.

And now …

As you’ve probably guessed, my friend, you’ll find a combination of both background information and commentary on the show.

I’ve initially focused on two people (a scientist and a mathematician) who were seminal thinkers about machines, intelligence, creativity, and humanity. I’ve also provided some information about the curators, which hopefully gives you some insight into the show.

As for the show itself, you’ll find a few of the ‘objects’ highlighted with one of them being investigated at more length. The curators devoted some of the show to ethical and social justice issues, accordingly, the Vancouver Art Gallery hosted the University of British Columbia’s “Speculative Futures: Artificial Intelligence Symposium” on April 7, 2022,

Presented in conjunction with the exhibition The Imitation Game: Visual Culture in the Age of Artificial Intelligence, the Speculative Futures Symposium examines artificial intelligence and the specific uses of technology in its multifarious dimensions. Across four different panel conversations, leading thinkers of today will explore the ethical implications of technology and discuss how they are working to address these issues in cultural production.”

So, you’ll find more on these topics here too.

And for anyone else reading this (not you, my friend who is ‘strong’ AI and not similar to the ‘weak’ AI found in this show), there is a description of ‘weak’ and ‘strong’ AI on the avtsim.com/weak-ai-strong-ai webpage, Note: A link has been removed,

There are two types of AI: weak AI and strong AI.

Weak, sometimes called narrow, AI is less intelligent as it cannot work without human interaction and focuses on a more narrow, specific, or niched purpose. …

Strong AI on the other hand is in fact comparable to the fictitious AIs we see in media like the terminator. The theoretical Strong AI would be equivalent or greater to human intelligence.

….

My dear friend, I hope you will enjoy.

The Imitation Game and ‘mad, bad, and dangerous to know’

In some circles, it’s better known as ‘The Turing Test’; the Vancouver Art Gallery’s ‘Imitation Game’ hosts a copy of Alan Turing’s foundational paper for establishing whether artificial intelligence is possible (I thought this was pretty exciting).

Here’s more from The Turing Test essay by Graham Oppy and David Dowe for the Stanford Encyclopedia of Philosophy,

The phrase “The Turing Test” is most properly used to refer to a proposal made by Turing (1950) as a way of dealing with the question whether machines can think. According to Turing, the question whether machines can think is itself “too meaningless” to deserve discussion (442). However, if we consider the more precise—and somehow related—question whether a digital computer can do well in a certain kind of game that Turing describes (“The Imitation Game”), then—at least in Turing’s eyes—we do have a question that admits of precise discussion. Moreover, as we shall see, Turing himself thought that it would not be too long before we did have digital computers that could “do well” in the Imitation Game.

The phrase “The Turing Test” is sometimes used more generally to refer to some kinds of behavioural tests for the presence of mind, or thought, or intelligence in putatively minded entities. …
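The structure of the game itself is simple enough to sketch. This toy version (the questions and canned answers are invented for illustration) only shows the shape of the protocol: the interrogator judges transcripts alone, so a machine ‘does well’ when its transcript is indistinguishable from the human’s.

```python
def imitation_game(respond_a, respond_b, questions):
    """The interrogator poses the same questions to two hidden players
    (one human, one machine) and judges only the transcripts, never
    the players themselves."""
    return {label: [respond(q) for q in questions]
            for label, respond in (("A", respond_a), ("B", respond_b))}

questions = ["Are you human?", "What is 2 + 2?"]
answers = {"Are you human?": "Of course.", "What is 2 + 2?": "4"}
human = lambda q: answers.get(q, "Hmm.")
machine = human  # a perfect imitator answers exactly as the human would
transcript = imitation_game(human, machine, questions)
indistinguishable = transcript["A"] == transcript["B"]
```

The interesting arguments, of course, are about whether indistinguishable behaviour amounts to thinking, which is exactly the question Turing set aside as “too meaningless.”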

Next to the display holding Turing’s paper is another display excerpting Turing’s explanation of how he believed Ada Lovelace would have responded to the idea that machines could think, based on a copy of some of her writing (also on display). She proposed that creativity, not thinking, is what sets people apart from machines. (See the April 17, 2020 article ‘Thinking Machines? Has the Lovelace Test Been Passed?’ on mindmatters.ai.)

It’s like a dialogue between two seminal thinkers who lived about 100 years apart: Lovelace, born in 1815 and dead in 1852, and Turing, born in 1912 and dead in 1954. Both have fascinating back stories (more about those later) and both played roles in how computers and artificial intelligence are viewed.

Adding some interest to this walk down memory lane is a third display, an illustration of the ‘Mechanical Turk‘, a chess-playing machine that made the rounds in Europe from 1770 until it was destroyed in 1854. A hoax that fooled people for quite a while, it is a reminder that we’ve been interested in intelligent machines for centuries. (Friend, Turing, Lovelace, and the Mechanical Turk are found in Pod 1.)

Back story: Turing and the apple

Turing is credited with being instrumental in breaking the German Enigma code during World War II and helping to end the war. I find it odd that he ended up at the University of Manchester in the post-war years; one would expect him to have been at Oxford or Cambridge. At any rate, he died in 1954 of cyanide poisoning, two years after he was arrested for being homosexual and convicted of indecency. Given the choice of incarceration or chemical castration, he chose the latter. There is, to this day, debate about whether or not it was suicide. Here’s how his death is described in his Wikipedia entry (Note: Links have been removed),

On 8 June 1954, at his house at 43 Adlington Road, Wilmslow,[150] Turing’s housekeeper found him dead. He had died the previous day at the age of 41. Cyanide poisoning was established as the cause of death.[151] When his body was discovered, an apple lay half-eaten beside his bed, and although the apple was not tested for cyanide,[152] it was speculated that this was the means by which Turing had consumed a fatal dose. An inquest determined that he had committed suicide. Andrew Hodges and another biographer, David Leavitt, have both speculated that Turing was re-enacting a scene from the Walt Disney film Snow White and the Seven Dwarfs (1937), his favourite fairy tale. Both men noted that (in Leavitt’s words) he took “an especially keen pleasure in the scene where the Wicked Queen immerses her apple in the poisonous brew”.[153] Turing’s remains were cremated at Woking Crematorium on 12 June 1954,[154] and his ashes were scattered in the gardens of the crematorium, just as his father’s had been.[155]

Philosopher Jack Copeland has questioned various aspects of the coroner’s historical verdict. He suggested an alternative explanation for the cause of Turing’s death: the accidental inhalation of cyanide fumes from an apparatus used to electroplate gold onto spoons. The potassium cyanide was used to dissolve the gold. Turing had such an apparatus set up in his tiny spare room. Copeland noted that the autopsy findings were more consistent with inhalation than with ingestion of the poison. Turing also habitually ate an apple before going to bed, and it was not unusual for the apple to be discarded half-eaten.[156] Furthermore, Turing had reportedly borne his legal setbacks and hormone treatment (which had been discontinued a year previously) “with good humour” and had shown no sign of despondency prior to his death. He even set down a list of tasks that he intended to complete upon returning to his office after the holiday weekend.[156] Turing’s mother believed that the ingestion was accidental, resulting from her son’s careless storage of laboratory chemicals.[157] Biographer Andrew Hodges theorised that Turing arranged the delivery of the equipment to deliberately allow his mother plausible deniability with regard to any suicide claims.[158]

The US Central Intelligence Agency (CIA) also has an entry for Alan Turing dated April 10, 2015; it’s titled “The Enigma of Alan Turing.”

Back story: Ada Byron Lovelace, the 2nd generation of ‘mad, bad, and dangerous to know’

A mathematician and genius in her own right, Ada Lovelace’s father George Gordon Byron, better known as the poet Lord Byron, was notoriously described as ‘mad, bad, and dangerous to know’.

Lovelace, too, could have been ‘mad, bad, …’ but she is described less memorably as “… manipulative and aggressive, a drug addict, a gambler and an adulteress, …” as mentioned in my October 13, 2015 posting. It marked the 200th anniversary of her birth, which was celebrated with a British Broadcasting Corporation (BBC) documentary and an exhibit at the Science Museum in London, UK.

She belongs in the Vancouver Art Gallery’s show along with Alan Turing due to her prediction that computers could be made to create music. She also published the first computer program. Her feat is astonishing when you know that only one working model (1/7th of the proposed final size) of a computer was ever produced. (The machine invented by Charles Babbage was known as a difference engine. You can find out more about the Difference engine on Wikipedia and about Babbage’s proposed second invention, the Analytical engine.)

(Byron had almost nothing to do with his daughter although his reputation seems to have dogged her. You can find out more about Lord Byron here.)

AI and visual culture at the VAG: the curators

As mentioned earlier, the VAG’s “The Imitation Game: Visual Culture in the Age of Artificial Intelligence” show runs from March 5, 2022 – October 23, 2022. Twice now, I have been to this weirdly exciting and frustrating show.

Bruce Grenville, VAG Chief/Senior Curator, seems to specialize in pulling together diverse materials to illustrate ‘big’ topics. His profile for Emily Carr University of Art + Design (where Grenville teaches) mentions these shows,

… He has organized many thematic group exhibitions including, MashUp: The Birth of Modern Culture [emphasis mine], a massive survey documenting the emergence of a mode of creativity that materialized in the late 1800s and has grown to become the dominant model of cultural production in the 21st century; KRAZY! The Delirious World [emphasis mine] of Anime + Manga + Video Games + Art, a timely and important survey of modern and contemporary visual culture from around the world; Home and Away: Crossing Cultures on the Pacific Rim [emphasis mine] a look at the work of six artists from Vancouver, Beijing, Ho Chi Minh City, Seoul and Los Angeles, who share a history of emigration and diaspora. …

Glenn Entis, Guest Curator and founding faculty member of Vancouver’s Centre for Digital Media (CDM) is Grenville’s co-curator, from Entis’ CDM profile,

“… an Academy Award-winning animation pioneer and games industry veteran. The former CEO of Dreamworks Interactive, Glenn worked with Steven Spielberg and Jeffrey Katzenberg on a number of video games …,”

Steve Newton in his March 4, 2022 preview does a good job of describing the show although I strongly disagree with the title of his article which proclaims “The Vancouver Art Gallery takes a deep dive into artificial intelligence with The Imitation Game.” I think it’s more of a shallow dive meant to cover more distance than depth,

… The exhibition kicks off with an interactive introduction inviting visitors to actively identify diverse areas of cultural production influenced by AI.

“That was actually one of the pieces that we produced in collaboration with the Centre for Digital Media,” Grenville notes, “so we worked with some graduate-student teams that had actually helped us to design that software. It was the beginning of COVID when we started to design this, so we actually wanted a no-touch interactive. So, really, the idea was to say, ‘Okay, this is the very entrance to the exhibition, and artificial intelligence, this is something I’ve heard about, but I’m not really sure how it’s utilized in ways. But maybe I know something about architecture; maybe I know something about video games; maybe I know something about the history of film.

“So you point to these 10 categories of visual culture [emphasis mine]–video games, architecture, fashion design, graphic design, industrial design, urban design–so you point to one of those, and you might point to ‘film’, and then when you point at it that opens up into five different examples of what’s in the show, so it could be 2001: A Space Odyssey, or Bladerunner, or World on a Wire.”

After the exhibition’s introduction—which Grenville equates to “opening the door to your curiosity” about artificial intelligence–visitors encounter one of its main categories, Objects of Wonder, which speaks to the history of AI and the critical advances the technology has made over the years.

“So there are 20 Objects of Wonder [emphasis mine],” Grenville says, “which go from 1949 to 2022, and they kind of plot out the history of artificial intelligence over that period of time, focusing on a specific object. Like [mathematician and philosopher] Norbert Wiener made this cybernetic creature, he called it a ‘Moth’, in 1949. So there’s a section that looks at this idea of kind of using animals–well, machine animals–and thinking about cybernetics, this idea of communication as feedback, early thinking around neuroscience and how neuroscience starts to imagine this idea of a thinking machine.

And there’s this from Newton’s March 4, 2022 preview,

“It’s interesting,” Grenville ponders, “artificial intelligence is virtually unregulated. [emphasis mine] You know, if you think about the regulatory bodies that govern TV or radio or all the types of telecommunications, there’s no equivalent for artificial intelligence, which really doesn’t make any sense. And so what happens is, sometimes with the best intentions [emphasis mine]—sometimes not with the best intentions—choices are made about how artificial intelligence develops. So one of the big ones is facial-recognition software [emphasis mine], and any body-detection software that’s being utilized.

In addition to it being the best overview of the show I’ve seen so far, this is the only one where you get a little insight into what the curators were thinking when they were developing it.

A deep dive into AI?

It was only while searching for a little information before the show that I realized I don't have any definitions for artificial intelligence! What is AI? Sadly, there are no definitions of AI in the exhibit.

It seems even experts don’t have a good definition. Take a look at this,

The definition of AI is fluid [emphasis mine] and reflects a constantly shifting landscape marked by technological advancements and growing areas of application. Indeed, it has frequently been observed that once AI becomes capable of solving a particular problem or accomplishing a certain task, it is often no longer considered to be “real” intelligence [emphasis mine] (Haenlein & Kaplan, 2019). A firm definition was not applied for this report [emphasis mine], given the variety of implementations described above. However, for the purposes of deliberation, the Panel chose to interpret AI as a collection of statistical and software techniques, as well as the associated data and the social context in which they evolve — this allows for a broader and more inclusive interpretation of AI technologies and forms of agency. The Panel uses the term AI interchangeably to describe various implementations of machine-assisted design and discovery, including those based on machine learning, deep learning, and reinforcement learning, except for specific examples where the choice of implementation is salient. [p. 6 print version; p. 34 PDF version]

The above is from the Leaps and Boundaries report released May 10, 2022 by the Council of Canadian Academies’ Expert Panel on Artificial Intelligence for Science and Engineering.

Sometimes a show will take you in an unexpected direction. I feel a lot better ‘not knowing’. Still, I wish the curators had acknowledged somewhere in the show that artificial intelligence is a slippery concept. Especially when you add in robots and automatons. (more about them later)

21st century technology in a 19th/20th century building

Void stairs inside the building. Completed in 1906, the building was later designated as a National Historic Site in 1980 [downloaded from https://en.wikipedia.org/wiki/Vancouver_Art_Gallery#cite_note-canen-7]

Just barely making it into the 20th century, the building where the Vancouver Art Gallery currently resides was for many years the provincial courthouse (1911 – 1978). In some ways, it’s a disconcerting setting for this show.

They’ve done their best to make the upstairs where the exhibit is displayed look like today’s galleries with their ‘white cube aesthetic’ and strong resemblance to the scientific laboratories seen in movies.

(For more about the dominance, since the 1930s, of the ‘white cube aesthetic’ in art galleries around the world, see my July 26, 2021 posting; scroll down about 50% of the way.)

It makes for an interesting tension, the contrast between the grand staircase, the cupola, and other architectural elements and the sterile, ‘laboratory’ environment of the modern art gallery.

20 Objects of Wonder and the flow of the show

It was flummoxing. Where are the 20 objects? Why does it feel like a maze in a laboratory? Loved the bees, but why? Eeeek Creepers! What is visual culture anyway? Where am I?

The objects of the show

It turns out that the curators have a more refined concept of ‘object’ than I do. There weren’t 20 material objects; there were 20 numbered ‘pods’, each with perhaps a screen or two, or a screen and a material object or two, illustrating the pod’s topic.

Looking up a definition for the word (accessed from a June 9, 2022 duckduckgo.com search) yielded this (the second one seems à propos),

object ŏb′jĭkt, -jĕkt″

noun

1. Something perceptible by one or more of the senses, especially by vision or touch; a material thing.

2. A focus of attention, feeling, thought, or action.

3. A limiting factor that must be considered.

The American Heritage® Dictionary of the English Language, 5th Edition.

Each pod = a focus of attention.

The show’s flow is a maze. Am I a rat?

The pods are defined by a number and by temporary walls. So if you look up, you’ll see a number and a space partly enclosed by a temporary wall or two.

It’s a very choppy experience. For example, one minute you can be in pod 1 and, when you turn the corner, you’re in pod 4 or 5 or ? There are pods I’ve not seen, despite my two visits, because I kept losing my way. This led to an existential crisis on my second visit. “Had I missed the greater meaning of this show? Was there some sort of logic to how it was organized? Was there meaning to my life? Was I a rat being nudged around in a maze?” I didn’t know.

Thankfully, I have since recovered. But, I will return to my existential crisis later, with a special mention for “Creepers.”

The fascinating

My friend, you know I appreciated the history. In addition to Alan Turing, Ada Lovelace, and the Mechanical Turk at the beginning of the show, the curators included a reference to Ovid (or Pūblius Ovidius Nāsō), a Roman poet who lived from 43 BCE to 17/18 CE, in one of the double-digit pods (17? or 10? or …) featuring a robot on screen. As to why Ovid might be included, this excerpt from a February 12, 2018 posting on the cosmolocal.org website provides a clue (Note: Links have been removed),

The University of King’s College [Halifax, Nova Scotia] presents Automatons! From Ovid to AI, a nine-lecture series examining the history, issues and relationships between humans, robots, and artificial intelligence [emphasis mine]. The series runs from January 10 to April 4 [2018], and features leading scholars, performers and critics from Canada, the US and Britain.

“Drawing from theatre, literature, art, science and philosophy, our 2018 King’s College Lecture Series features leading international authorities exploring our intimate relationships with machines,” says Dr. Gordon McOuat, professor in the King’s History of Science and Technology (HOST) and Contemporary Studies Programs.

“From the myths of Ovid [emphasis mine] and the automatons [emphasis mine] of the early modern period to the rise of robots, cyborgs, AI and artificial living things in the modern world, the 2018 King’s College Lecture Series examines the historical, cultural, scientific and philosophical place of automatons in our lives—and our future,” adds McOuat.

I loved the way the curators managed to integrate the historical roots of artificial intelligence and, by extension, the world of automatons, robots, cyborgs, and androids. Yes, starting the show with Alan Turing and Ada Lovelace could be expected, but Norbert Wiener’s Moth (1949) acts as a sort of preview for Sougwen Chung’s “Omnia per Omnia, 2018” (GIF seen at the beginning of this post). Take a look for yourself (from the cyberneticzoo.com September 19, 2009 posting by cyberne1). Do you see the similarity or am I the only one?

[sourced from Google images, Source: Life & downloaded from https://cyberneticzoo.com/cyberneticanimals/1949-wieners-moth-wiener-wiesner-singleton/]

Sculpture

This is the first time I’ve come across an AI/sculpture project. The VAG show features Scott Eaton’s sculptures on screens in a room devoted to his work.

Scott Eaton: Entangled II, 2019 4k video (still) Courtesy of the Artist [downloaded from https://www.vanartgallery.bc.ca/exhibitions/the-imitation-game]

This looks like an image of a piece of ginger root, and it’s fascinating to watch the process as the AI agent ‘evolves’ Eaton’s drawings into onscreen sculptures. It would have enhanced the experience if at least one of Eaton’s ‘evolved’ and physically realized sculptures had been present in the room, but perhaps there were financial and/or logistical reasons for the absence.

Both Chung and Eaton are collaborating with an AI agent. In Chung’s case the AI is integrated into the paintbots with which she interacts and paints alongside and in Eaton’s case, it’s via a computer screen. In both cases, the work is mildly hypnotizing in a way that reminds me of lava lamps.

One last note about Chung and her work. She was one of the artists invited to present new work at an invite-only April 22, 2022 Embodied Futures workshop at the “What will life become?” event held by the Berggruen Institute and the University of Southern California (USC),

Embodied Futures invites participants to imagine novel forms of life, mind, and being through artistic and intellectual provocations on April 22 [2022].

Beginning at 1 p.m., together we will experience the launch of five artworks commissioned by the Berggruen Institute. We asked these artists: How does your work inflect how we think about “the human” in relation to alternative “embodiments” such as machines, AIs, plants, animals, the planet, and possible alien life forms in the cosmos? [emphases mine]  Later in the afternoon, we will take provocations generated by the morning’s panels and the art premieres in small breakout groups that will sketch futures worlds, and lively entities that might dwell there, in 2049.

This leads to (and my friend, while I too am taking a shallow dive, for this bit I’m going a little deeper):

Bees and architecture

Neri Oxman’s contribution (Golden Bee Cube, Synthetic Apiary II [2020]) is an exhibit of three honeycomb structures and a video of the bees in her synthetic apiary.

Neri Oxman and the MIT Mediated Matter Group, Golden Bee Cube, Synthetic Apiary II, 2020, beeswax, acrylic, gold particles, gold powder Courtesy of Neri Oxman and the MIT Mediated Matter Group

Neri Oxman (then a faculty member of the Mediated Matter Group at the Massachusetts Institute of Technology) described the basis for the first and all later iterations of her synthetic apiary in Patrick Lynch’s October 5, 2016 article for ArchDaily (Broadcasting Architecture Worldwide) (Note: Links have been removed),

Designer and architect Neri Oxman and the Mediated Matter group have announced their latest design project: the Synthetic Apiary. Aimed at combating the massive bee colony losses that have occurred in recent years, the Synthetic Apiary explores the possibility of constructing controlled, indoor environments that would allow honeybee populations to thrive year-round.

“It is time that the inclusion of apiaries—natural or synthetic—for this “keystone species” be considered a basic requirement of any sustainability program,” says Oxman.

In developing the Synthetic Apiary, Mediated Matter studied the habits and needs of honeybees, determining the precise amounts of light, humidity and temperature required to simulate a perpetual spring environment. [emphasis mine] They then engineered an undisturbed space where bees are provided with synthetic pollen and sugared water and could be evaluated regularly for health.

In the initial experiment, the honeybees’ natural cycle proved to adapt to the new environment, as the Queen was able to successfully lay eggs in the apiary. The bees showed the ability to function normally in the environment, suggesting that natural cultivation in artificial spaces may be possible across scales, “from organism- to building-scale.”

“At the core of this project is the creation of an entirely synthetic environment enabling controlled, large-scale investigations of hives,” explain the designers.

Mediated Matter chose to research into honeybees not just because of their recent loss of habitat, but also because of their ability to work together to create their own architecture, [emphasis mine] a topic the group has explored in their ongoing research on biologically augmented digital fabrication, including employing silkworms to create objects and environments at product, architectural, and possibly urban, scales.

“The Synthetic Apiary bridges the organism- and building-scale by exploring a “keystone species”: bees. Many insect communities present collective behavior known as “swarming,” prioritizing group over individual survival, while constantly working to achieve common goals. Often, groups of these eusocial organisms leverage collaborative behavior for relatively large-scale construction. For example, ants create extremely complex networks by tunneling, wasps generate intricate paper nests with materials sourced from local areas, and bees deposit wax to build intricate hive structures.”

This January 19, 2022 article by Crown Honey for its eponymous blog updates Oxman’s work (Note 1: All emphases are mine; Note 2: A link has been removed),

Synthetic Apiary II investigates co-fabrication between humans and honey bees through the use of designed environments in which Apis mellifera colonies construct comb. These designed environments serve as a means by which to convey information to the colony. The comb that the bees construct within these environments comprises their response to the input information, enabling a form of communication through which we can begin to understand the hive’s collective actions from their perspective.

Some environments are embedded with chemical cues created through a novel pheromone 3D-printing process, while others generate magnetic fields of varying strength and direction. Others still contain geometries of varying complexity or designs that alter their form over time.

When offered wax augmented with synthetic biomarkers, bees appear to readily incorporate it into their construction process, likely due to the high energy cost of producing fresh wax. This suggests that comb construction is a responsive and dynamic process involving complex adaptations to perturbations from environmental stimuli, not merely a set of predefined behaviors building toward specific constructed forms. Each environment therefore acts as a signal that can be sent to the colony to initiate a process of co-fabrication.

Characterization of constructed comb morphology generally involves visual observation and physical measurements of structural features—methods which are limited in scale of analysis and blind to internal architecture. In contrast, the wax structures built by the colonies in Synthetic Apiary II are analyzed through high-throughput X-ray computed tomography (CT) scans that enable a more holistic digital reconstruction of the hive’s structure.

Geometric analysis of these forms provides information about the hive’s design process, preferences, and limitations when tied to the inputs, and thereby yields insights into the invisible mediations between bees and their environment.

Developing computational tools to learn from bees can facilitate the very beginnings of a dialogue with them. Refined by evolution over hundreds of thousands of years, their comb-building behaviors and social organizations may reveal new forms and methods of formation that can be applied across our human endeavors in architecture, design, engineering, and culture.

Further, with a basic understanding and language established, methods of co-fabrication together with bees may be developed, enabling the use of new biocompatible materials and the creation of more efficient structural geometries that modern technology alone cannot achieve.

In this way, we also move our built environment toward a more synergistic embodiment, able to be more seamlessly integrated into natural environments through material and form, even providing habitats of benefit to both humans and nonhumans. It is essential to our mutual survival for us to not only protect but moreover to empower these critical pollinators – whose intrinsic behaviors and ecosystems we have altered through our industrial processes and practices of human-centric design – to thrive without human intervention once again.

In order to design our way out of the environmental crisis that we ourselves created, we must first learn to speak nature’s language. …

The three (natural, gold nanoparticle, and silver nanoparticle) honeycombs in the exhibit are among the few physical objects (the others being the historical documents and the paintbots with their canvasses) in the show and it’s almost a relief after the parade of screens. It’s the accompanying video that’s eerie. Everything is in white, as befits a science laboratory, in this synthetic apiary where bees are fed sugar water and fooled into a spring that is eternal.

Courtesy: Massachusetts Institute of Technology Copyright: Mediated Matter [downloaded from https://www.media.mit.edu/projects/synthetic-apiary/overview/]

(You may want to check out Lynch’s October 5, 2016 article or Crown Honey’s January 19, 2022 article as both have embedded images and the Lynch article includes a Synthetic Apiary video. The image above is a still from the video.)

As I asked a friend, where are the flowers? Ron Miksha, a bee ecologist working at the University of Calgary, details some of the problems with Oxman’s Synthetic Apiary this way in his October 7, 2016 posting on his Bad Beekeeping Blog,

In a practical sense, the synthetic apiary fails on many fronts: Bees will survive a few months on concoctions of sugar syrup and substitute pollen, but they need a natural variety of amino acids and minerals to actually thrive. They need propolis and floral pollen. They need a ceiling 100 metres high and a 2-kilometre hallway if drone and queen will mate, or they’ll die after the old queen dies. They need an artificial sun that travels across the sky, otherwise, the bees will be attracted to artificial lights and won’t return to their hive. They need flowery meadows, fresh water, open skies. [emphasis mine] They need a better holodeck.

Dorothy Woodend’s March 10, 2022 review of the VAG show for The Tyee poses other issues with the bees and the honeycombs,

When AI messes about with other species, there is something even more unsettling about the process. American-Israeli artist Neri Oxman’s Golden Bee Cube, Synthetic Apiary II, 2020 uses real bees who are proffered silver and gold [nanoparticles] to create their comb structures. While the resulting hives are indeed beautiful, rendered in shades of burnished metal, there is a quality of unease imbued in them. Is the piece akin to apiary torture chambers? I wonder how the bees feel about this collaboration and whether they’d like to renegotiate the deal.

There’s no question the honeycombs are fascinating and disturbing, but I don’t understand how artificial intelligence was a key factor in either version of Oxman’s synthetic apiary. In the 2022 article by Crown Honey, there’s this: “Developing computational tools to learn from bees can facilitate the very beginnings of a dialogue with them [honeybees].” It’s probable that the computational tools being referenced include AI, and the Crown Honey article seems to suggest those tools are being used to analyze the bees’ behaviour after the fact.

Yes, I can imagine a future where ‘strong’ AI (such as you, my friend) is in ‘dialogue’ with the bees and making suggestions and running the experiments but it’s not clear that this is the case currently. The Oxman exhibit contribution would seem to be about the future and its possibilities whereas many of the other ‘objects’ concern the past and/or the present.

Friend, let’s take a break, shall we? Part 2 is coming up.

Bank of Canada and Multiverse Computing model complex networks & cryptocurrencies with quantum computing

Given all the concern over rising inflation (McGill University press room, February 23, 2022: “Experts: Canadian inflation hits a new three-decade high”) and rising Bank of Canada rates (Pete Evans in an April 13, 2022 article for the Canadian Broadcasting Corporation’s online news), this news release was a little unexpected, both for timing (one week after the 2022 Canadian federal budget was delivered) and content (from an April 14, 2022 HKA Marketing Communications news release on EurekAlert),

Multiverse Computing, a global leader in quantum computing solutions for the financial industry and beyond with offices in Toronto and Spain, today announced it has completed a proof-of-concept project with the Bank of Canada through which the parties used quantum computing to simulate the adoption of cryptocurrency as a method of payment by non-financial firms.

“We are proud to be a trusted partner of the first G7 central bank to explore modelling of complex networks and cryptocurrencies through the use of quantum computing,” said Sam Mugel, CTO [Chief Technical Officer] at Multiverse Computing. “The results of the simulation are very intriguing and insightful as stakeholders consider further research in the domain. Thanks to the algorithm we developed together with our partners at the Bank of Canada, we have been able to model a complex system reliably and accurately given the current state of quantum computing capabilities.”

Companies may adopt various forms of payments. So, it’s important to develop a deep understanding of interactions that can take place in payments networks.

Multiverse Computing conducted its innovative work related to applying quantum computing for modelling complex economic interactions in a research project with the Bank of Canada. The project explored quantum computing technology as a way to simulate complex economic behaviour that is otherwise very difficult to simulate using traditional computational techniques.

By implementing this solution using D-Wave’s annealing quantum computer, the simulation was able to tackle financial networks as large as 8-10 players, with up to 2^90 possible network configurations. Note that classical computing approaches cannot solve large networks of practical relevance as a 15-player network requires as many resources as there are atoms in the universe.
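The release’s “2^90 possible network configurations” figure has a natural reading, though this is my assumption rather than anything the release spells out: with 10 players there are 10 × 9 = 90 possible directed links between distinct players, and each link is either present or absent. A quick sketch of that arithmetic:

```python
# Directed-edge reading of the configuration count (an assumption,
# not stated in the news release): n players yield n*(n-1) possible
# directed links, each present or absent, so 2**(n*(n-1)) configurations.
def network_configs(n_players: int) -> int:
    edges = n_players * (n_players - 1)
    return 2 ** edges

print(network_configs(10))  # 2**90, matching the release's figure
print(network_configs(15))  # 2**210, wildly beyond brute-force enumeration
```

On the same reading, 15 players give 2^210 (roughly 10^63) configurations, which is the kind of combinatorial explosion behind the release’s atoms-in-the-universe comparison.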

“We wanted to test the power of quantum computing on a research case that is hard to solve using classical computing techniques,” said Maryam Haghighi, Director, Data Science at the Bank of Canada. “This collaboration helped us learn more about how quantum computing can provide new insights into economic problems by carrying out complex simulations on quantum hardware.”

Motivated by the empirical observations about the cooperative nature of adoption of cryptocurrency payments, this theoretical study found that for some industries, these digital assets would share the payments market with traditional bank transfers and cash-like instruments. The market share for each would depend on how the financial institutions respond to the cryptocurrency adoptions, and on the economic costs associated with such trades.

The quantum simulations helped generate examples that illustrate how similar firms may end up adopting different levels of cryptocurrency use.

About Multiverse Computing

Multiverse Computing is a leading quantum software company that applies quantum and quantum-inspired solutions to tackle complex problems in finance to deliver value today and enable a more resilient and prosperous economy. The company’s expertise in quantum control and computational methods as well as finance means it can secure maximum results from current quantum devices. Its flagship product, Singularity, allows financial professionals to leverage quantum computing with common software tools.  The company is targeting additional verticals as well, including mobility, energy, the life sciences and industry 4.0.

Contacts:

Multiverse Computing
www.multiversecomputing.com
contact@multiversecomputing.com
+346 60 94 11 54

I wish there was a little more information about the contents of the report (although it is nice to know they have one).

D-Wave Systems, for those who don’t know, is a Vancouver-area company that supplies hardware (here’s more from their Wikipedia entry), Note: Links have been removed,

D-Wave Systems Inc. is a Canadian quantum computing company, based in Burnaby, British Columbia, Canada. D-Wave was the world’s first company to sell computers to exploit quantum effects in their operation.[2] D-Wave’s early customers include Lockheed Martin, University of Southern California, Google/NASA and Los Alamos National Lab.

The company has to this point specialized in quantum annealing. This is a specific type of quantum computing best used to solve the kind of problem (analyzing a multi-player situation) that the Bank of Canada was trying to solve.
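To make “quantum annealing” a little more concrete: an annealer searches for the assignment of binary variables that minimizes a quadratic cost function, a formulation known as a QUBO. The toy three-variable problem below is purely illustrative (it is not the Bank of Canada model, whose details weren’t published); at this size we can brute-force the same minimization classically:

```python
import itertools

# Toy QUBO: E(x) = -x0 - x1 - x2 + 2*x0*x1 + 2*x1*x2.
# Diagonal entries are linear terms; off-diagonal entries are couplings
# that penalize pairs of variables being "on" together.
Q = {
    (0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0,  # linear terms
    (0, 1): 2.0, (1, 2): 2.0,                  # pairwise couplings
}

def energy(x):
    """Evaluate the QUBO cost for a tuple of 0/1 values."""
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

# Exhaustively check all 2**3 assignments; an annealer samples
# low-energy states of this same landscape instead of enumerating.
best = min(itertools.product([0, 1], repeat=3), key=energy)
print(best, energy(best))  # (1, 0, 1) -2.0
```

The point of the hardware is that the number of assignments doubles with every added variable, so exhaustive search collapses quickly while an annealer can still sample good solutions.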

I checked out ‘Multiverse’ in Toronto and they claim this, “World leaders in quantum computing for the financial industry,” on their homepage.

As for the company that produced the news release, HKA Marketing Communications, based in Southern California, they claim this “Specialists in Quantum Tech PR: #1 agency in this space” on their homepage.

I checked out the Bank of Canada website and didn’t find anything about this project.

STEM (science, technology, engineering and math) brings life to the global hit television series “The Walking Dead” and a Canadian AI initiative for women and diversity

I stumbled across this June 8, 2022 AMC Networks news release in the last place I was expecting to see a STEM (science, technology, engineering, and mathematics) announcement: a self-described global entertainment company’s website,

AMC NETWORKS CONTENT ROOM TEAMS WITH THE AD COUNCIL TO EMPOWER GIRLS IN STEM, FEATURING “THE WALKING DEAD”

AMC Networks Content Room and the Ad Council, a non-profit and leading producer of social impact campaigns for 80 years, announced today a series of new public service advertisements (PSAs) that will highlight the power of girls in STEM (science, technology, engineering and math) against the backdrop of the global hit series “The Walking Dead.”  In the spots, behind-the-scenes talent of the popular franchise, including Director Aisha Tyler, Costume Designer Vera Chow and Art Director Jasmine Garnet, showcase how STEM is used to bring the post-apocalyptic world of “The Walking Dead” to life on screen.  Created by AMC Networks Content Room, the PSAs are part of the Ad Council’s national She Can STEM campaign, which encourages girls, trans youth and non-binary youth around the country to get excited about and interested in STEM.

The new creative consists of TV spots and custom videos created specifically for TikTok and Instagram.  The spots also feature Gitanjali Rao, a 16-year-old scientist, inventor and activist, interviewing Tyler, Chow and Garnet discussing how they and their teams use STEM in the production of “The Walking Dead.”  Using before and after visuals, each piece highlights the unique and unexpected uses of STEM in the making of the series.  In addition to being part of the larger Ad Council campaign, the spots will be available on “The Walking Dead’s” social media platforms, including Facebook, Instagram, Twitter and YouTube pages, and across AMC Networks linear channels and digital platforms.

PSA:   https://youtu.be/V20HO-tUO18

Social: https://youtu.be/LnDwmZrx6lI

Said Kim Granito, EVP of AMC Networks Content Room: “We are thrilled to partner with the Ad Council to inspire young girls in STEM through the unexpected backdrop of ‘The Walking Dead.’  Over the last 11 years, this universe has been created by an array of insanely talented women that utilize STEM every day in their roles.  This campaign will broaden perceptions of STEM beyond the stereotypes of lab coats and beakers, and hopefully inspire the next generation of talented women in STEM.  Aisha Tyler, Vera Chow and Jasmine Garnet were a dream to work with and their shared enthusiasm for this mission is inspiring.”

“Careers in STEM are varied and can touch all aspects of our lives. We are proud to partner with AMC Networks Content Room on this latest work for the She Can STEM campaign. With it, we hope to inspire young girls, non-binary youth, and trans youth to recognize that their passion for STEM can impact countless industries – including the entertainment industry,” said Michelle Hillman, Chief Campaign Development Officer, Ad Council.

Women make up nearly half of the total college-educated workforce in the U.S., but they only constitute 27% of the STEM workforce, according to the U.S. Census Bureau. Research shows that many girls lose interest in STEM as early as middle school, and this path continues through high school and college, ultimately leading to an underrepresentation of women in STEM careers.  She Can STEM aims to dismantle the intimidating perceived barrier of STEM fields by showing girls, non-binary youth, and trans youth how fun, messy, diverse and accessible STEM can be, encouraging them to dive in, no matter where they are in their STEM journey.

Since the launch of She Can STEM in September 2018, the campaign has been supported by a variety of corporate, non-profit and media partners. The current funder of the campaign is IF/THEN, an initiative of Lyda Hill Philanthropies.  Non-profit partners include Black Girls Code, ChickTech, Girl Scouts of the USA, Girls Inc., Girls Who Code, National Center for Women & Information Technology, The New York Academy of Sciences and Society of Women Engineers.

About AMC Networks Inc.

AMC Networks (Nasdaq: AMCX) is a global entertainment company known for its popular and critically-acclaimed content. Its brands include targeted streaming services AMC+, Acorn TV, Shudder, Sundance Now, ALLBLK, and the newest addition to its targeted streaming portfolio, the anime-focused HIDIVE streaming service, in addition to AMC, BBC AMERICA (operated through a joint venture with BBC Studios), IFC, SundanceTV, WE tv and IFC Films. AMC Studios, the Company’s in-house studio, production and distribution operation, is behind some of the biggest titles and brands known to a global audience, including The Walking Dead, the Anne Rice catalog and the Agatha Christie library.  The Company also operates AMC Networks International, its international programming business, and 25/7 Media, its production services business.

About Content Room

Content Room is AMC Networks’ award-winning branded entertainment studio that collaborates with advertising partners to build brand stories and create bespoke experiences across an expanding range of digital, social, and linear platforms. Content Room enables brands to fully tap into the company’s premium programming, distinct IP, deep talent roster and filmmaking roots through an array of creative partnership opportunities— from premium branded content and integrations— to franchise and gaming extensions.

Content Room is also home to the award-winning digital content studio which produces dozens of original series annually, which expands popular AMC Networks scripted programming for both fans and advertising partners by leveraging the built-in massive series and talent fandoms.

The Ad Council
The Ad Council is where creativity and causes converge. The non-profit organization brings together the most creative minds in advertising, media, technology and marketing to address many of the nation’s most important causes. The Ad Council has created many of the most iconic campaigns in advertising history. Friends Don’t Let Friends Drive Drunk. Smokey Bear. Love Has No Labels.

The Ad Council’s innovative social good campaigns raise awareness, inspire action and save lives. To learn more, visit AdCouncil.org, follow the Ad Council’s communities on Facebook and Twitter, and view the creative on YouTube.

You can find the ‘She Can STEM’ Ad Council initiative here.

Canadian women and the AI4Good Lab

A June 9, 2022 posting on the Borealis AI website describes an artificial intelligence (AI) initiative designed to encourage women to enter the field,

The AI4Good Lab is one of those programs that creates exponential opportunities. As the leading Canadian AI-training initiative for women-identified STEM students, the lab helps encourage diversity in the field of AI. Participants work together to use AI to solve a social problem, delivering untold benefits to their local communities. And they work shoulder-to-shoulder with other leaders in the field of AI, building their networks and expanding the ecosystem.

At this year’s [2022] AI4Good Lab Industry Night, program partners – like Borealis AI, RBC [Royal Bank of Canada], DeepMind, Ivado and Google – had an opportunity to (virtually) meet the nearly 90 participants of this year’s program. Many of the program’s alumni were also in attendance. So, too, were representatives from CIFAR [Canadian Institute for Advanced Research], one of Canada’s leading global research organizations.

Industry participants – including Dr. Eirene Seiradaki, Director of Research Partnerships at Borealis AI, Carey Mende-Gibson, RBC’s Location Intelligence ambassador, and Lucy Liu, Director of Data Science at RBC – talked with attendees about their experiences in the AI industry, discussed career opportunities and explored various career paths that the participants could take in the industry. For the entire two hours, our three tables and our virtually cozy couches were filled to capacity. It was only after the end of the event that we had the chance to exchange visits to the tables of our partners from CIFAR and AMII [Alberta Machine Intelligence Institute]. Eirene did not miss the opportunity to catch up with our good friend, Warren Johnston, and hear first-hand the news from AMII’s recent AI Week 2022.

Borealis AI is funded by the Royal Bank of Canada. Somebody wrote this for the homepage (presumably tongue in cheek),

All you can bank on.

The AI4Good Lab can be found here,

The AI4Good Lab is a 7-week program that equips women and people of marginalized genders with the skills to build their own machine learning projects. We emphasize mentorship and curiosity-driven learning to prepare our participants for a career in AI.

The program is designed to open doors for those who have historically been underrepresented in the AI industry. Together, we are building a more inclusive and diverse tech culture in Canada while inspiring the next generation of leaders to use AI as a tool for social good.

The most recent programme ran May 3 – June 21, 2022 in Montréal, Toronto, and Edmonton.

There are a number of AI for Good initiatives including this one from the International Telecommunications Union (a United Nations Agency).

For the curious, I have a May 10, 2018 post “The Royal Bank of Canada reports ‘Humans wanted’ and some thoughts on the future of work, robots, and artificial intelligence” where I ‘examine’ RBC and its AI initiatives.

Hydrogen In Motion (H2M), its solid state hydrogen storage nanomaterial, and running for Vancouver (Canada) City Council?

Vancouver city politics don’t usually feature here, but this June 13, 2022 article by Kenneth Chan for the Daily Hive suggests that might be changing,

Colleen Hardwick’s TEAM for a Livable Vancouver party has officially nominated six candidates to fill Vancouver city councillor seats in the upcoming civic election.

….

Grace Quan is a co-founder and the head of Hydrogen In Motion, which specializes in developing a nanomaterial to store hydrogen [emphasis mine]. She previously worked for the Canadian International Development Agency and in the Foreign Service and served as a senior advisor to the CFO of the Treasury Board of Canada.

There’s not a lot of detail in the description which is reasonable considering five other candidates were being announced.

Since this blog is focused on nanotechnology and other emerging technologies, the word ‘nanomaterial’ popped out. Its use in the candidate’s description is close to meaningless, similar to saying that your storage container is made from a material. In this case, the material (presumably) is exploiting advantages found at the nanoscale. As for Quan, the work experience cited highlights experience working in government agencies but doesn’t include any technology development.

My main interest is the technology, followed by the business aspects. As for why Quan is running for political office and how she will find the time, I can only offer speculation.

Hydrogen in Motion’s storage technology

Obviously the place to look is the Hydrogen in Motion (H2M) website. Descriptions of their technology are vague (from the company’s Hydrogen page),

Hydrogen In Motion solution is leading a breakthrough in solid state hydrogen storage nanomaterial. H2M hydrogen storage redefines the use of hydrogen fuel technologies and simplifying its logistical applications. Our technology offers hydrogen energy solution that has positive economic and environmental impact and provides an infinite source of constant energy with no emissions, low cost commitment and versatility with compact storage. Our technology solution has resolved the constraints currently burdening the hydrogen economy, making it the most viable solution for commercialization of future clean energy.

Which nanomaterial(s) are they using? Carbon nanotubes, graphene, gold nanoparticles, borophene, perovskite, fullerenes, etc.? The company’s Products page offers a little more information and some diagrams,

H2M fuel cell technology is well-adapted for a wide range of applications, from nomadic to stationary, enabling for easy transition to emission free systems. As the H2M nanomaterial is conformable, H2M hydrogen storage containers can be shaped to meet the application requirements; from extending flight duration for drones to grid scale renewable energy storage for solar, wind, and wave. H2M is the most effective Hydrogen storage ever designed.

There are no product names nor pictures of products other than this, which is in the banner,

[downloaded from https://www.hydrogeninmotion.com/products/]

No names, no branding, no product specifications.

Unusually for a startup, neither member of the executive team seems to be the scientist who developed, or is developing, the nanomaterial for this technology. Also unusual: there’s no scientific advisory board. Grace Quan has credentials as a Certified Public Accountant (CPA) and holds a Master of Business Administration (MBA). Plus there’s this from the About Us page,

Grace has over 25 years of experience spanning a wealth of sectors including government – Federal Government of Canada, the Provincial Government (Minister’s Office) of Alberta; Academia – University of British Columbia, and Management of a Flying School; Not-for-Profit / Research Funding Agency – Genome British Columbia; and private sector with various management positions. Grace is well positioned to lead H2M in navigating the complicated world of Federal and Provincial politics and program funding requirements. At the same time Grace’s skills and expertise in the private sector will be invaluable in providing strategic direction in the marketing, finance, human resource, and production domains.

The other member of the executive team, Mark Cannon, the chief technical officer, has a Master of Science and a Bachelor of Mathematics. Plus there’s this from the About Us page,

Mark has over thirty years of experience commercializing academic developments, covering such diverse fields as: real time vision analysis, electromagnetic measurement and simulation, Computer Aided Design of printed circuit boards and microchips, custom integrated semiconductor chips for encryption, optical fibre signal measurement and recovery, and building energy management systems. He has worked at major research and development companies such as Systemhouse, Bell-Northern Research (later absorbed by Nortel), and Cadence Design Systems. Mark is very familiar with technology startups, the exigencies of entrepreneurship, and the business cycle of introducing new products into the market having cofounded two successful start-ups: Unicad Inc. (bought by Cooper & Chyan Technologies) and Viewnyx Corporation. He has also held key roles in two other start-ups, Chrysalis ITS and Optovation Inc.

His experience seems almost entirely focused on electronics and optics. It’s not clear to me how this experience is transferable to hydrogen storage and nanomaterials. (As well, his TechCrunch profile lists him as having founded one company rather than the three listed in his company’s profile.)

The company’s R&D page offers an overview of the process, the skills needed to conduct the research, and some quite interesting details about hydrogen storage but no scientific papers,

Conceive/Improve Theoretical Modelling

The theoretical team uses physical chemical theory starting at the quantum level using density functional theory (DFT) to model material composed of the elements that provide a structure and attract hydrogen. Once the theoretical material has been tested on that scale, further models are built using Molecular dynamics, thermodynamic modeling and finally computational fluid dynamic modeling. The team continuously provide support by modeling the different stages of synthesis to determine the optimal parameters required to achieve the correct synthesis.

Material Synthesis

The synthesis team uses a variety of chemical and physical state alteration techniques to synthesize the desired material. Series of experiments are devised to build the desired material usually one stage at a time. Usually a series of experiments are planned to determine key synthesis parameters that effect the material. Once a base material is completed, a series of experiments is devised and repeated to bring it to the next stage.

Characterization

Test Hydrogen Absorption & Desorption

Ultimately, the material’s performance is based on the results from the H2MS hydrogen measurement system. Once a material has been successfully synthesized and validated using the H2MS, multiple measurements are made at different temperatures for multiple cycles. This validates the robustness, operating range, and re-usability of the hydrogen storage material. For our first material [emphasis mine], a scale up plan is being developed. Moving from laboratory scale to manufacturing scale [emphasis mine] introduces several challenges in the synthesis of material. This includes equipment selection, fluid and thermal dynamic effects at a larger scale, reaction kinetics, chemical equilibrium and of course, cost.
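The workflow quoted above runs from density functional theory at the quantum level up through molecular dynamics and thermodynamic modelling. As a toy stand-in for the quantum-level step, here is a minimal sketch using a generic Lennard-Jones pair potential with made-up parameters (nothing here reflects H2M’s actual materials or methods); it estimates where a hydrogen molecule would sit relative to a hypothetical adsorption site and how strongly it binds,

```python
def lennard_jones(r_nm, epsilon_ev, sigma_nm):
    """Pairwise Lennard-Jones interaction energy (eV) at separation r_nm.

    A toy stand-in for the DFT-level binding-energy calculations the
    R&D page describes; epsilon and sigma are illustrative guesses.
    """
    x = (sigma_nm / r_nm) ** 6
    return 4.0 * epsilon_ev * (x * x - x)

def minimum_energy(epsilon_ev, sigma_nm):
    """Analytic LJ minimum: depth -epsilon at r = 2**(1/6) * sigma."""
    r_min = 2.0 ** (1.0 / 6.0) * sigma_nm
    return r_min, lennard_jones(r_min, epsilon_ev, sigma_nm)

# Hypothetical H2-to-site parameters, roughly physisorption-scale
eps, sig = 0.05, 0.30  # well depth in eV, size parameter in nm
r_min, e_min = minimum_energy(eps, sig)
print(f"optimal separation ~{r_min:.3f} nm, binding energy ~{e_min:.3f} eV")
```

In a real pipeline of the kind described, DFT would replace this closed-form potential, and the resulting energies would feed the molecular dynamics and thermodynamic models mentioned in the quote.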

At what stage is this company?

The business

There are a couple of promising business developments. First, there’s a September 1, 2021 Hydrogen in Motion news release (Note: Links have been removed),

Loop Energy (TSX: LPEN), a developer and manufacturer of hydrogen fuel cell-based solutions, and Hydrogen In Motion (H2M), a leading provider of solid state hydrogen storage, announce their plans to collaborate on converting  a Southern Railway of BC owned and operated diesel electric switcher locomotive to hydrogen electric.

The two British Columbia-based companies will use locally developed technology, including Loop Energy’s 50kw eFlow™ fuel cell system and a low pressure solid state hydrogen storage tank developed by H2M. The project signifies the first instance of Loop supplying its products for use in a rail transport application.

“This is an exciting phase for the hydrogen fuel cell industry as this proves that it is technically and economically feasible to convert diesel-powered switcher locomotives to hydrogen fuel cell-based power systems,” said Grace Quan, CEO of Hydrogen-in-Motion. “The introduction of a hydrogen infrastructure into railyards reduces air contaminants and greenhouse gases and brings clean technologies, job growth and innovation to local communities.”

A few months before, a July 30, 2021 Hydrogen in Motion news release announced an international deal,

Hydrogen In Motion (H2M) announced a collaboration with H2e Power [h2e Power Systems] out of Pune, India for a project to assess, design, install and demonstrate a hydrogen fuel cell 3-Wheeler using H2e PEM Fuel Cell integrated with Hydrogen In Motion’s innovative solid state hydrogen storage technology onboard. This Indo-Canadian collaboration leverages the zero emission and hydrogen strategies released in India and Canada. Hydrogen In Motion is receiving advisory services and up to $600,000 in funding support for this project through the Canadian International Innovation Program (CIIP). CIIP is a funding program offered by Global Affairs Canada [emphasis mine] and is delivered in collaboration with the National Research Council of Canada Industrial Research Assistance Program (NRC IRAP). Respectively in India, H2e’s contributions towards this collaboration are supported by the Department of Science & Technology (DST) in collaboration with Global Innovation and Technology Alliance (GITA).

About This Project – This project will install a hydrogen fuel cell range extender using H2M low pressure hydrogen storage tanks on an electric powered three-wheeled auto rickshaw. Project goal is to significantly extend operational range and provide auxiliary power for home use when not in service.

The lack of scientific papers about the company’s technology is a little concerning. On its own, that’s not unheard of, but combine it with the other gaps and questions arise: the scientist/inventor who developed the technology isn’t identified; neither is the source of the technology (in Canada, it’s almost always a university); there are no technical details, no product details, and no note that their products are (apparently) being beta tested in two countries, India and Canada; there’s no information about funding (where do they get their money?); and there’s no scientific advisory board. The answer may be simple: they don’t place much value on keeping their website up to date because they are busy.

I did find some company details on the Companies of Canada.com website,

Hydrogen In Motion Inc. (H2M) is a company from Vancouver BC Canada. The company has corporate status: Active.

This business was incorporated 8 years ago on 8th January 2014

Hydrogen In Motion Inc. (H2M) is governed under the Canada Business Corporations Act – 2014-01-08. It a company [sic] of type: Non-distributing corporation with 50 or fewer shareholders.

The date of the company’s last Annual Meeting is 2021-01-01. The status of its annual filings are: 2021 -Filed, 2020 -Filed, 2019 -Filed.

Kona Equity offers an analysis (from the second quarter of 2019 to the fourth quarter of 2020),

Hydrogen In Motion

Founded in 2014

Strengths

There are no known strengths for Hydrogen In Motion

Weaknesses

Hydrogen In Motion has a very small market share in their industry

Revenue generated per employee is less than the industry average

Revenue growth is less than the industry average

The number of employees is not growing as fast as the industry average

Variance of revenue growth is more than the industry average

7 employees

Employee growth rate from first known quarter to current -69.6%

I’d love to see a more recent analysis taking into account the 2021 business deals.

It’s impossible to tell when this job was posted, but it provides some interesting insight. All the emphases are mine,

We are looking for an accomplished Chemical Process Engineer to lead our nanomaterial and carbon-rich material production, development and scale-up efforts. The holder of this position will be responsible for leading a team of engineers and technicians in the designing, developing and optimizing of process unit operations to provide high quality nanomaterials at various scales ranging from Research and Development to Commercial Manufacturing with good manufacturing practices (cGMP). The successful candidate is expected to independently strategize, analyze, design and control product scale-up to meet volume and quality demands.

Finally, there’s a chemical engineer or two. Plus, according to the company’s LinkedIn profile, there’s a theoretical physicist, Andrey Tokarev. Two locations are listed for Hydrogen in Motion, the Cordova St. office and something at 12388 88 Ave, Surrey. The company size is listed at 11 to 50 employees.

Grace Quan is good at getting government support for her company as this February 2019 story on the Government of Canada website shows,

Mark Cannon, Hydrogen in Motion CTO, Quak Foo Lee, chemical engineer, Angus Hui, co-op student, Dr. Pei Pei, research associate, Grace Quan, CEO, Sahida Kureshi PhD Candidate, and Dr. Andrey Tokerav, theoretical physicist. [downloaded from https://www.international.gc.ca/world-monde/stories-histoires/2019/CPTPP-hydrogen.aspx?lang=eng]

Canada in Asia-Pacific

Trade diversification | February 2019

Grace Quan’s goal is to deliver hydrogen around the world to help the environment and address climate change.

Quan is the CEO of Vancouver-based Hydrogen in Motion, a clean-tech company leading the way in hydrogen storage.

The number one problem with hydrogen is how to store it, which is why Quan founded Hydrogen in Motion. She set out to find a way to get hydrogen to people around the world.

Quan’s company has figured out how to do this. By using a material that soaks up hydrogen like a sponge, more of it can be stored at a lower pressure and at lower cost.

In the future, clean energy, including hydrogen, should become the method of choice to power anything that requires gas or electricity. For example, vehicles, snow blowers and drones could be powered by hydrogen in the future. Hydrogen is an infinite source of clean energy that can lessen the environmental impact from other sources of energy.

Thanks to the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP), Quan says she can explore new markets in the Asia-Pacific region for hydrogen export.

Japan is a new market that Quan’s company will explore as a result of the CPTPP. There’s a lot of opportunity there, with Tokyo hosting the 2020 Olympics, which are expected to be powered by hydrogen.

Quan recently returned from a trade mission to India [emphasis mine], where local trade commissioners helped her set up a meeting with a major auto maker.
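The “soaks up hydrogen like a sponge” line above describes physisorption-based storage. A minimal sketch, assuming a generic Langmuir adsorption isotherm with a made-up affinity constant (nothing here comes from H2M), shows why a sorbent can approach its full capacity at pressures far below the 350–700 bar typical of compressed-gas tanks,

```python
def langmuir_coverage(p_bar, k_per_bar):
    """Fraction of adsorption sites filled at pressure p (Langmuir isotherm)."""
    return k_per_bar * p_bar / (1.0 + k_per_bar * p_bar)

def ideal_gas_density(p_bar, t_kelvin=298.0):
    """Hydrogen gas density in mol/L from the ideal gas law."""
    R = 0.0831446  # L·bar/(mol·K)
    return p_bar / (R * t_kelvin)

# Hypothetical sorbent with affinity K = 0.5 per bar: coverage saturates
# at tens of bar, while compressed-gas density keeps scaling with pressure.
for p in (1, 10, 50, 700):
    print(p, "bar:", round(langmuir_coverage(p, 0.5), 3),
          "of sites filled;", round(ideal_gas_density(p), 2), "mol/L as gas")
```

With these illustrative numbers, the sorbent is over 80% saturated at 10 bar, which is the low-pressure advantage the quoted passage is claiming.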

In 2020, Hydrogen in Motion was a ‘success story’ for Canada’s Scientific Research and Experimental Development (SR&ED) Tax Incentive Program (Note: A link has been removed),

H2M was selected for the free in-person First-time claimant advisory service when filing its first scientific research and experimental development (SR&ED) claim. Since then, the SR&ED tax incentives have had a significant impact on the company’s work. The company is not only thankful for the program’s funding, but also to the SR&ED staff for their hard work and assistance, especially during the pandemic.

The company’s Chief Executive Officer, Grace Quan, had the following comments:

“In the context of COVID-19 shutdowns and general business disruption, the SR&ED tax incentives have become a critical source of funds as other sources were put on hold due to the pandemic and the financial uncertainty of the times. I wish to express my extreme gratitude for the consideration, efforts and support, as well as thanks, to the Canadian government, the SR&ED Program and its staff for their compassionate and empathetic treatment of individuals and businesses. The staff was friendly, professional, prompt and went above and beyond to help a small business like Hydrogen In Motion. They were a pleasure to work with and were extremely effective in problem resolution and facilitating processing of our SR&ED refund to provide much needed cash flow during these difficult times.”

As you might expect from someone running for political office, Quan is good at promoting herself. From her Advisory Board profile page for the Vancouver Economic Commission,

As President & CEO of Hydrogen In Motion Inc. (H2M), Grace brings fiduciary accountability and strategic vision to the table with her CPA/CMA [certified management accountant] and MBA credentials. Grace has a vast range of financial and managerial experience in private and public sectors from managing a Flying School, to working in a Provincial Minister’s office, to helping to manage the $250 billion dollar budget for the Treasury Board Secretariat of the Government of Canada. 

In 2018 Grace Quan, CEO was recognized by BC Business magazine as one of the 50 Most Influential Women In STEM. [emphasis mine]

July 28, 2021 it was announced that Quan became a member of the World Hydrogen Advisory Board of the Sustainable Energy Council (UK).

Speculating about a political candidate

Grace Quan’s electoral run seems like odd timing. If your company signed two deals less than a year ago, during what seems to be an upswing in its business affairs, then running for office (an almost full-time job in itself) as a city councillor (a full-time job, should you be elected) is an unexpected move, especially for someone with no experience in public office.

Another surprising thing? The British Columbia Centre for Innovation and Clean Energy (CICE) announced a new consortium according to a Techcouver.com June 9, 2022 news item (about four days before the announcement of Quan’s political candidacy on the Daily Hive),

The British Columbia Centre for Innovation and Clean Energy (CICE) is partnering with businesses and government organizations to drive B.C.’s low-carbon hydrogen economy forward, with the launch of the B.C. Hydrogen Changemakers Consortium (BCHCC).

The partnership was announced at last night’s official Consortium launch event hosted by CICE and attended by leading B.C. hydrogen players, investors, and government officials. The Consortium launch is part of CICE’s previously announced Hydrogen Blueprint Investment, which will lay a foundation for the establishment of a hydrogen hub in Metro Vancouver, co-locating hydrogen supply and demand.

The group is expected to grow as projects and collaborations increase. To date, the Consortium members include: Ballard Power Systems, Capilano Maritime Design Ltd., Climate Action Secretariat, Fort Capital, FortisBC, Geazone Eco-Courier, Hydra Energy, HTEC, Innovative Clean Energy Fund, InBC Investment Corp., Modo, Parkland Refining, Powertech Labs, and TransLink.

Hydrogen in Motion doesn’t seem to be one of the inaugural members, which may mean nothing or may hint at why Quan is running for office.

Three possibilities

Perhaps the company is not doing so well? There’s a very high failure rate with technology companies. The ‘valley of death’ is the description for taking a development from the lab and turning it into a business (which is almost always highly dependent on government funding). Assuming the company manages to get something to market and finds customers, the next stage, growing the company from a few million in revenues to 10s and 100s of millions of dollars is equally fraught.

Keeping the company afloat for eight years is a big accomplishment especially when you factor in COVID-19 which has had a devastating impact on businesses large and small.

Alternatively, the company is being acquired (or would that be absorbed?) by a larger company. Entrepreneurs in British Columbia have a long history of growing their tech companies with the goal of being acquired and getting a large payout. Quan’s co-founder certainly has experience with growing a company and then selling it to a larger company.

Finally, the company is doing just fine but Quan is bored and needs a new challenge (which may be the case in the other two scenarios as well). If you look at her candidate profile page, you’ll see she has a range of interests.

Note: I am not offering an opinion on Quan’s suitability for political office. This is neither an endorsement nor an ‘anti-endorsement’.

June 7 – 10, 2022 in Grenoble, France, a conference and a 6G summit to explore pathways to 6G, ‘Internet of Senses’, etc.

As far as I can tell, 5G is still not widely deployed. At least, that’s what I gather from Tim Fisher’s article profiling the deployment by continent and by country (reviewed by Christine Baker; updated on June 2, 2022) on the Lifewire website (Note: Links have been removed),

5G is the newest wireless networking technology for phones, smartwatches, cars, and who knows what else, but it’s not yet available in every region around the world.

Some estimates forecast that by 2025, we’ll reach 3.6 billion 5G connections, a number expected to grow to 4.4 billion by 2027.
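As a quick sanity check on those forecasts, the implied compound annual growth rate is easy to work out,

```python
# Implied compound annual growth rate (CAGR) from the figures quoted above:
# 3.6 billion 5G connections forecast for 2025, 4.4 billion by 2027.
start, end, years = 3.6e9, 4.4e9, 2
cagr = (end / start) ** (1 / years) - 1
print(f"implied growth rate: {cagr:.1%} per year")  # → implied growth rate: 10.6% per year
```

Roughly 10–11% a year: steady growth, but hardly the explosive uptake the 5G hype once promised.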

I skimmed through Fisher’s article and the African continent would seem to have the most extensive deployment country by country.

Despite the fact that we’re years from a ubiquitous 5G environment, enthusiasts are preparing for 6G. A June 1, 2022 news item on Nanotechnology Now highlights an upcoming conference and 6G summit in Grenoble, France,

Anticipating that 6G systems will offer a major step change in performance from gigabit towards terabit capacities and sub-millisecond response times, the top two European conferences for communication networks will meet June 7-10 [2022] to explore future critical 6G applications like real-time automation or extended reality, an “internet of senses”, sustainability and providing data for a digital twin of the physical world.

The hybrid conference, “Connectivity for a Sustainable World”, will accommodate both in-person and remote attendance for four days of keynotes, panels, work sessions and exhibits. The event is sponsored by the IEEE Communications Society and the EU Association for Signal Processing and will be held in the WTC Grenoble Convention Center.

“The telecom sector is an enabler for a sustainable world,” said Emilio Calvanese Strinati, New-6G Program director at CEA-Leti, which organized the conference. “Designed to be energy efficient, with low carbon footprints, telecoms will be a key enabler to reduce CO2 emissions in the ICT sectors. For example, 6G targets multi-sensorial virtual reality, e.g. the metaverse, and remote work and telepresence, which enable people to interact without travelling.”

The conference also will explore new smart network technologies and architectures needed to dramatically enhance the energy efficiency and sustainability of networks to manage major traffic growth, while keeping electromagnetic fields under strict safety limits. These technologies will form the basis for a human-centric Next-Generation Internet and address the European Commission’s Sustainable Development Goals, such as accessibility and affordability of technology.
The Grenoble gathering is the 31st edition of the EuCNC [European Conference on Networks and Communications] conference, which merged two years ago with the 6G Summit. The joint conference was established by the European Commission for industry, academia, research centers and SMEs from across the ICT and telecom sectors to cooperate, discuss and help realize the vision for European technological sovereignty. It is intended to be held for in-person attendance, with remote attendance in a hybrid mode.

“The EuCNC and 6G Summit members are playing an important role in supporting the EU’s goal of European Sovereignty and cybersecurity in 5G and 6G in parallel with the French microelectronics industry’s support of the European Chips Act,” said Calvanese Strinati, who will help lead a workshop, “Semantic and Goal Oriented Communications, an Opportunity for 6G?”, on June 7.

Keynotes (all times CEST) [Central European Summer Time]

“Shaping 6G: Revolutionizing the Evolution of Networks”
Mikael Rylander, Technology Leadership Officer, Nokia/Netherlands
June 8: 9:15-10:00 am

“6G: From Digital Transformation to Socio-Digital Innovation”
Dimitra Simeonidou, Director Smart Internet Lab, Co-Director Bristol Digital Futures Institute, University of Bristol, UK
June 9: 8:30-9:15 am

“Going Beyond RF: Nano Communication in 6G+ Networks”
Falko Dressler, Professor, Technische Universität Berlin
June 9: 9:15-10:00 am

For the curious, CEA-Leti, the organizing institution, is “a research institute for electronics and information technologies, based in Grenoble, France. It is one of the world’s largest organizations for applied research in microelectronics and nanotechnology.” (See the entire description in the CEA-Leti: Laboratoire d’électronique des technologies de l’information Wikipedia entry)

As for the ‘internet of senses’, perhaps I missed seeing it in the programme?

The co-chairs Pearse O’Donohue and Sébastien Dauvé offer a welcome on the 2022 conference/summit homepage that touches on current affairs, as well as, the technology,

We would like to welcome you to this edition of the conference, which is for the second time putting together two of the top European conferences in the area of communication networks: the European Conference on Networks and Communications (EuCNC) and the 6G Summit. After two years of restrictions due to the COVID-19 pandemic, we are delighted to host this hybrid conference in the city of Grenoble, located in the French Alps and recognised internationally for its scientific excellence, especially in the area of electronics components and systems. This is a testimony of the increased importance of microelectronics for European technological sovereignty and cybersecurity in 5G and 6G, in line with the European Chips Act recently proposed by the Commission.

The Russian war against Ukraine has disrupted the lives of millions of Ukrainians. Recognising the importance of connectivity, in particular in times of crisis and under these exceptional circumstances, the EU in cooperation with key stakeholders has taken measures to alleviate the consequences of the humanitarian crisis. These include resilience of networks within the country, free or heavily discounted international calls and SMS to Ukraine or free roaming to Ukrainian people that fled the war.

In the longer term, we need to make sure that trust, security and competitiveness of future technologies such as beyond 5G and 6G are ensured.

6G systems are expected to offer a new step change in performance from Gigabit towards Terabit capacities and sub-millisecond response times. This will enable new critical applications such as real-time automation or extended reality (“Internet of Senses”) sensing, collecting and providing the data for nothing less than a digital twin of the physical world.

Moreover, new smart network technologies and architectures will need to drastically enhance the energy efficiency of connectivity infrastructures to manage major traffic growth while keeping electromagnetic fields under strict safety limits. These technologies will form the basis for a human-centric Next-Generation Internet and address Sustainable Development Goals (SDGs) such as accessibility and affordability of technology.

This year is an important milestone in the European research, development and innovation sphere towards 6G communications systems as it has seen the kick-off of the activities of the European partnership on Smart Networks and Services (SNS). This strategic public-private partnership has been established in November 2021 as one of the Horizon Europe Joint undertakings. The SNS partnership should enable European players to develop the technology capacities for 6G systems as basis for future digital services towards 2030. Its focus extends beyond networking, spanning the whole value chain, from components and devices to the Cloud, AI and Cybersecurity.

In January 2022, the first SNS JU [Joint Undertaking] calls for proposals has been launched, with a total budget of EUR 240 million. It sets out main complementary work streams spanning from 5G Evolution systems, research for radical technology advancement in preparation for 6G, proof of concepts including experimental infrastructures; up to large scale trials and pilots with vertical industries. We are excited and cannot wait for the selected projects to be launched next autumn, thus joining the big family of the EU projects that you will be able to discover and liaise with during this conference.

Karl Bode’s June 2, 2022 article, “6G Hype Begins Despite Fact 5G Hasn’t Finished Disappointing Us Yet,” on Techdirt offers a more measured response to the 6G hopes and dreams offered by O’Donohue, Dauvé, and the others hyping the next technology that will solve all kinds of problems.

Sci-fi opera: R.U.R. A Torrent of Light opens May 28, 2022 in Toronto, Canada

Even though it’s a little late, I guess you could call the opera opening in Toronto on May 28, 2022 a 100th anniversary celebration of the word ‘robot’. The word first appeared in 1920 in Czech playwright Karel Čapek’s play R.U.R., which stands for ‘Rossumovi Univerzální Roboti’ or, in English, ‘Rossum’s Universal Robots’; the word itself was coined by Čapek’s brother, Josef (see more about the play and the word in the R.U.R. Wikipedia entry).

The opera, R.U.R. A Torrent of Light, is scheduled to open at 8 pm ET on Saturday, May 28, 2022 (after being rescheduled due to a COVID case in the cast) at OCAD University’s (formerly the Ontario College of Art and Design) The Great Hall.

I have more about ticket prices, dates, and location later in this post but first, here’s more about the opera and the people who’ve created it from the Tapestry Opera’s ‘R.U.R. A Torrent of Light’ performance webpage,

This stunning new opera combines dance, beautiful multimedia design, a chamber orchestra including 100 instruments creating a unique electronica-classical sound, and wearable technology [emphasis mine] created with OCAD University’s Social Body Lab, to create an immersive and unforgettable science-fiction experience.

As for the opera’s story,

The fictional tech company R.U.R., founded by couple Helena and Dom, dominates the A.I. software market and powers the now-ubiquitous androids that serve their human owners. 

As Dom becomes more focused on growing R.U.R.’s profits, Helena’s creative research leads to an unexpected technological breakthrough that pits the couple’s visions squarely against each other. They’ve reached a turning point for humanity, but is humanity ready? 

Inspired by Karel Čapek’s 1920 science-fiction play Rossum’s Universal Robots (which introduced the word “robot” to the English language), composer Nicole Lizée’s and writer Nicolas Billon’s R.U.R. A Torrent of Light grapples with one of our generation’s most fascinating questions. [emphasis mine]

So, what is the fascinating question? The answer is here in a March 7, 2022 OCAD news release,

Last Wednesday [March 2, 2022], OCAD U’s Great Hall at 100 McCaul St. was filled with all manner of sound making objects. Drum kits, gongs, chimes, typewriters and most exceptionally, a cello bow that produces bird sounds when glided across any surface were being played while musicians, dancers and opera singers moved among them.  

All were abuzz preparing for Tapestry Opera’s new production, R.U.R. A Torrent of Light, which will be presented this spring in collaboration with OCAD University. 

An immersive, site-specific experience, the new chamber opera explores humanity’s relationship to technology. [emphasis mine] Inspired by Karel Čapek’s 1920s science-fiction play Rossum’s Universal Robots, this latest version is set 20 years in the future when artificial intelligence (AI) has become fully sewn into our everyday lives and is set in the offices of a fictional tech company.

Čapek’s original script brought the word robot into the English language and begins in a factory that manufactures artificial people. Eventually these entities revolt and render humanity extinct.  

The innovative adaptation will be a unique addition to Tapestry Opera’s more than 40-year history of producing operatic stage performances. It is the only company in the country dedicated solely to the creation and performance of original Canadian opera. 

The March 7, 2022 OCAD news release goes on to describe the Social Body Lab’s involvement,

OCAD U’s Social Body Lab, whose mandate is to question the relationship between humans and technology, is helping to bring Tapestry’s vision of the not-so-distant future to the stage. Director of the Lab and Associate Professor in the Faculty of Arts & Science, Kate Hartman, along with Digital Futures Associate Professors Nick Puckett and Dr. Adam Tindale have developed wearable technology prototypes that will be integrated into the performers’ costumes. They have collaborated closely with the opera’s creative team to embrace the possibilities innovative technologies can bring to live performance. 

“This collaboration with Tapestry Opera has been incredibly unique and productive. Working in dialogue with their designers has enabled us to translate their ideas into cutting edge technological objects that we would have never arrived at individually,” notes Professor Puckett. 

The uncanny bow that was being tested last week is one of the futuristic devices that will be featured in the performance and is the invention of Dr. Tindale, who is himself a classically trained musician. He has also developed a set of wearable speakers for R.U.R. A Torrent of Light that when donned by the dancers will allow sound to travel across the stage in step with their choreography. 

Hartman and Puckett, along with the production’s costume, light and sound designers, have developed an LED-based prototype that will be worn around the necks of the actors who play robots and will be activated using Wi-Fi. These collar pieces will function as visual indicators to the audience of various plot points, including the moments when the robots receive software updates.  

“Despite current stereotypes, opera was historically a launchpad for all kinds of applied design technologies. [emphasis mine] Having the opportunity to collaborate with OCAD U faculty is an invigorating way to reconnect to that tradition and foster connections between art, music and design,” comments the production’s Director Michael Hidetoshi Mori, who is also Tapestry Opera’s Artistic Director. 

“New music and theatre are perfect canvases for iterative experimentation. We look forward to the unique fruits of this collaboration and future ones,” he continues. 

Unfortunately, I cannot find a preview but there is this video highlighting the technology being used in the opera (there are three other videos highlighting the choreography, the music, and the story, respectively, if you scroll about 40% down this page),


As I promised, here are the logistics,

University address:

OCAD University
100 McCaul Street,
Toronto, Ontario, Canada, M5T 1W1

Performance venue:

The Great Hall at OCAD University
Level 2, beside the Anniversary Gallery

Ticket prices:

The following seating sections are available for this performance. Tickets are from $10 to $100. All tickets are subject to a $5 transaction fee.

Orchestra Centre
Orchestra Sides
Orchestra Rear
Balcony (standing room)

Performances:

May 28 at 8:00 pm

May 29 at 4:00 pm

June 01 at 8:00 pm

June 02 at 8:00 pm

June 03 at 8:00 pm

June 04 at 8:00 pm

June 05 at 4:00 pm

Tapestry Opera’s ‘R.U.R. A Torrent of Light’ performance webpage offers a link to buy tickets, but it lands on a page that doesn’t seem to be functioning properly. I contacted the Tapestry Opera folks (on Tuesday, May 24, 2022 at about 10:30 am PT) to let them know about the problem and will update this page once they’ve fixed the issue.

ETA May 30, 2022: You can buy tickets here. Tickets remain for only two performances: Thursday, June 2, 2022 at 8 pm and Sunday, June 5, 2022 at 4 pm.

Canada’s science and its 2022 federal budget (+ the online April 21, 2022 symposium: Decoding Budget 2022 for Science and Innovation)

Here’s my more or less annual commentary on the newly announced federal budget. This year the 2022/23 Canadian federal budget was presented by Chrystia Freeland, Minister of Finance, on April 7, 2022.

Sadly, the budgets never include a section devoted to science and technology, which makes finding the information a hunting exercise.

I found most of my quarry in the 2022 budget’s Chapter 2: A Strong, Growing, and Resilient Economy (Note: I’m picking and choosing items that interest me),

Key Ongoing Actions

  • $8 billion to transform and decarbonize industry and invest in clean technologies and batteries;
  • $4 billion for the Canada Digital Adoption Program, which launched in March 2022 to help businesses move online, boost their e-commerce presence, and digitalize their businesses;
  • $1.2 billion to support life sciences and bio-manufacturing in Canada, including investments in clinical trials, bio-medical research, and research infrastructure;
  • $1 billion to the Strategic Innovation Fund to support life sciences and bio-manufacturing firms in Canada and develop more resilient supply chains. This builds on investments made throughout the pandemic with manufacturers of vaccines and therapeutics like Sanofi, Medicago, and Moderna;
  • $1 billion for the Universal Broadband Fund (UBF), bringing the total available through the UBF to $2.75 billion, to improve high-speed Internet access and support economic development in rural and remote areas of Canada;
  • $1.2 billion to launch the National Quantum Strategy, Pan-Canadian Genomics Strategy, and the next phase of Canada’s Pan-Canadian Artificial Intelligence Strategy to capitalize on emerging technologies of the future [Please see: the ‘I am confused’ subhead for more about the ‘launches’];
  • Helping small and medium-sized businesses to invest in new technologies and capital projects by allowing for the immediate expensing of up to $1.5 million of eligible investments beginning in 2021;

While there are proposed investments in digital adoption and the Universal Broadband Fund, there’s no mention of 5G but perhaps that’s too granular (or specific) for a national budget. I wonder if we’re catching up yet? There have been concerns about our failure to keep pace with telecommunications developments and infrastructure internationally.

Moving on from ‘Key Ongoing Actions’, there are these propositions from Chapter 2: A Strong, Growing, and Resilient Economy (Note: I have not offset the material from the budget in a ‘quote’ form as I want to retain the formatting.),

Creating a Canadian Innovation and Investment Agency

Canadians are a talented, creative, and inventive people. Our country has never been short on good ideas.

But to grow our economy, invention is not enough. Canadians and Canadian companies need to take their new ideas and new technologies and turn them into new products, services, and growing businesses.

However, Canada currently ranks last in the G7 in R&D spending by businesses. This trend has to change. [Note: We’ve been lagging for at least 10 years, and we keep talking about catching up.]

Solving Canada’s main innovation challenges—a low rate of private business investment in research, development, and the uptake of new technologies—is key to growing our economy and creating good jobs.

A market-oriented innovation and investment agency—one with private sector leadership and expertise—has helped countries like Finland and Israel transform themselves into global innovation leaders. [Note: The 2021 budget also name-checked Israel.]

The Israel Innovation Authority has spurred the growth of R&D-intensive sectors, like the information and communications technology and autonomous vehicle sectors. The Finnish TEKES [Tekes – The Finnish Funding Agency for Technology and Innovation] helped transform low-technology sectors like forestry and mining into high technology, prosperous, and globally competitive industries.

In Canada, a new innovation and investment agency will proactively work with new and established Canadian industries and businesses to help them make the investments they need to innovate, grow, create jobs, and be competitive in the changing global economy.

Budget 2022 announces the government’s intention to create an operationally independent federal innovation and investment agency, and proposes $1 billion over five years, starting in 2022-23, to support its initial operations. Final details on the agency’s operating budget are to be determined following further consultation later this year.

Review of Tax Support to R&D and Intellectual Property

The Scientific Research and Experimental Development (SR&ED) program provides tax incentives to encourage Canadian businesses of all sizes and in all sectors to conduct R&D. The SR&ED program has been a cornerstone of Canada’s innovation strategy. The government intends to undertake a review of the program, first to ensure that it is effective in encouraging R&D that benefits Canada, and second to explore opportunities to modernize and simplify it. Specifically, the review will examine whether changes to eligibility criteria would be warranted to ensure adequacy of support and improve overall program efficiency. 

As part of this review, the government will also consider whether the tax system can play a role in encouraging the development and retention of intellectual property stemming from R&D conducted in Canada. In particular, the government will consider, and seek views on, the suitability of adopting a patent box regime [emphasis mine] in order to meet these objectives.

I am confused

Let’s start with the 2022 budget’s $1.2 billion to launch the National Quantum Strategy, Pan-Canadian Genomics Strategy, and the next phase of Canada’s Pan-Canadian Artificial Intelligence Strategy. Here’s what I had in my May 4, 2021 posting about the 2021 budget,

  • Budget 2021 proposes to provide $360 million over seven years, starting in 2021-22, to launch a National Quantum Strategy [emphasis mine]. The strategy will amplify Canada’s significant strength in quantum research; grow our quantum-ready technologies, companies, and talent; and solidify Canada’s global leadership in this area. This funding will also establish a secretariat at the Department of Innovation, Science and Economic Development to coordinate this work.
  • Budget 2021 proposes to provide $400 million over six years, starting in 2021-22, in support of a Pan-Canadian Genomics Strategy [emphasis mine]. This funding would provide $136.7 million over five years, starting in 2022-23, for mission-driven programming delivered by Genome Canada to kick-start the new Strategy and complement the government’s existing genomics research and innovation programming.
  • Budget 2021 proposes to provide up to $443.8 million over ten years, starting in 2021-22, in support of the Pan-Canadian Artificial Intelligence Strategy [emphasis mine], …

How many times can you ‘launch’ a strategy?

A patent box regime

So the government is “… encouraging the development and retention of intellectual property stemming from R&D conducted in Canada” and is examining a “patent box regime” with an eye to how that will help achieve those ends. Interesting!

Here’s how the patent box is described on Wikipedia (Note: Links have been removed),

A patent box is a special very low corporate tax regime used by several countries to incentivise research and development by taxing patent revenues differently from other commercial revenues.[1] It is also known as intellectual property box regime, innovation box or IP box. Patent boxes have also been used as base erosion and profit shifting (BEPS) tools, to avoid corporate taxes.
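To make the mechanism concrete: a patent box splits a firm’s taxable income into patent-derived income and everything else, and applies a lower rate to the former. Here’s a toy sketch in Python with made-up rates (purely illustrative, not actual Canadian or proposed rates):

```python
# Toy patent box calculation. The 25% standard rate and 10% patent box
# rate are hypothetical, chosen only to show how the mechanism works.

def total_tax(patent_income: float, other_income: float,
              standard_rate: float = 0.25,
              patent_box_rate: float = 0.10) -> float:
    """Tax owed when patent-derived revenue qualifies for a lower rate."""
    return patent_income * patent_box_rate + other_income * standard_rate

# A firm with $2M of patent-derived income and $3M of other income:
with_box = total_tax(2_000_000, 3_000_000)       # 10% of $2M + 25% of $3M ≈ $950k
without_box = total_tax(2_000_000, 3_000_000,
                        patent_box_rate=0.25)    # everything at 25% ≈ $1.25M
print(with_box, without_box)
```

The gap between the two figures is the incentive: the more of its income a firm can attribute to patents held domestically, the lower its effective tax rate, which is also why patent boxes attract criticism as profit-shifting tools.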

Even if they can find a way to “incentivize” R&D, the government has a problem keeping research in the country (see my September 17, 2021 posting about the Council of Canadian Academies’ [CCA] ‘Public Safety in the Digital Age’ project and scroll down about 50% of the way to find this),

There appears to be at least one other major security breach; that involving Canada’s only level four laboratory, the Winnipeg-based National Microbiology Lab (NML). (See a June 10, 2021 article by Karen Pauls for Canadian Broadcasting Corporation news online for more details.)

As far as I’m aware, Ortis [very senior civilian RCMP intelligence official Cameron Ortis] is still being held with a trial date scheduled for September 2022 (see Catherine Tunney’s April 9, 2021 article for CBC news online) and, to date, there have been no charges laid in the Winnipeg lab case.

The “security breach” involved sending information and sample viruses to another country, without proper documentation or approvals.

While I delved into a particular aspect of public safety in my posting, the CCA’s ‘Public Safety in the Digital Age’ project was very loosely defined and no mention was made of intellectual property. (You can check the “Exactly how did the question get framed?” subheading in the September 17, 2021 posting.)

Research security

While it might be described as ‘shutting the barn door after the horse got out’, there is provision in the 2022 budget for security vis-à-vis our research, from Chapter 2: A Strong, Growing, and Resilient Economy,

Securing Canada’s Research from Foreign Threats

Canadian research and intellectual property can be an attractive target for foreign intelligence agencies looking to advance their own economic, military, or strategic interests. The National Security Guidelines for Research Partnerships, developed in collaboration with the Government of Canada– Universities Working Group in July 2021, help to protect federally funded research.

  • To implement these guidelines fully, Budget 2022 proposes to provide $159.6 million, starting in 2022-23, and $33.4 million ongoing, as follows:
    • $125 million over five years, starting in 2022-23, and $25 million ongoing, for the Research Support Fund to build capacity within post-secondary institutions to identify, assess, and mitigate potential risks to research security; and
    • $34.6 million over five years, starting in 2022-23, and $8.4 million ongoing, to enhance Canada’s ability to protect our research, and to establish a Research Security Centre that will provide advice and guidance directly to research institutions.

Mining

There’s a reason I’m mentioning the mining industry, from Chapter 2: A Strong, Growing, and Resilient Economy,

Canada’s Critical Minerals and Clean Industrial Strategies

Critical minerals are central to major global industries like clean technology, health care, aerospace, and computing. They are used in phones, computers, and in our cars. [emphases mine] They are already essential to the global economy and will continue to be in even greater demand in the years to come.

Canada has an abundance of a number of valuable critical minerals, but we need to make significant investments to make the most of these resources.

In Budget 2022, the federal government intends to make significant investments that would focus on priority critical mineral deposits, while working closely with affected Indigenous groups and through established regulatory processes. These investments will contribute to the development of a domestic zero-emissions vehicle value chain, including batteries, permanent magnets, and other electric vehicle components. They will also secure Canada’s place in important supply chains with our allies and implement a just and sustainable Critical Minerals Strategy.

In total, Budget 2022 proposes to provide up to $3.8 billion in support over eight years, on a cash basis, starting in 2022-23, to implement Canada’s first Critical Minerals Strategy. This will create thousands of good jobs, grow our economy, and make Canada a vital part of the growing global critical minerals industry.

I don’t recall seeing mining being singled out before and I’m glad to see it now.

A 2022 federal budget commentary from University Affairs

Hannah Liddle’s April 8, 2022 article for University Affairs is focused largely on the budget’s impact on scientific research and she picked up on a few things I missed,

Budget 2022 largely focuses on housing affordability, clean growth and defence, with few targeted investments in scientific research.

The government tabled $1 billion over five years for an innovation and investment agency, designed to boost private sector investments in research and development, and to correct the slow uptake of new technologies across Canadian industries. The new agency represents a “huge evolution” in federal thinking about innovation, according to Higher Education Strategy Associates. The company noted in a budget commentary that Ottawa has shifted to solving the problem of low spending on research and development by working with the private sector, rather than funding universities as an alternative. The budget also indicated that the innovation and investment agency will support the defence sector and boost defence manufacturing, but the promised Canada Advanced Research Projects Agency – which was to be modelled after the famed American DARPA program – was conspicuously missing from the budget. [emphases mine]

However, the superclusters were mentioned and have been rebranded [emphasis mine] and given a funding boost. The five networks are now called “global innovation clusters,” [emphasis mine] and will receive $750 million over six years, which is half of what they had reportedly asked for. Many universities and research institutions are members of the five clusters, which are meant to bring together government, academia, and industry to create new companies, jobs, intellectual property, and boost economic growth.

Other notable innovation-related investments include the launch of a critical minerals strategy, which will give the country’s mining sector $3.8 billion over eight years. The strategy will support the development of a domestic zero-emission vehicle value chain, including for batteries (which are produced using critical minerals). The National Research Council will receive funding through the strategy, shared with Natural Resources Canada, to support new technologies and bolster supply chains of critical minerals such as lithium and cobalt. The government has also targeted investments in the semiconductor industry ($45 million over four years), the CAN Health Network ($40 million over four years), and the Canadian High Arctic Research Station ($14.5 million over five years).

Canada’s higher education institutions did notch a win with a major investment in agriculture research. The government will provide $100 million over six years to support postsecondary research in developing new agricultural technologies and crop varieties, which could push forward net-zero emissions agriculture.

The Canada Excellence Research Chairs program received $38.3 million in funding over four years beginning in 2023-24, with the government stating this could create 12 to 25 new chair positions.

To support Canadian cybersecurity, which is a key priority under the government’s $8 billion defence umbrella, the budget gives $17.7 million over five years and $5.5 million thereafter until 2031-32 for a “unique research chair program to fund academics to conduct research on cutting-edge technologies” relevant to the Communications Security Establishment – the national cryptologic agency. The inaugural chairs will split their time between peer-reviewed and classified research.

The federal granting councils will be given $40.9 million over five years beginning in 2022-23, and $9.7 million ongoing, to support Black “student researchers,” who are among the underrepresented groups in the awarding of scholarships, grants and fellowships. Additionally, the federal government will give $1.5 million to the Jean Augustine Chair in Education, Community and Diaspora, housed at York University, to address systemic barriers and racial inequalities in the Canadian education system and to improve outcomes for Black students.

A pretty comprehensive listing of all the science-related funding in the 2022 budget can be found in an April 7, 2022 posting on the Evidence for Democracy (E4D) blog.

2022 budget symposium

Here’s more about the symposium from the Canadian Science Policy Centre (CSPC), from the Decoding Budget 2022 event page,

Decoding Budget 2022 for Science and Innovation

The CSPC Budget Symposium will be held on Thursday April 21 [2022] at 12:00 pm (EST), and feature numerous speakers from across the country and across different sectors, in two sessions and one keynote presentation by Dave Watters titled: “Decoding Budget 2022 for Science and Innovation”.

Don’t miss this session and all insightful discussions of the Federal Budget 2022.

Register Here

You can see the 2022 symposium poster below,

By the way, David Watters gave the keynote address for the 2021 symposium too. Seeing his name twice now aroused my curiosity. Here’s a little more about David Watters (from a 2013 bio on the Council of Canadian Academies website), Note: He is still president,

David Watters is President of the Global Advantage Consulting Group, a strategic management consulting firm that provides advice to corporate, association, and government clients in Canada and abroad.

Mr. Watters worked for over 30 years in the federal public service in a variety of departments, including Energy Mines and Resources, Consumer and Corporate Affairs, Industry Canada (as Assistant Deputy Minister), Treasury Board Secretariat (in charge of Crown corporations and privatization issues), the Canadian Coast Guard (as its Commissioner) and Finance Canada (as Assistant Deputy Minister for Economic Development and Corporate Finance). He then moved to the Public Policy Forum where he worked on projects dealing with the innovation agenda, particularly in areas such as innovation policy, health reform, transportation, and the telecommunications and information technology sectors. He also developed reports on the impact of the Enron scandal and other corporate and public sector governance problems for Canadian regulators.

Since starting the Global Advantage Consulting Group in 2002, Mr. Watters has assisted a variety of public and private clients. His areas of specialization and talent are in creating visual models for policy development and decision making, and business models for managing research and technology networks. He has also been an adjunct professor at the Telfer School of Management at the University of Ottawa, teaching International Negotiation.

Mr. Watters holds a Bachelor’s degree in Economics from Queen’s University as well as a Law degree in corporate, commercial and tax law from the Faculty of Law at Queen’s University.

So, an economist, lawyer, and government bureaucrat is going to analyze the budget with regard to science and R&D? If I had to guess, I’d say he’s going to focus on ‘innovation’, which I’m decoding as a synonym for ‘business/commercialization’.

Getting back to the budget, it’s pretty middling where science is concerned, with more than one ‘re-announcement’. As the pundits have noted, the focus is on deficit reduction and propping up the economy.

ETA April 20, 2022: There’s been a keynote speaker change, from an April 20, 2022 CSPC announcement (received via email),

… keynote presentation by Omer Kaya, CEO of Global Advantage Consulting Group. Unfortunately, due to unexpected circumstances, Dave Watters will not be presenting at this session as expected before.

Singapore contributes to art/science gallery on the International Space Station (ISS)

A March 15, 2022 Nanyang Technological University press release (also on EurekAlert) announces Singapore’s contribution to an art gallery in space,

Two Singapore-designed artefacts are now orbiting around the Earth on the International Space Station (ISS), as part of Moon Gallery.

These artworks were successfully launched into space recently as part of a test flight by the Moon Gallery and will come back to Earth after 10 months.

Currently consisting of 64 artworks made by artists from all around the world, the Moon Gallery will eventually comprise 100 artworks, which will then be placed on the Moon by 2025. Of the 64 art pieces on the ISS, only two are Singaporean artworks.

Here’s Singapore’s contribution,

Caption: NTU [Nanyang Technological University] Singapore Assistant Professor Matteo Seita (left), who is holding the Cube of Interaction, and Ms Lakshmi Mohanbabu (right), who designed both cubes. The Structure & Reflectance cube in the foreground was 3D printed at NTU Singapore. Credit: NTU Singapore

A December 8, 2021 news item on phys.org describes the project,

The Moon Gallery Foundation is developing an art gallery to be sent to the Moon, contributing to the establishment of the first lunar outpost and permanent museum on Earth’s only natural satellite. The international initiative will see one hundred artworks from artists around the world integrated into a 10 cm x 10 cm x 1 cm grid tray, which will fly to the Moon by 2025. The Moon Gallery aims to expand humanity’s cultural dialog beyond Earth. The gallery will meet the cosmos for the first time in low Earth orbit in 2022 in a test flight.

The test flight is in collaboration with Nanoracks, a private in-space service provider. The gallery is set to fly to the International Space Station (ISS) aboard the NG-17 rocket as part of a Northrop Grumman Cygnus resupply mission in February of 2022. The art projects featured in the gallery will reach the final frontier of human habitat in space, and mark the historical meeting point of the Moon Gallery and the cosmos. Reaching low Earth orbit on the way to the Moon is a pivotal first step in extending our cultural dialog to space.

On its return flight, the Moon Gallery will become a part of the NanoLab technical payload, a module for space research experiments. The character of the gallery will offer a diverse range of materials and behaviors for camera observations and performance tests with NanoLab.

In return, Moon Gallery artists will get a chance to learn about the performance of their artworks in space. The result of these observations will serve as a solid basis for the subsequent Moon Gallery missions and a source of a valuable learning experience for future space artists. The test flight to the ISS is a precursor mission, contributing to the understanding of future possibilities for art in space and strengthening collaboration between the art and space sectors.

A December 8, 2021 NTU press release on EurekAlert, which originated the news item, provides more detail about the art from Singapore,

STRUCTURE & REFLECTANCE CUBE

Our every perception, analysis, and thought reflect the influences from our surroundings and the Universe in a world of collaboration, communication and interaction, making it possible to explore the real, the imagined and the unknown. The ‘Structure and Reflectance’ cube, a marriage of Art and Technology, is one of the hundred artworks selected by the Moon Gallery, with a unifying message of an integrated world, making it a quintessential signature of humankind on the Moon.

Ms Lakshmi Mohanbabu, a Singaporean architect and designer, is the first and only local artist to have her artwork selected for the Moon Gallery.

The early-stage prototyping and design iterations of the ‘Structure and Reflectance’ cube were performed with Additive Manufacturing, otherwise known as 3D printing, at Nanyang Technological University, Singapore’s (NTU Singapore) Singapore Centre for 3D Printing (SC3DP). This was part of a collaborative project supported by the National Additive Manufacturing Innovation Cluster (NAMIC), a national programme office which accelerates the adoption and commercialisation of additive manufacturing technologies. Previously, the NTU Singapore team at SC3DP produced a few iterations of the Moon-Cube using metal 3D printing in various materials, such as Inconel and stainless steel, to evaluate the best-suited material.

The newest iteration of the cube comprises crystals—ingrained in the cube via additive manufacturing technology—revealed to the naked eye by the microscopic differences in their surface roughness, which reflect light along different directions.

“Additive Manufacturing is suitable for enabling this level of control over the crystal structure of solids. More specifically, the work was created using ‘laser powder bed fusion technology’, a metal additive manufacturing process which allows us to control the surface roughness by varying the laser parameters,” said Dr Matteo Seita, Nanyang Assistant Professor at NTU Singapore and the Principal Investigator overseeing the project for the current cube design.

Dr Seita shared the meaning behind the materials used, “Like people, materials have a complex ‘structure’ resulting from their history—the sequence of processes that have shaped their constituent parts—which underpins their differences. Masked by an exterior façade, this structure often reveals little of the underlying quality in materials or people. The cube is a material representation of a human’s complex structure embodied in a block of metal consisting of two crystals with distinct reflectivity and complementary shape.”

Ms Lakshmi added, “The optical contrast on the cube surface from the crystals generates an intricate geometry which signifies the duality of man: the complexity of hidden thought and expressed emotion. This duality is reflected by the surface of the Moon where one side remains in plain sight, while the other has remained hidden to humankind for centuries; until space travel finally allowed humanity to gaze upon it. The bright portion of the visible side of the Moon is dependent on the Moon’s position relative to the Earth and the Sun. Thus, what we see is a function of our viewpoint.”

The hidden structures of materials, people, and the Moon are visualised in this cube as reflections of light through art and science. The Structure & Reflectance cube expresses the concept of humanity’s duality, represented by two crystals with different reflectance, which appears to the observer as a function of their perspective.

Dr Ho Chaw Sing, Co-Founder and Managing Director of NAMIC, said, “Space is humanity’s next frontier. As the only Singaporean – among a select few from the global community – to be chosen, Lakshmi presents a unique perspective through her 3D-printed cube’s fusion of art and technology. We are proud to have played a small role supporting her in this ‘moon-shot’ initiative.”

Lakshmi views each face of the artwork as a portrayal of humanity’s quest to discover the secrets of the Universe; fused into a single cube, the faces embody the unity of humankind, which transcends our differences in culture, religion, and social status.

The first cube face, the Primary, is divided into two triangles and depicts the two faces of the Moon, one visible to us from the Earth and the other hidden from our view.

The second cube face, the Windmill, has two spiralling windmill forms, one clockwise and the other counter-clockwise, representing our existence, energy, and time.

The third cube face, the Dromenon, is a labyrinth form of nested squares, which represents the layers that we—as space explorers—are unravelling to discover the enigma of the Universe. 

The fourth cube face, the Nautilus, reflects the spiralling form of our DNA that makes each of us unique, a shape reflected in the form of our galaxy.

Not having heard of the Moon Gallery or the Moon Gallery Foundation, I did a little research. There’s a LinkedIn profile for the Moon Gallery Foundation (both the foundation and the gallery are located in Holland [Netherlands]),

Moon Gallery is where art and space meet. We aim to set up the first permanent museum on the Moon and develop a culture for future interplanetary society.

Moon Gallery will launch 100 artefacts to the Moon within the compact format of a 10 x 10 x 1 cm plate on a lunar lander’s exterior panelling no later than 2025. We suggest bringing this collection of ideas as the seeds of a new culture. We believe that culture makes the distinction between mere survival and life. Moon Gallery is a symbolic gesture that has a real influence – a way to reboot culture and rethink our values for better living on planet Earth.

The Moon Gallery has its own website, where I found more information about events, artists, and partners such as Nanoracks,

Nanoracks is dedicated to using our unique expertise to solve key problems both in space and on Earth – all while lowering the barriers to entry of space exploration. Nanoracks’ main office is in Houston, Texas. The business development office is in Washington, D.C., and additional offices are located in Abu Dhabi, United Arab Emirates (UAE) and Turin, Italy. Nanoracks provides tools, hardware and services that allow other companies, organizations and governments to conduct research and other projects in space. Some of Nanoracks’ customers include the Student Spaceflight Experiments Program (SSEP), the European Space Agency (ESA), the German Space Agency (DLR), NASA, Planet Labs, Space Florida, Virgin Galactic, Adidas, Aerospace Corporation, the National Reconnaissance Office (NRO), the UAE Space Agency, the Mohammed bin Rashid Space Centre (MBRSC), and the Beijing Institute of Technology.

You can find the Nanoracks website here.