
AI & creativity events for August and September 2022 (mostly)

The information about these events and papers comes courtesy of the Metacreation Lab for Creative AI (artificial intelligence) at Simon Fraser University and, as usual for the lab, the emphasis is on music.

Music + AI Reading Group @ Mila x Vector Institute

Philippe Pasquier, Metacreation Lab director and professor, is giving a presentation on Friday, August 12, 2022 at 11 am PDT (2 pm EDT). Here’s more from the August 10, 2022 Metacreation Lab announcement (received via email),

Metacreation Lab director Philippe Pasquier and PhD researcher Jeff Enns will be presenting next week [tomorrow, on August 12, 2022] at the Music + AI Reading Group hosted by Mila. The presentation will be available as a Zoom meeting.

Mila is a community of more than 900 researchers specializing in machine learning and dedicated to scientific excellence and innovation. The institute is recognized for its expertise and significant contributions in areas such as modelling language, machine translation, object recognition and generative models.

I believe it’s also possible to view the presentation from the “Music + AI Reading Group at MILA: presentation by Dr. Philippe Pasquier” webpage on the Simon Fraser University website.

For anyone curious about Mila – Québec Artificial Intelligence Institute (based in Montréal) and the Vector Institute for Artificial Intelligence (based in Toronto), both are part of the Pan-Canadian Artificial Intelligence Strategy (a Canadian federal government funding initiative).

Getting back to the Music + AI Reading Group @ Mila x Vector Institute, there is an invitation to join the group, which meets every Friday at 2 pm ET, from the Google group page,

Feb 24, 2022, to Community Announcements: 🎹🧠🚨 Online Music + AI Reading Group @ Mila x Vector Institute 🎹🧠🚨

Dear members of the ISMIR [International Society for Music Information Retrieval] Community,

Together with fellow researchers at Mila (the Québec AI Institute) in Montréal, canada [sic], we have the pleasure of inviting you to join the Music + AI Reading Group @ Mila x Vector Institute. Our reading group gathers every Friday at 2pm Eastern Time. Our purpose is to build an interdisciplinary forum of researchers, students and professors alike, across industry and academia, working at the intersection of Music and Machine Learning. 

During each meeting, a speaker presents a research paper of their choice during 45’, leaving 15 minutes for questions and discussion. The purpose of the reading group is to :
– Gather a group of Music+AI/HCI [human-computer interface]/others people to share their research, build collaborations, and meet peer students. We are not constrained to any specific research directions, and all people are welcome to contribute.
– People share research ideas and brainstorm with others.
– Researchers not actively working on music-related topics but interested in the field can join and keep up with the latest research in the area, sharing their thoughts and bringing in their own backgrounds.

Our topics of interest cover (beware : the list is not exhaustive !) :
🎹 Music Generation
🧠 Music Understanding
📇 Music Recommendation
🗣  Source Separation and Instrument Recognition
🎛  Acoustics
🗿 Digital Humanities …
🙌  … and more (we are waiting for you :]) !


If you wish to attend one of our upcoming meetings, simply join our Google Group : https://groups.google.com/g/music_reading_group. You will automatically subscribe to our weekly mailing list and be able to contact other members of the group.

Here is the link to our Youtube Channel where you’ll find recordings of our past meetings : https://www.youtube.com/channel/UCdrzCFRsIFGw2fiItAk5_Og.
Here are general information about the reading group (presentation slides) : https://docs.google.com/presentation/d/1zkqooIksXDuD4rI2wVXiXZQmXXiAedtsAqcicgiNYLY/edit?usp=sharing.

Finally, if you would like to contribute and give a talk about your own research, feel free to fill in the following spreadhseet in the slot of your choice ! —> https://docs.google.com/spreadsheets/d/1skb83P8I30XHmjnmyEbPAboy3Lrtavt_jHrD-9Q5U44/edit?usp=sharing

Bravo to the two student organizers for putting this together!

Calliope Composition Environment for music makers

From the August 10, 2022 Metacreation Lab announcement,

Calling all music makers! We’d like to share some exciting news about one of the latest music creation tools from its creators.

Calliope is an interactive environment based on MMM for symbolic music generation in computer-assisted composition. Using this environment, the user can generate or regenerate symbolic music from a “seed” MIDI file by using a practical and easy-to-use graphical user interface (GUI). Through MIDI streaming, the system can interface with your favourite DAW (Digital Audio Workstation) such as Ableton Live, allowing creators to combine the possibilities of generative composition with their preferred virtual instrument and sound design environments.

The project has now entered an open beta-testing phase and is inviting music creators to try the compositional system on their own! Head to the Metacreation website to learn more and register for the beta testing.

Learn More About Calliope Here

You can also listen to a Calliope piece “the synthrider,” an Italo-disco fantasy of a machine, by Philippe Pasquier and Renaud Bougueng Tchemeube for the 2022 AI Song Contest.
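For the technically curious, my friend, here’s a minimal, hypothetical sketch (in Python, using the mido library) of the general workflow the announcement describes: load a “seed” MIDI file, vary it somehow, and stream the result to whatever DAW is listening on a MIDI port. This is not Calliope’s code or the MMM model; the random transposition, file path, and port name below are all placeholders.

```python
# A toy stand-in for the workflow described above: seed MIDI in, "generated"
# MIDI out, streamed in real time to whatever DAW is listening on a MIDI port.
# This is NOT Calliope or the MMM model; the random transposition below is
# only a placeholder for a real generation step. Path and port are hypothetical.
import random
import mido

SEED_PATH = "seed.mid"   # hypothetical seed file
PORT_NAME = None         # None = system default output; set to a virtual port your DAW watches

def regenerate(midi_file, semitone_range=3):
    """Toy 'regeneration': copy the seed with each track transposed by a random
    interval. A real system would sample new material from a trained model."""
    out = mido.MidiFile(ticks_per_beat=midi_file.ticks_per_beat)
    for track in midi_file.tracks:
        shift = random.randint(-semitone_range, semitone_range)
        new_track = mido.MidiTrack()
        for msg in track:
            if msg.type in ("note_on", "note_off"):
                msg = msg.copy(note=max(0, min(127, msg.note + shift)))
            new_track.append(msg)
        out.tracks.append(new_track)
    return out

seed = mido.MidiFile(SEED_PATH)
generated = regenerate(seed)

# Stream the result to the DAW; .play() paces the messages in real time.
with mido.open_output(PORT_NAME) as port:
    for msg in generated.play():
        port.send(msg)
```

Of course, the announcement says Calliope handles all of this through its GUI and MIDI streaming, so no code is needed to try the beta.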

3rd Conference on AI Music Creativity (AIMC 2022)

This is an online conference and it’s free, but you do have to register. From the August 10, 2022 Metacreation Lab announcement,

Registration has opened  for the 3rd Conference on AI Music Creativity (AIMC 2022), which will be held 13-15 September, 2022. The conference features 22 accepted papers, 14 music works, and 2 workshops. Registered participants will get full access to the scientific and artistic program, as well as conference workshops and virtual social events. 

The full conference program is now available online

Registration, free but mandatory, is available here:

Free Registration for AIMC 2022 

The conference theme is “The Sound of Future Past — Colliding AI with Music Tradition” and I noticed that a number of the organizers are based in Japan. Often, the organizers’ home country gets some extra time in the spotlight, which is what makes these international conferences so interesting and valuable.

Autolume Live

This concerns generative adversarial networks (GANs) and a paper proposing “… Autolume-Live, the first GAN-based live VJing-system for controllable video generation.”

Here’s more from the August 10, 2022 Metacreation Lab announcement,

Jonas Kraasch & Philippe Pasquier recently presented their latest work on the Autolume system at xCoAx, the 10th annual Conference on Computation, Communication, Aesthetics & X. Their paper is an in-depth exploration of the ways that creative artificial intelligence is increasingly used to generate static and animated visuals.

While there are a host of systems to generate images, videos and music videos, there is a lack of real-time video synthesisers for live music performances. To address this gap, Kraasch and Pasquier propose Autolume-Live, the first GAN-based live VJing-system for controllable video generation.

Autolume Live on xCoAx proceedings  

As these things go, the paper is readable even by nonexperts (assuming you have some tolerance for being out of your depth from time to time). Here’s an example of the text, mentioning an installation (in Kelowna, BC), from the paper, Autolume-Live: Turning GANs into a Live VJing tool,

Due to the 2020-2022 situation surrounding COVID-19, we were unable to use our system to accompany live performances. We have used different iterations of Autolume-Live to create two installations. We recorded some curated sessions and displayed them at the Distopya sound art festival in Istanbul 2021 (Dystopia Sound and Art Festival 2021) and Light-Up Kelowna 2022 (ARTSCO 2022) [emphasis mine]. In both iterations, we let the audio mapping automatically generate the video without using any of the additional image manipulations. These installations show that the system on its own is already able to generate interesting and responsive visuals for a musical piece.

For the installation at the Distopya sound art festival we trained a StyleGAN2(-ada) model on abstract paintings and rendered a video using the described Latent Space Traversal mapping. For this particular piece we ran a super-resolution model on the final video as the original video output was in 512×512 and the wanted resolution was 4k. For our piece at Light-Up Kelowna [emphasis mine] we ran Autolume-Live with the Latent Space Interpolation mapping. The display included three urban screens, which allowed us to showcase three renders at the same time. We composed a video triptych using a dataset of figure drawings, a dataset of medical sketches and, to tie the two videos together, a model trained on a mixture of both datasets.
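For the technically inclined, here’s a rough idea of what “letting the audio mapping automatically generate the video” can mean. This is not the Autolume-Live code, just a minimal sketch that assumes a pretrained GAN generator (e.g., StyleGAN2) has been loaded elsewhere: the music’s loudness envelope sets how far each step moves through the generator’s latent space, so quiet passages drift and loud passages jump.

```python
# Not the Autolume-Live code: a bare-bones sketch of audio-driven latent space
# traversal. Louder moments in the track push the latent vector further, so the
# visuals move with the music. Assumes a pretrained GAN generator (e.g. StyleGAN2)
# is loaded elsewhere as `generator(z) -> image`; that call is a placeholder.
import numpy as np
import librosa

AUDIO_PATH = "track.wav"   # hypothetical input track
LATENT_DIM = 512
FPS = 24

# 1. Loudness envelope, resampled to one value per video frame.
y, sr = librosa.load(AUDIO_PATH, sr=None, mono=True)
rms = librosa.feature.rms(y=y)[0]
n_frames = int(len(y) / sr * FPS)
env = np.interp(np.linspace(0, len(rms) - 1, n_frames), np.arange(len(rms)), rms)
env = env / (env.max() + 1e-9)

# 2. Walk through latent space: step size scales with loudness.
rng = np.random.default_rng(0)
z = rng.standard_normal(LATENT_DIM)
direction = rng.standard_normal(LATENT_DIM)
latents = []
for loudness in env:
    direction = 0.9 * direction + 0.1 * rng.standard_normal(LATENT_DIM)  # slowly drifting heading
    z = z + (0.05 + 0.5 * loudness) * direction / np.linalg.norm(direction)
    latents.append(z.copy())

# 3. Render each latent with the (assumed) generator and write frames to disk.
# for i, z in enumerate(latents):
#     frame = generator(z)                     # placeholder: pretrained model, not defined here
#     save_frame(f"frame_{i:05d}.png", frame)  # placeholder helper
```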

I found some additional information about the installation in Kelowna (from a February 7, 2022 article in The Daily Courier),

The artwork is called ‘Autolume Acedia’.

“(It) is a hallucinatory meditation on the ancient emotion called acedia. Acedia describes a mixture of contemplative apathy, nervous nostalgia, and paralyzed angst,” the release states. “Greek monks first described this emotion two millennia ago, and it captures the paradoxical state of being simultaneously bored and anxious.”

Algorithms created the set-to-music artwork but a team of humans associated with Simon Fraser University, including Jonas Kraasch and Philippe Pasquier, was behind the project.

These are among the artistic images generated by a form of artificial intelligence now showing nightly on the exterior of the Rotary Centre for the Arts in downtown Kelowna. [downloaded from https://www.kelownadailycourier.ca/news/article_6f3cefea-886c-11ec-b239-db72e804c7d6.html]

You can find the videos used in the installation and more information on the Metacreation Lab’s Autolume Acedia webpage.

Movement and the Metacreation Lab

Here’s a walk down memory lane: Tom Calvert, a professor at Simon Fraser University (SFU) who died on September 28, 2021, laid the groundwork for SFU’s School of Interactive Arts & Technology (SIAT) and, in particular, for studies in movement. From SFU’s In memory of Tom Calvert webpage,

As a researcher, Tom was most interested in computer-based tools for user interaction with multimedia systems, human figure animation, software for dance, and human-computer interaction. He made significant contributions to research in these areas resulting in the Life Forms system for human figure animation and the DanceForms system for dance choreography. These are now developed and marketed by Credo Interactive Inc., a software company of which he was CEO.

While the Metacreation Lab is largely focused on music, other fields of creativity are also studied, from the August 10, 2022 Metacreation Lab announcement,

MITACS Accelerate award – partnership with Kinetyx

We are excited to announce that the Metacreation Lab researchers will be expanding their work on motion capture and movement data thanks to a new MITACS Accelerate research award. 

The project will focus on ​​body pose estimation using Motion Capture data acquisition through a partnership with Kinetyx, a Calgary-based innovative technology firm that develops in-shoe sensor-based solutions for a broad range of sports and performance applications.

Movement Database – MoDa

On the subject of motion data and its many uses in conjunction with machine learning and AI, we invite you to check out the extensive Movement Database (MoDa), led by transdisciplinary artist and scholar Shannon Cyukendall, and AI Researcher Omid Alemi. 

Spanning a wide range of categories such as dance, affect-expressive movements, gestures, eye movements, and more, this database offers a wealth of experiments and captured data available in a variety of formats.

Explore the MoDa Database

MITACS (originally a mathematics-focused network in the federal government’s Networks of Centres of Excellence program) is now a funding agency for innovation; most of the funds it distributes come from the federal government.

As for the Calgary-based company (in the province of Alberta for those unfamiliar with Canadian geography), here they are in their own words (from the Kinetyx About webpage),

Kinetyx® is a diverse group of talented engineers, designers, scientists, biomechanists, communicators, and creators, along with an energy trader, and a medical doctor that all bring a unique perspective to our team. A love of movement and the science within is the norm for the team, and we’re encouraged to put our sensory insoles to good use. We work closely together to make movement mean something.

We’re working towards a future where movement is imperceptibly quantified and indispensably communicated with insights that inspire action. We’re developing sensory insoles that collect high-fidelity data where the foot and ground intersect. Capturing laboratory quality data, out in the real world, unlocking entirely new ways to train, study, compete, and play. The insights we provide will unlock unparalleled performance, increase athletic longevity, and provide a clear path to return from injury. We transform lives by empowering our growing community to remain moved.

We believe that high quality data is essential for us to have a meaningful place in the Movement Metaverse [1]. Our team of engineers, sport scientists, and developers work incredibly hard to ensure that our insoles and the insights we gather from them will meet or exceed customer expectations. The forces that are created and experienced while standing, walking, running, and jumping are inferred by many wearables, but our sensory insoles allow us to measure, in real-time, what’s happening at the foot-ground intersection. Measurements of force and power in addition to other traditional gait metrics, will provide a clear picture of a part of the Kinesome [2] that has been inaccessible for too long. Our user interface will distill enormous amounts of data into meaningful insights that will lead to positive behavioral change. 

[1] The Movement Metaverse is the collection of ever-evolving immersive experiences that seamlessly span both the physical and virtual worlds with unprecedented interoperability.

[2] Kinesome is the dynamic characterization and quantification encoded in an individual’s movement and activity. Broadly; an individual’s unique and dynamic movement profile. View the kinesome nft. [Note: I was not able to successfully open the link as of August 11, 2022.]

“… make movement mean something … .” Really?

The reference to “… energy trader …” had me puzzled but an August 11, 2022 Google search at 11:53 am PST unearthed this,

An energy trader is a finance professional who manages the sales of valuable energy resources like gas, oil, or petroleum. An energy trader is expected to handle energy production and financial matters in such a fast-paced workplace. (May 16, 2022)

Perhaps a new meaning for the term is emerging?

AI and visual art show in Vancouver (Canada)

The Vancouver Art Gallery’s (VAG) latest exhibition, “The Imitation Game: Visual Culture in the Age of Artificial Intelligence” is running March 5, 2022 – October 23, 2022. Should you be interested in an exhaustive examination of the exhibit and more, I have a two-part commentary: Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (1 of 2): The Objects and Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (2 of 2): Meditations.

Enjoy the show and/or the commentary, as well as any of the other events and opportunities listed in this post.

Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (2 of 2): Meditations

Dear friend,

I thought it best to break this up a bit. There are a couple of ‘objects’ still to be discussed but this is mostly the commentary part of this letter to you. (Here’s a link for anyone who stumbled here but missed Part 1.)

Ethics, the natural world, social justice, eeek, and AI

Dorothy Woodend, in her March 10, 2022 review for The Tyee, suggests some ethical issues in her critique of the ‘bee/AI collaboration’ and she’s not the only one with concerns. UNESCO (United Nations Educational, Scientific and Cultural Organization) has produced global recommendations for ethical AI (see my March 18, 2022 posting). More recently, there’s “Racist and sexist robots have flawed AI,” a June 23, 2022 posting, where researchers prepared a conference presentation and paper about deeply flawed AI still being used in robots.

Ultimately, the focus is always on humans and Woodend has extended the ethical AI conversation to include insects and the natural world. In short, something less human-centric.

My friend, this reference to the de Young exhibit may seem off topic but I promise it isn’t, in more ways than one. The de Young Museum in San Francisco also held an AI and art show (February 22, 2020 – June 27, 2021) called “Uncanny Valley: Being Human in the Age of AI”; from the exhibitions page,

In today’s AI-driven world, increasingly organized and shaped by algorithms that track, collect, and evaluate our data, the question of what it means to be human [emphasis mine] has shifted. Uncanny Valley is the first major exhibition to unpack this question through a lens of contemporary art and propose new ways of thinking about intelligence, nature, and artifice. [emphasis mine]

Courtesy: de Young Museum [downloaded from https://deyoung.famsf.org/exhibitions/uncanny-valley]

As you can see, it hinted (perhaps?) at an attempt to see beyond human-centric AI. (BTW, I featured this ‘Uncanny Valley’ show in my February 25, 2020 posting where I mentioned Stephanie Dinkins [featured below] and other artists.)

Social justice

While the VAG show doesn’t see much past humans and AI, it does touch on social justice. In particular there’s Pod 15 featuring the Algorithmic Justice League (AJL). The group “combine[s] art and research to illuminate the social implications and harms of AI” as per their website’s homepage.

In Pod 9, Stephanie Dinkins’ video work with a robot (Bina48), which was also part of the de Young Museum ‘Uncanny Valley’ show, addresses some of the same issues.

Still of Stephanie Dinkins, “Conversations with Bina48,” 2014–present. Courtesy of the artist [downloaded from https://deyoung.famsf.org/stephanie-dinkins-conversations-bina48-0]

From the de Young Museum’s Stephanie Dinkins “Conversations with Bina48” April 23, 2020 article by Janna Keegan (Dinkins submitted the same work you see at the VAG show), Note: Links have been removed,

Transdisciplinary artist and educator Stephanie Dinkins is concerned with fostering AI literacy. The central thesis of her social practice is that AI, the internet, and other data-based technologies disproportionately impact people of color, LGBTQ+ people, women, and disabled and economically disadvantaged communities—groups rarely given a voice in tech’s creation. Dinkins strives to forge a more equitable techno-future by generating AI that includes the voices of multiple constituencies …

The artist’s ongoing Conversations with Bina48 takes the form of a series of interactions with the social robot Bina48 (Breakthrough Intelligence via Neural Architecture, 48 exaflops per second). The machine is the brainchild of Martine Rothblatt, an entrepreneur in the field of biopharmaceuticals who, with her wife, Bina, cofounded the Terasem Movement, an organization that seeks to extend human life through cybernetic means. In 2007 Martine commissioned Hanson Robotics to create a robot whose appearance and consciousness simulate Bina’s. The robot was released in 2010, and Dinkins began her work with it in 2014.

Part psychoanalytical discourse, part Turing test, Conversations with Bina48 also participates in a larger dialogue regarding bias and representation in technology. Although Bina Rothblatt is a Black woman, Bina48 was not programmed with an understanding of its Black female identity or with knowledge of Black history. Dinkins’s work situates this omission amid the larger tech industry’s lack of diversity, drawing attention to the problems that arise when a roughly homogenous population creates technologies deployed globally. When this occurs, writes art critic Tess Thackara, “the unconscious biases of white developers proliferate on the internet, mapping our social structures and behaviors onto code and repeating imbalances and injustices that exist in the real world.” One of the most appalling and public of these instances occurred when a Google Photos image-recognition algorithm mislabeled the faces of Black people as “gorillas.”

Eeek

As you go through the ‘imitation game,’ you will find a pod with a screen showing your movements through the rooms in real time. The installation is called “Creepers” (2021-22). The student team from Vancouver’s Centre for Digital Media (CDM) describes their project this way, from the CDM’s AI-driven Installation Piece for the Vancouver Art Gallery webpage,

Project Description

Kaleidoscope [team name] is designing an installation piece that harnesses AI to collect and visualize exhibit visitor behaviours, and interactions with art, in an impactful and thought-provoking way.

There’s no warning that you’re being tracked and you can see they’ve used facial recognition software to track your movements through the show. It’s claimed on the pod’s signage that they are deleting the data once you’ve left.

‘Creepers’ is an interesting approach to the ethics of AI. The name suggests that even the student designers were aware it was problematic.

For the curious, there’s a description of the other VAG ‘imitation game’ installations provided by CDM students on the ‘Master of Digital Media Students Develop Revolutionary Installations for Vancouver Art Gallery AI Exhibition‘ webpage.

In recovery from an existential crisis (meditations)

There’s something greatly ambitious about “The Imitation Game: Visual Culture in the Age of Artificial Intelligence” and walking up the VAG’s grand staircase affirms that ambition. Bravo to the two curators, Grenville and Entis, for an exhibition that presents a survey (or overview) of artificial intelligence and its use in and impact on creative visual culture.

I’ve already enthused over the history (specifically Turing, Lovelace, Ovid), admitted to being mesmerized by Scott Eaton’s sculpture/AI videos, and confessed to a fascination (and mild repulsion) regarding Oxman’s honeycombs.

It’s hard to remember all of the ‘objects’ as the curators have offered a jumble of work, almost all of them on screens. Already noted, there’s Norbert Wiener’s The Moth (1949) and there are also a number of other computer-based artworks from the 1960s and 1970s. Plus, you’ll find works utilizing a GAN (generative adversarial network), an AI agent that is explained in the exhibit.

It’s worth going more than once to the show as there is so much to experience.

Why did they do that?

Dear friend, I’ve already commented on the poor flow through the show. It’s hard to tell whether the curators intended the experience to be disorienting, but it is, to the point of chaos, especially when the exhibition is crowded.

I’ve seen Grenville’s shows before. In particular there was “MashUp: The Birth of Modern Culture, a massive survey documenting the emergence of a mode of creativity that materialized in the late 1800s and has grown to become the dominant model of cultural production in the 21st century” and there was “KRAZY! The Delirious World of Anime + Manga + Video Games + Art.” As you can see from the description, he pulls together disparate works and ideas into a show for you to ‘make sense’ of them.

One of the differences between those shows and “The Imitation Game: …” is that most of us have some familiarity, whether we like it or not, with modern art/culture and anime/manga/etc. and can try to ‘make sense’ of it.

By contrast, artificial intelligence (which even experts have difficulty defining) occupies an entirely different set of categories; all of them associated with science/technology. This makes for a different kind of show so the curators cannot rely on the audience’s understanding of basics. It’s effectively an art/sci or art/tech show and, I believe, the first of its kind at the Vancouver Art Gallery. Unfortunately, the curators don’t seem to have changed their approach to accommodate that difference.

AI is also at the centre of a current panic over job loss, loss of personal agency, automated racism and sexism, etc. which makes the experience of viewing the show a little tense. In this context, their decision to commission and use ‘Creepers’ seems odd.

Where were Ai-Da and DALL-E 2 and the others?

Oh friend, I was hoping for a robot. Those roomba paintbots didn’t do much for me. All they did was lie there on the floor.

To be blunt, I wanted some fun and perhaps a bit of wonder and maybe a little vitality. I wasn’t necessarily expecting Ai-Da, an artistic robot, but something three-dimensional and fun in this very flat, screen-oriented show would have been nice.

Ai-Da was at the Glastonbury Festival in the UK from June 23-26, 2022. Here’s Ai-Da and her Billie Eilish (one of the Glastonbury 2022 headliners) portrait. [downloaded from https://www.ai-darobot.com/exhibition]

Ai-Da was first featured here in a December 17, 2021 posting about the poetry she wrote and performed in honour of the 700th anniversary of poet Dante Alighieri’s death.

Named in honour of Ada Lovelace, Ai-Da visited the 2022 Venice Biennale as Leah Henrickson and Simone Natale describe in their May 12, 2022 article for Fast Company (Note: Links have been removed),

Ai-Da sits behind a desk, paintbrush in hand. She looks up at the person posing for her, and then back down as she dabs another blob of paint onto the canvas. A lifelike portrait is taking shape. If you didn’t know a robot produced it, this portrait could pass as the work of a human artist.

Ai-Da is touted as the “first robot to paint like an artist,” and an exhibition of her work, called Leaping into the Metaverse, opened at the Venice Biennale.

Ai-Da produces portraits of sitting subjects using a robotic hand attached to her lifelike feminine figure. She’s also able to talk, giving detailed answers to questions about her artistic process and attitudes toward technology. She even gave a TEDx talk about “The Intersection of Art and AI” in Oxford a few years ago. While the words she speaks are programmed, Ai-Da’s creators have also been experimenting with having her write and perform her own poetry.

She has her own website.

If not Ai-Da, what about Dall-E-2? Aaron Hertzmann’s June 20, 2022 commentary, “Give this AI a few words of description and it produces a stunning image – but is it art?” investigates for Salon (Note: Links have been removed),

DALL-E 2 is a new neural network [AI] algorithm that creates a picture from a short phrase or sentence that you provide. The program, which was announced by the artificial intelligence research laboratory OpenAI in April 2022, hasn’t been released to the public. But a small and growing number of people – myself included – have been given access to experiment with it.

As a researcher studying the nexus of technology and art, I was keen to see how well the program worked. After hours of experimentation, it’s clear that DALL-E – while not without shortcomings – is leaps and bounds ahead of existing image generation technology. It raises immediate questions about how these technologies will change how art is made and consumed. It also raises questions about what it means to be creative when DALL-E 2 seems to automate so much of the creative process itself.

A July 4, 2022 article “DALL-E, Make Me Another Picasso, Please” by Laura Lane for The New Yorker has a rebuttal to Ada Lovelace’s contention that creativity is uniquely human (Note: A link has been removed),

“There was this belief that creativity is this deeply special, only-human thing,” Sam Altman, OpenAI’s C.E.O., explained the other day. Maybe not so true anymore, he said. Altman, who wore a gray sweater and had tousled brown hair, was videoconferencing from the company’s headquarters, in San Francisco. DALL-E is still in a testing phase. So far, OpenAI has granted access to a select group of people—researchers, artists, developers—who have used it to produce a wide array of images: photorealistic animals, bizarre mashups, punny collages. Asked by a user to generate “a plate of various alien fruits from another planet photograph,” DALL-E returned something kind of like rambutans. “The rest of mona lisa” is, according to DALL-E, mostly just one big cliff. Altman described DALL-E as “an extension of your own creativity.”

There are other AI artists. In my August 16, 2019 posting, I had this,

AI artists first hit my radar in August 2018 when Christie’s Auction House advertised an art auction of a ‘painting’ by an algorithm (artificial intelligence). There’s more in my August 31, 2018 posting but, briefly, a French art collective, Obvious, submitted a painting, “Portrait of Edmond de Belamy,” that was created by an artificial intelligence agent and estimated to sell for $7,000 – $10,000. They weren’t even close. According to Ian Bogost’s March 6, 2019 article for The Atlantic, the painting sold for $432,500 in October 2018.

That posting also included AI artist, AICAN. Both artist-AI agents (Obvious and AICAN) are based on GANs (generative adversarial networks) for learning and eventual output. Both artist-AI agents work independently or with human collaborators on art works that are available for purchase.

As might be expected not everyone is excited about AI and visual art. Sonja Drimmer, Professor of Medieval Art, University of Massachusetts at Amherst, provides another perspective on AI, visual art, and, her specialty, art history in her November 1, 2021 essay for The Conversation (Note: Links have been removed),

Over the past year alone, I’ve come across articles highlighting how artificial intelligence recovered a “secret” painting of a “lost lover” of Italian painter Modigliani, “brought to life” a “hidden Picasso nude”, “resurrected” Austrian painter Gustav Klimt’s destroyed works and “restored” portions of Rembrandt’s 1642 painting “The Night Watch.” The list goes on.

As an art historian, I’ve become increasingly concerned about the coverage and circulation of these projects.

They have not, in actuality, revealed one secret or solved a single mystery.

What they have done is generate feel-good stories about AI.

Take the reports about the Modigliani and Picasso paintings.

These were projects executed by the same company, Oxia Palus, which was founded not by art historians but by doctoral students in machine learning.

In both cases, Oxia Palus relied upon traditional X-rays, X-ray fluorescence and infrared imaging that had already been carried out and published years prior – work that had revealed preliminary paintings beneath the visible layer on the artists’ canvases.

The company edited these X-rays and reconstituted them as new works of art by applying a technique called “neural style transfer.” This is a sophisticated-sounding term for a program that breaks works of art down into extremely small units, extrapolates a style from them and then promises to recreate images of other content in that same style.
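Since “neural style transfer” comes up here, my friend, you might be curious what it looks like in practice. Below is a minimal, hypothetical sketch of the classic approach (Gatys et al.), not Oxia Palus’s actual pipeline: features from a pretrained VGG network stand in for those “extremely small units,” Gram matrices summarize the style, and a copy of the content image is nudged until its features match both. The image file names are placeholders.

```python
# A minimal sketch of classic neural style transfer (Gatys et al.), not the
# Oxia Palus pipeline: feature maps from a pretrained VGG19 act as the "small
# units," Gram matrices summarize the style, and a copy of the content image
# is optimized until its features match both. Image paths are hypothetical.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

def load_image(path, size=256):
    tf = transforms.Compose([transforms.Resize((size, size)), transforms.ToTensor()])
    return tf(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

content = load_image("content.jpg")
style = load_image("style.jpg")

vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = {0, 5, 10, 19, 28}   # conv1_1 ... conv5_1
CONTENT_LAYER = 21                  # conv4_2

def features(x):
    feats, h = {}, x
    for i, layer in enumerate(vgg):
        h = layer(h)
        if i in STYLE_LAYERS or i == CONTENT_LAYER:
            feats[i] = h
    return feats

def gram(f):
    # "Style" as correlations between feature channels.
    b, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

style_grams = {i: gram(f).detach() for i, f in features(style).items() if i in STYLE_LAYERS}
content_feat = features(content)[CONTENT_LAYER].detach()

img = content.clone().requires_grad_(True)   # start from the content image
opt = torch.optim.Adam([img], lr=0.02)

for step in range(300):
    feats = features(img)
    loss = F.mse_loss(feats[CONTENT_LAYER], content_feat)
    loss = loss + 1e5 * sum(F.mse_loss(gram(feats[i]), style_grams[i]) for i in STYLE_LAYERS)
    opt.zero_grad()
    loss.backward()
    opt.step()
    img.data.clamp_(0, 1)

transforms.ToPILImage()(img.detach().squeeze(0).cpu()).save("stylized.jpg")
```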

As you can ‘see’ my friend, the topic of AI and visual art is a juicy one. In fact, I have another example in my June 27, 2022 posting, which is titled, “Art appraised by algorithm.” So, Grenville’s and Entis’ decision to focus on AI and its impact on visual culture is quite timely.

Visual culture: seeing into the future

The VAG Imitation Game webpage lists these categories of visual culture “animation, architecture, art, fashion, graphic design, urban design and video games …” as being represented in the show. Movies and visual art, not mentioned in the write-up, are represented, while theatre and other performing arts are neither mentioned nor represented. That’s not a surprise.

In addition to an area of science/technology that’s not well understood even by experts, the curators took on the truly amorphous (and overwhelming) topic of visual culture. Given that even writing this commentary has been a challenge, I imagine pulling the show together was quite the task.

Grenville often grounds his shows in a history of the subject and, this time, it seems especially striking. You’re in a building that is effectively a 19th century construct and in galleries that reflect a 20th century ‘white cube’ aesthetic, while looking for clues into the 21st century future of visual culture employing technology that has its roots in the 19th century and, to some extent, began to flower in the mid-20th century.

Chung’s collaboration is one of the only ‘optimistic’ notes about the future and, as noted earlier, it bears a resemblance to Wiener’s 1949 ‘Moth’.

Overall, it seems we are being cautioned about the future. For example, Oxman’s work seems bleak (bees with no flowers to pollinate and living in an eternal spring). Adding in ‘Creepers’ and surveillance along with issues of bias and social injustice reflects hesitation and concern about what we will see, who sees it, and how it will be represented visually.

Learning about robots, automatons, artificial intelligence, and more

I wish the Vancouver Art Gallery (and Vancouver’s other art galleries) would invest a little more in audience education. A couple of tours during the week, by someone who may or may not know what they’re talking about, do not suffice. The extra material about Stephanie Dinkins and her work (“Conversations with Bina48,” 2014–present) came from the de Young Museum’s website. In my July 26, 2021 commentary on North Vancouver’s Polygon Gallery 2021 show “Interior Infinite,” I found background information for artist Zanele Muholi on the Tate Modern’s website. There is nothing on the VAG website that helps you gain some perspective on the artists’ works.

It seems to me that if the VAG wants to be considered world class, it should conduct itself accordingly, and beefing up its website with background information about its current shows would be a good place to start.

Robots, automata, and artificial intelligence

Prior to 1921, robots were known exclusively as automatons. These days, the word ‘automaton’ (or ‘automata’ in the plural) seems to be used to describe purely mechanical representations of humans from over 100 years ago whereas the word ‘robot’ can be either ‘humanlike’ or purely machine, e.g. a mechanical arm that performs the same function over and over. I have a good February 24, 2017 essay on automatons by Miguel Barral for OpenMind BBVA*, which provides some insight into the matter,

The concept of robot is relatively recent. The idea was introduced in 1921 by the Czech writer Karel Capek in his work R.U.R to designate a machine that performs tasks in place of man. But their predecessors, the automatons (from the Greek automata, or “mechanical device that works by itself”), have been the object of desire and fascination since antiquity. Some of the greatest inventors in history, such as Leonardo Da Vinci, have contributed to our fascination with these fabulous creations:

The Al-Jazari automatons

The earliest examples of known automatons appeared in the Islamic world in the 12th and 13th centuries. In 1206, the Arab polymath Al-Jazari, whose creations were known for their sophistication, described some of his most notable automatons: an automatic wine dispenser, a soap and towels dispenser and an orchestra-automaton that operated by the force of water. This latter invention was meant to liven up parties and banquets with music while floating on a pond, lake or fountain.

As the water flowed, it started a rotating drum with pegs that, in turn, moved levers whose movement produced different sounds and movements. As the pegs responsible for the musical notes could be exchanged for different ones in order to interpret another melody, it is considered one of the first programmable machines in history.

If you’re curious about automata, my friend, I found this Sept. 26, 2016 ABC news radio item about singer Roger Daltrey and his wife Heather’s auction of their collection of 19th century French automata (there’s an embedded video showcasing these extraordinary works of art). For more about automata, robots, and androids, there’s an excellent May 4, 2022 article by James Vincent, ‘A visit to the human factory; How to build the world’s most realistic robot‘ for The Verge; Vincent’s article is about Engineered Arts, the UK-based company that built Ai-Da.

AI is often used interchangeably with ‘robot’ but they aren’t the same. Not all robots have AI integrated into their processes. At its simplest, AI is an algorithm or set of algorithms, which may ‘live’ in a CPU and be effectively invisible, or ‘live’ in or make use of some kind of machine and/or humanlike body. As the experts have noted, artificial intelligence is a slippery concept.

*OpenMind is a non-profit project (see its About us page) run by Banco Bilbao Vizcaya Argentaria (BBVA), a Spanish multinational financial services company, to disseminate information on robotics and so much more.*

You can’t always get what you want

My friend,

I expect many of the show’s shortcomings (as perceived by me) are due to money and/or scheduling issues. For example, Ai-Da was at the Venice Biennale and if there was a choice between the VAG and Biennale, I know where I’d be.

Even with those caveats in mind, it is a bit surprising that there were no examples of wearable technology. For example, Toronto’s Tapestry Opera recently performed R.U.R. A Torrent of Light (based on the word ‘robot’ from Karel Čapek’s play, R.U.R., ‘Rossumovi Univerzální Roboti’); from my May 24, 2022 posting,

I have more about tickets prices, dates, and location later in this post but first, here’s more about the opera and the people who’ve created it from the Tapestry Opera’s ‘R.U.R. A Torrent of Light’ performance webpage,

“This stunning new opera combines dance, beautiful multimedia design, a chamber orchestra including 100 instruments creating a unique electronica-classical sound, and wearable technology [emphasis mine] created with OCAD University’s Social Body Lab, to create an immersive and unforgettable science-fiction experience.”

And, from later in my posting,

“Despite current stereotypes, opera was historically a launchpad for all kinds of applied design technologies. [emphasis mine] Having the opportunity to collaborate with OCAD U faculty is an invigorating way to reconnect to that tradition and foster connections between art, music and design, [emphasis mine]” comments the production’s Director Michael Hidetoshi Mori, who is also Tapestry Opera’s Artistic Director. 

That last quote brings me back to my comment about theatre and performing arts not being part of the show. Of course, the curators couldn’t do it all but a website with my hoped-for background and additional information could have helped to solve the problem.

The absence of the theatrical and performing arts in the VAG’s ‘Imitation Game’ is a bit surprising as the Council of Canadian Academies (CCA) in their third assessment, “Competing in a Global Innovation Economy: The Current State of R&D in Canada” released in 2018 noted this (from my April 12, 2018 posting),

Canada, relative to the world, specializes in subjects generally referred to as the humanities and social sciences (plus health and the environment), and does not specialize as much as others in areas traditionally referred to as the physical sciences and engineering. Specifically, Canada has comparatively high levels of research output in Psychology and Cognitive Sciences, Public Health and Health Services, Philosophy and Theology, Earth and Environmental Sciences, and Visual and Performing Arts. [emphasis mine] It accounts for more than 5% of world research in these fields. Conversely, Canada has lower research output than expected in Chemistry, Physics and Astronomy, Enabling and Strategic Technologies, Engineering, and Mathematics and Statistics. The comparatively low research output in core areas of the natural sciences and engineering is concerning, and could impair the flexibility of Canada’s research base, preventing research institutions and researchers from being able to pivot to tomorrow’s emerging research areas. [p. xix Print; p. 21 PDF]

US-centric

My friend,

I was a little surprised that the show was so centred on work from the US given that Grenville has curated at least one show with significant input from artists based in Asia. Both Japan and Korea are very active with regard to artificial intelligence and it’s hard to believe that their artists haven’t kept pace. (I’m not as familiar with China and its AI efforts, other than in the field of facial recognition, but it’s hard to believe their artists aren’t experimenting.)

The Americans, of course, are very important developers in the field of AI but they are not alone and it would have been nice to have seen something from Asia and/or Africa and/or something from one of the other Americas; in fact, anything that takes us out of the same old, same old. (Luba Elliott wrote this (2019/2020/2021?) essay, “Artificial Intelligence Art from Africa and Black Communities Worldwide,” on Aya Data if you want to get a sense of some of the activity on the African continent. Elliott does seem to conflate Africa and Black communities; for some clarity you may want to check out the Wikipedia entry on Africanfuturism, which contrasts with this August 12, 2020 essay by Donald Maloba, “What is Afrofuturism? A Beginner’s Guide.” Maloba also conflates the two.)

As it turns out, Luba Elliott presented at the 2019 Montréal Digital Spring event, which brings me to Canada’s artificial intelligence and arts scene.

I promise I haven’t turned into a flag-waving zealot, my friend. It’s just odd there isn’t a bit more given that machine learning was pioneered at the University of Toronto. Here’s more about that (from the Wikipedia entry for Geoffrey Hinton),

Geoffrey Everest Hinton CC FRS FRSC (born 6 December 1947) is a British-Canadian cognitive psychologist and computer scientist, most noted for his work on artificial neural networks.

Hinton received the 2018 Turing Award, together with Yoshua Bengio [Canadian scientist] and Yann LeCun, for their work on deep learning. They are sometimes referred to as the “Godfathers of AI” and “Godfathers of Deep Learning”, and have continued to give public talks together.

Some of Hinton’s work was started in the US but since 1987, he has pursued his interests at the University of Toronto. He wasn’t proven right until 2012. Katrina Onstad’s February 29, 2018 article (Mr. Robot) for Toronto Life is a gripping read about Hinton and his work on neural networks. BTW, Yoshua Bengio (co-Godfather) is a Canadian scientist at the Université de Montréal and Yann LeCun (co-Godfather) is a French scientist at New York University.

Then there’s another contribution: our government was the first in the world to develop a national artificial intelligence strategy. Adding those developments to the CCA ‘State of Science’ report findings about visual arts and performing arts, is there another word besides ‘odd’ to describe the lack of Canadian voices?

You’re going to point out the installation by Ben Bogart (a member of Simon Fraser University’s Metacreation Lab for Creative AI and instructor at the Emily Carr University of Art + Design (ECU)) but it’s based on the iconic US scifi film, 2001: A Space Odyssey. As for the other Canadian, Sougwen Chung, she left Canada pretty quickly to get her undergraduate degree in the US and has since moved to the UK. (You could describe hers as the quintessential success story, i.e., moving from Canada only to get noticed here after success elsewhere.)

Of course, there are the CDM student projects but the projects seem less like an exploration of visual culture than an exploration of technology and industry requirements, from the ‘Master of Digital Media Students Develop Revolutionary Installations for Vancouver Art Gallery AI Exhibition‘ webpage, Note: A link has been removed,

In 2019, Bruce Grenville, Senior Curator at Vancouver Art Gallery, approached [the] Centre for Digital Media to collaborate on several industry projects for the forthcoming exhibition. Four student teams tackled the project briefs over the course of the next two years and produced award-winning installations that are on display until October 23 [2022].

Basically, my friend, it would have been nice to see other voices or, at the least, an attempt at representing other voices and visual cultures informed by AI. As for Canadian contributions, maybe put something on the VAG website?

Playing well with others

It’s always a mystery to me why the Vancouver cultural scene seems to comprise a set of silos or closely guarded kingdoms. Reaching out to the public library and other institutions such as Science World might have cost time but could have enhanced the show.

For example, one of the branches of the New York Public Library ran a programme called, “We are AI” in March 2022 (see my March 23, 2022 posting about the five-week course, which was run as a learning circle). The course materials are available for free (We are AI webpage) and I imagine that adding a ‘visual culture module’ wouldn’t be that difficult.

There is one (rare) example of some Vancouver cultural institutions getting together to offer an art/science programme and that was in 2017 when the Morris and Helen Belkin Gallery (at the University of British Columbia; UBC) hosted an exhibition of Santiago Ramón y Cajal’s work (see my Sept. 11, 2017 posting about the gallery show). Alongside that show was an ancillary event held by the folks at Café Scientifique at Science World, featuring a panel of professionals from UBC’s Faculty of Medicine and Department of Psychology discussing Cajal’s work.

In fact, where were the science and technology communities for this show?

On a related note, the 2022 ACM SIGGRAPH conference (August 7 – 11, 2022) is being held in Vancouver. (ACM is the Association for Computing Machinery; SIGGRAPH is for Special Interest Group on Computer Graphics and Interactive Techniques.) SIGGRAPH has been holding conferences in Vancouver every few years since at least 2011.

At this year’s conference, they have at least two sessions that indicate interests similar to the VAG’s. First, there’s Immersive Visualization for Research, Science and Art, which includes AI and machine learning along with other related topics. There’s also Frontiers Talk: Art in the Age of AI: Can Computers Create Art?

This is both an international conference and an exhibition (of art) and the whole thing seems to have kicked off on July 25, 2022. If you’re interested, the programme can be found here and registration here.

The last time SIGGRAPH was here, the organizers seemed interested in outreach and they offered some free events.

In the end

It was good to see the show. The curators brought together some exciting material. As is always the case, there were some missed opportunities and a few blind spots. But all is not lost.

On July 27, 2022, the VAG held a virtual event with an artist,

Gwenyth Chao to learn more about what happened to the honeybees and hives in Oxman’s Synthetic Apiary project. As a transdisciplinary artist herself, Chao will also discuss the relationship between art, science, technology and design. She will then guide participants to create a space (of any scale, from insect to human) inspired by patterns found in nature.

Hopefully there will be more events inspired by specific ‘objects’. Meanwhile, on August 12, 2022, the VAG is hosting,

… in partnership with the Canadian Music Centre BC, New Music at the Gallery is a live concert series hosted by the Vancouver Art Gallery that features an array of musicians and composers who draw on contemporary art themes.

Highlighting a selection of twentieth- and twenty-first-century music compositions, this second concert, inspired by the exhibition The Imitation Game: Visual Culture in the Age of Artificial Intelligence, will spotlight The Iliac Suite (1957), the first piece ever written using only a computer, and Kaija Saariaho’s Terra Memoria (2006), which is in a large part dependent on a computer-generated musical process.

It would be lovely if they could include an Ada Lovelace Day event. This is an international celebration held on October 11, 2022.

Do go. Do enjoy, my friend.

Art/Sci exhibit in Toronto, Canada: “These are a Few of Our Favourite Bees” June 22 – July 16, 2022

A “These are a Few of Our Favourite Bees” upcoming exhibitions notice on the Campbell House Museum website (also received via email as a June 4, 2022 ArtSci Salon announcement) features a month-long exhibit being co-presented with the Canadian Music Centre in Toronto,

Exhibition
Campbell House Museum
June 22 – July 16, 2022
160 Queen Street W.

Opening event
Campbell House,
Saturday July 2,
2 – 4 p.m. [ET]

Artists’ Talk & Webcast
The Canadian Music Centre,
20 St. Joseph Street Toronto
Thursday, July 7
7:30 – 9 p.m. [ET]
(doors open 7 pm)

These are a Few of Our Favourite Bees investigates wild, native bees and their ecology through playful dioramas, video, audio, relief print and poetry. Inspired by lambe lambe – South American miniature puppet stages for a single viewer – four distinct dioramas convey surreal yet enlightening worlds where bees lounge in cozy environs, animals watch educational films [emphasis mine] and ethereal sounds animate bowls of berries (having been pollinated by their diverse bee visitors). Displays reminiscent of natural history museums invite close inspection, revealing minutiae of these tiny, diverse animals, our native bees. From thumb-sized to extremely tiny, fuzzy to hairless, black, yellow, red or emerald green, each native bee tells a story while her actions create the fruits of pollination, reflecting the perpetual dance of animals, plants and planet. With a special appearance by Toronto’s official bee, the jewelled green sweat bee, Agapostemon virescens!

These are a Few of Our Favourite Bees Collective are: Sarah Peebles, Ele Willoughby, Rob Cruickshank & Stephen Humphrey

 The Works

These are a Few of Our Favourite Bees

Sarah Peebles, Ele Willoughby, Rob Cruickshank & Stephen Humphrey

Single-viewer box theatres, dioramas, sculpture, textile art, macro video, audio transducers, poetry, insect specimens, relief print, objects, electronics, colour-coded DNA barcodes.

Bees represented: rusty-patched bumble bee (Bombus affinis); jewelled green sweat bee (Agapostemon virescens); masked sweat bee (Hylaeus annulatus); leafcutter bee (Megachile relativa)

In the Landscape

Ele Willoughby & Sarah Peebles

paper, relief print, video projection, audio, audio cable, mixed media

Bee specimens & bee barcodes generously provided by Laurence Packer – Packer Lab, York University; Scott MacIvor – BUGS Lab, U-T [University of Toronto] Scarborough; Sam Droege – USGS [US Geological Survey]; Barcode of Life Data Systems; Antonia Guidotti, Department of Natural History, Royal Ontario Museum

In addition to watching television, animals have been known to interact with touchscreen computers as mentioned in my June 24, 2016 posting, “Animal technology: a touchscreen for your dog, sonar lunch orders for dolphins, and more.”

The “These are a Few of Our Favourite Bees” upcoming exhibitions notice features this artist statement for a third piece, “Without A Bee, It Would Not Be” by Tracey Lawko,

In May, my crabapple tree blooms. In August, I pick the ripe crabapples. In September, I make jelly. Then I have breakfast. This would not be without a bee.

It could not be without a bee. The fruit and vegetables I enjoy eating, as well as the roses I admire as centrepieces, all depend on pollination.

Our native pollinators and their habitat are threatened.  Insect populations are declining due to habitat loss, pesticide use, disease and climate change. 75% of flowering plants rely on pollinators to set seed and we humans get one-third of our food from flowering plants.

I invite you to enter this beautiful dining room and consider the importance of pollinators to the enjoyment of your next meal.

Bio

Tracey Lawko employs contemporary textile techniques to showcase changes in our environment. Building on a base of traditional hand-embroidery, free-motion longarm stitching and a love of drawing, her representational work is detailed and “drawn with thread”. Her nature studies draw attention to our native pollinators as she observes them around her studio in the Niagara Escarpment. Many are stitched using a centuries-old, three-dimensional technique called “Stumpwork”.

Tracey’s extensive exhibition history includes solo exhibitions at leading commercial galleries and public museums. Her work has been selected for major North American and International exhibitions, including the Concours International des Mini-Textiles, Musée Jean Lurçat, France, and is held in the permanent collection of the US National Quilt Museum and in private collections in North America and Europe.

Bzzz!

The sound of the mushroom

A May 13, 2022 article by Philip Drost for the Canadian Broadcasting Corporation’s (CBC) As It Happens radio programme highlights the “From funky fungi to melodious mangos, this artist makes music out of nature” segment of the show, Note: Links have been removed,

At the intersection of biology and electronic music, you can find Tarun Nayar plugging his synthesizer equipment into mushrooms and other forms of plant life, hoping to capture their invisible bioelectric rhythms and build them into tranquil soundscapes. 

“What I’m really doing is trying to stimulate joy and wonder and create these little sketches or vignettes using the plants themselves, so I like to think of it as definitely a collaboration,” Nayar told As It Happens guest host Helen Mann.

Nayar is an electronic musician and former biologist in Vancouver who uses his TikTok account and Youtube page, Modern Biology, to show off his serenading spores. And his videos have millions of views.

To make his fungi sing, Nayar uses little jumper cables to connect the vegetation with his synthesizer and measure their biological energy, or bioelectricity, which has an effect on the notes. 

“The mushroom is contributing the pitch changes and the rhythm, and the synthesizer, which I have the mushroom plugged into, is contributing the timbre or the quality of the sound,” Nayar said. 
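Nayar’s exact rig isn’t described beyond the quote above, but the mapping he outlines (the plant’s slowly changing signal choosing pitch and rhythm, the synthesizer supplying the timbre) is easy to sketch. Here’s a toy, hypothetical version in Python: a faked ‘bioelectric’ reading is quantized onto a pentatonic scale and rendered as simple sine tones in a WAV file.

```python
# A toy version of the mapping Nayar describes: a slowly drifting "bioelectric"
# reading (faked here with smoothed random noise) chooses pitches and note
# lengths, while a plain sine-wave voice stands in for the synthesizer's timbre.
import wave
import numpy as np

SR = 44100
PENTATONIC = [0, 3, 5, 7, 10]   # minor pentatonic intervals in semitones
BASE_MIDI = 57                  # A3

rng = np.random.default_rng(1)

# 1. Fake a slowly varying signal: a random walk, normalized to 0..1.
signal = np.cumsum(rng.standard_normal(64))
signal = (signal - signal.min()) / (np.ptp(signal) + 1e-9)

def midi_to_hz(m):
    return 440.0 * 2 ** ((m - 69) / 12)

# 2. Map each reading to a pitch in the scale and a duration, then synthesize.
notes = []
for v in signal:
    degree = int(v * (2 * len(PENTATONIC) - 1))   # span two octaves
    midi = BASE_MIDI + 12 * (degree // len(PENTATONIC)) + PENTATONIC[degree % len(PENTATONIC)]
    dur = 0.2 + 0.4 * v                           # higher reading = longer note (toy rule)
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    notes.append(0.3 * np.sin(2 * np.pi * midi_to_hz(midi) * t) * np.hanning(len(t)))

audio = np.concatenate(notes)

# 3. Write a 16-bit mono WAV you can listen to.
with wave.open("plant_sketch.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SR)
    f.writeframes((audio * 32767).astype(np.int16).tobytes())
```

Swap the random walk for a real sensor reading and the sine tones for a proper synthesizer and you have the bones of the collaboration Nayar describes.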

You may be familiar with Nayar’s work (from a Creative Mornings Vancouver About The Speaker webpage for a talk given on July 3, 2020), Note: Links have been removed,

Tarun Nayar has built his world at intersections. Of east and west. Of music and business. Of science and art. Born to a white Canadian mother and an immigrant Indian father in French Canada, he has always lived in multiple worlds. He is comfortable in discomfort and fascinated with helping people find common ground, opening doors, and equalling the playing field. He is passionate about changing perceptions and championing unheard stories and talent.

Trained formally in Indian Classical Music from the age of seven, Tarun’s involvement in Vancouver’s underground electronic music scene in his early 20s led to the formation of well-known Canadian band Delhi 2 Dublin [emphasis mine] in 2006. He has since led the band to Glastonbury (UK), Hardly Strictly Bluegrass (US), Woodford (AUS) and hundreds of other club and festival gigs around the world. Tarun is passionate about creating opportunities in the arts for people of colour. He is Executive Director of 5X Festival [emphasis mine], one of North America’s largest South Asian festivals. He is on the board of Vancouver’s New Forms Festival, the Canadian Live Music Association, and a member of BC’s Ministry of Education Advisory Committee, Vancouver’s Music City Task Force, and Vancouver’s 2018 Juno Host City Committee. Tarun manages emerging Pakistani-Canadian electronic artist Khanvict, and is the co-founder and owner of digital label Snakes x Ladders [emphasis mine] which focuses on the new wave of hybrid South Asian artists.

As best I can determine after looking at the Modern Biology YouTube channel and TikTok account, Nayar seems to have started his project, or made it public, about 10 months ago (August 2021?). There’s lots of mushroom music along with fruit music and flower music in either location, although TikTok seems to have a more complete collection.

There’s also a Modern Biology page on linktr.ee where you can sign up for an email list. It also features a link to PlantWave (Note: This is not a product endorsement),

$299.00 USD

Listen to the music of plants. Tune into Nature with PlantWave!

PlantWave allows you to wirelessly connect from your plant to your phone, making it easier than ever to listen to nature’s song.

Pre-orders will ship June of 2022. We sold out of our January run of devices before shipping. Thank you for your patience as we do our best to meet demand for this experience.

Package Includes:

Hardware

PlantWave Plant Music Device

Electrode leads

3 pairs of reusable sticky pads for leaves

Duck beak clips for smaller plants

USB C cable for charging / data transmission

Free iOS / Android App

….

Enjoy!

Sci-fi opera: R.U.R. A Torrent of Light opens May 28, 2022 in Toronto, Canada

Even though it’s a little late, I guess you could call the opera opening in Toronto on May 28, 2022 a 100th anniversary celebration of the word ‘robot’. Introduced in 1920 by Czech playwright Karel Čapek in his play, R.U.R., which stands for ‘Rossumovi Univerzální Roboti’ or, in English, ‘Rossum’s Universal Robots’, the word was first coined by Čapek’s brother, Josef (see more about the play and the word in the R.U.R. Wikipedia entry).

The opera, R.U.R. A Torrent of Light, is scheduled to open at 8 pm ET on Saturday, May 28, 2022 (after being rescheduled due to a COVID case in the cast) in The Great Hall at OCAD University (formerly the Ontario College of Art and Design).

I have more about tickets prices, dates, and location later in this post but first, here’s more about the opera and the people who’ve created it from the Tapestry Opera’s ‘R.U.R. A Torrent of Light’ performance webpage,

This stunning new opera combines dance, beautiful multimedia design, a chamber orchestra including 100 instruments creating a unique electronica-classical sound, and wearable technology [emphasis mine] created with OCAD University’s Social Body Lab, to create an immersive and unforgettable science-fiction experience.

As for the opera’s story,

The fictional tech company R.U.R., founded by couple Helena and Dom, dominates the A.I. software market and powers the now-ubiquitous androids that serve their human owners. 

As Dom becomes more focused on growing R.U.R’s profits, Helena’s creative research leads to an unexpected technological breakthrough that pits the couples’ visions squarely against each other. They’ve reached a turning point for humanity, but is humanity ready? 

Inspired by Karel Čapek’s 1920’s science-fiction play Rossum’s Universal Robots (which introduced the word “robot” to the English language), composer Nicole Lizée’s and writer Nicolas Billon’s R.U.R. A Torrent of Light grapples with one of our generation’s most fascinating questions. [emphasis mine]

So, what is the fascinating question? The answer is here in a March 7, 2022 OCAD news release,

Last Wednesday [March 2, 2022], OCAD U’s Great Hall at 100 McCaul St. was filled with all manner of sound making objects. Drum kits, gongs, chimes, typewriters and most exceptionally, a cello bow that produces bird sounds when glided across any surface were being played while musicians, dancers and opera singers moved among them.  

All were abuzz preparing for Tapestry Opera’s new production, R.U.R. A Torrent of Light, which will be presented this spring in collaboration with OCAD University. 

An immersive, site-specific experience, the new chamber opera explores humanity’s relationship to technology. [emphasis mine] Inspired by Karel Čapek’s 1920s science-fiction play Rossum’s Universal Robots, this latest version is set 20 years in the future when artificial intelligence (AI) has become fully sewn into our everyday lives and is set in the offices of a fictional tech company.

Čapek’s original script brought the word robot into the English language and begins in a factory that manufactures artificial people. Eventually these entities revolt and render humanity extinct.  

The innovative adaptation will be a unique addition to Tapestry Opera’s more than 40-year history of producing operatic stage performances. It is the only company in the country dedicated solely to the creation and performance of original Canadian opera. 

The March 7, 2022 OCAD news release goes on to describe the Social Body Lab’s involvement,

OCAD U’s Social Body Lab, whose mandate is to question the relationship between humans and technology, is helping to bring Tapestry’s vision of the not-so-distant future to the stage. Director of the Lab and Associate Professor in the Faculty of Arts & Science, Kate Hartman, along with Digital Futures Associate Professors Nick Puckett and Dr. Adam Tindale have developed wearable technology prototypes that will be integrated into the performers’ costumes. They have collaborated closely with the opera’s creative team to embrace the possibilities innovative technologies can bring to live performance. 

“This collaboration with Tapestry Opera has been incredibly unique and productive. Working in dialogue with their designers has enabled us to translate their ideas into cutting edge technological objects that we would have never arrived at individually,” notes Professor Puckett. 

The uncanny bow that was being tested last week is one of the futuristic devices that will be featured in the performance and is the invention of Dr. Tindale, who is himself a classically trained musician. He has also developed a set of wearable speakers for R.U.R. A Torrent of Light that when donned by the dancers will allow sound to travel across the stage in step with their choreography. 

Hartman and Puckett, along with the production’s costume, light and sound designers, have developed an LED-based prototype that will be worn around the necks of the actors who play robots and will be activated using WIFI. These collar pieces will function as visual indicators to the audience of various plot points, including the moments when the robots receive software updates.  

“Despite current stereotypes, opera was historically a launchpad for all kinds of applied design technologies. [emphasis mine] Having the opportunity to collaborate with OCAD U faculty is an invigorating way to reconnect to that tradition and foster connections between art, music and design,” comments the production’s Director Michael Hidetoshi Mori, who is also Tapestry Opera’s Artistic Director. 

“New music and theatre are perfect canvases for iterative experimentation. We look forward to the unique fruits of this collaboration and future ones,” he continues. 
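The news release doesn’t say how the LED collars are actually triggered, so this is purely illustrative: a minimal Python sketch of a costume piece listening for plain-text cues over Wi-Fi and changing its colour, with hypothetical cue names and a print statement standing in for the real LED hardware,

```python
import socket

# Hypothetical cue names -- the production's actual cue protocol is unknown.
CUES = {"IDLE": (0, 0, 64), "SOFTWARE_UPDATE": (0, 128, 255), "ALERT": (255, 32, 0)}

def set_collar(rgb):
    # Placeholder: on real hardware this would drive addressable LEDs on a
    # wearable microcontroller; here we just print the colour.
    print(f"collar colour -> {rgb}")

def run(port=9000):
    """Listen for plain-text cues sent over Wi-Fi (UDP) and update the LEDs."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:
        data, _addr = sock.recvfrom(64)
        cue = data.decode("utf-8", errors="ignore").strip().upper()
        if cue in CUES:
            set_collar(CUES[cue])

if __name__ == "__main__":
    run()
```

A show-control laptop could then send, say, SOFTWARE_UPDATE to every collar at the right moment in the score, which is roughly the kind of plot-point signalling the release describes.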

Unfortunately, I cannot find a preview but there is this video highlighting the technology being used in the opera (there are three other videos highlighting the choreography, the music, and the story, respectively, if you scroll about 40% down this page),


As I promised, here are the logistics,

University address:

OCAD University
100 McCaul Street,
Toronto, Ontario, Canada, M5T 1W1

Performance venue:

The Great Hall at OCAD University
Level 2, beside the Anniversary Gallery

Ticket prices:

The following seating sections are available for this performance. Tickets are from $10 to $100. All tickets are subject to a $5 transaction fee.

Orchestra Centre
Orchestra Sides
Orchestra Rear
Balcony (standing room)

Performances:

May 28 at 8:00 pm

May 29 at 4:00 pm

June 01 at 8:00 pm

June 02 at 8:00 pm

June 03 at 8:00 pm

June 04 at 8:00 pm

June 05 at 4:00 pm

Tapestry Opera’s ‘R.U.R. A Torrent of Light’ performance webpage offers a link to buy tickets but it lands on a page that doesn’t seem to be functioning properly. I contacted the Tapestry Opera folks on Tuesday, May 24, 2022 at about 10:30 am PT to let them know about the problem and hope to update this page once they’ve handled it.

ETA May 30, 2022: You can buy tickets here. Tickets remain available for only two performances: Thursday, June 2, 2022 at 8 pm and Sunday, June 5, 2022 at 4 pm.

Sonifying the protein folding process

A sonification and animation of a state machine based on a simple lattice model used by Martin Gruebele to teach concepts of protein-folding dynamics. First posted January 25, 2022 on YouTube.

A February 17, 2022 news item on ScienceDaily announces the work featured in the animation above,

Musicians are helping scientists analyze data, teach protein folding and make new discoveries through sound.

A team of researchers at the University of Illinois Urbana-Champaign is using sonification — the use of sound to convey information — to depict biochemical processes and better understand how they happen.

Music professor and composer Stephen Andrew Taylor; chemistry professor and biophysicist Martin Gruebele; and Illinois music and computer science alumna, composer and software designer Carla Scaletti formed the Biophysics Sonification Group, which has been meeting weekly on Zoom since the beginning of the pandemic. The group has experimented with using sonification in Gruebele’s research into the physical mechanisms of protein folding, and its work recently allowed Gruebele to make a new discovery about the ways a protein can fold.

A February 17, 2022 University of Illinois at Urbana-Champaign news release (also on EurekAlert), which originated the news item, describes how the group sonifies and animates the protein folding process (Note: Links have been removed),

Taylor’s musical compositions have long been influenced by science, and recent works represent scientific data and biological processes. Gruebele also is a musician who built his own pipe organ that he plays and uses to compose music. The idea of working together on sonification struck a chord with them, and they’ve been collaborating for several years. Through her company, Symbolic Sound Corp., Scaletti develops a digital audio software and hardware sound design system called Kyma that is used by many musicians and researchers, including Taylor.

Scaletti created an animated visualization paired with sound that illustrated a simplified protein-folding process, and Gruebele and Taylor used it to introduce key concepts of the process to students and gauge whether it helped with their understanding. They found that sonification complemented and reinforced the visualizations and that, even for experts, it helped increase intuition for how proteins fold and misfold over time. The Biophysics Sonification Group – which also includes chemistry professor Taras Pogorelov, former chemistry graduate student (now alumna) Meredith Rickard, composer and pipe organist Franz Danksagmüller of the Lübeck Academy of Music in Germany, and Illinois electrical and computer engineering alumnus Kurt Hebel of Symbolic Sound – described using sonification in teaching in the Journal of Chemical Education.

Gruebele and his research team use supercomputers to run simulations of proteins folding into a specific structure, a process that relies on a complex pattern of many interactions. The simulation reveals the multiple pathways the proteins take as they fold, and also shows when they misfold or get stuck in the wrong shape – something thought to be related to a number of diseases such as Alzheimer’s and Parkinson’s.

The researchers use the simulation data to gain insight into the process. Nearly all data analysis is done visually, Gruebele said, but massive amounts of data generated by the computer simulations – representing hundreds of thousands of variables and millions of moments in time – can be very difficult to visualize.

“In digital audio, everything is a stream of numbers, so actually it’s quite natural to take a stream of numbers and listen to it as if it’s a digital recording,” Scaletti said. “You can hear things that you wouldn’t see if you looked at a list of numbers and you also wouldn’t see if you looked at an animation. There’s so much going on that there could be something that’s hidden, but you could bring it out with sound.”

For example, when the protein folds, it is surrounded by water molecules that are critical to the process. Gruebele said he wants to know when a water molecule touches and solvates a protein, but “there are 50,000 water molecules moving around, and only one or two are doing a critical thing. It’s impossible to see.” However, if a splashy sound occurred every time a water molecule touched a specific amino acid, that would be easy to hear.

Taylor and Scaletti use various audio-mapping techniques to link aspects of proteins to sound parameters such as pitch, timbre, loudness and pan position. For example, Taylor’s work uses different pitches and instruments to represent each unique amino acid, as well as their hydrophobic or hydrophilic qualities.

“I’ve been trying to draw on our instinctive responses to sound as much as possible,” Taylor said. “Beethoven said, ‘The deeper the stream, the deeper the tone.’ We expect an elephant to make a low sound because it’s big, and we expect a sparrow to make a high sound because it’s small. Certain kinds of mappings are built into us. As much as possible, we can take advantage of those and that helps to communicate more effectively.”

The highly developed instincts of musicians help in creating the best tool to use sound to convey information, Taylor said.

“It’s a new way of showing how music and sound can help us understand the world. Musicians have an important role to play,” he said. “It’s helped me become a better musician, in thinking about sound in different ways and thinking how sound can link to the world in different ways, even the world of the very small.”
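The group’s Kyma-based mappings aren’t spelled out in the release, but the two ideas quoted above (listening to a stream of simulation numbers as if it were a digital recording, and triggering a “splashy” sound when a rare event such as a water-molecule contact occurs) can be sketched simply. In the toy Python version below, the frequency table, the sequence fragment, and the contact events are all placeholders I made up for illustration, not the group’s actual choices,

```python
import numpy as np
from scipy.io import wavfile

SR = 44100
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"            # the 20 standard residues
# Give each residue its own pitch (equal steps over two octaves) -- an
# arbitrary stand-in for the group's real pitch/instrument assignments.
FREQS = {aa: 220.0 * 2 ** (i / 10) for i, aa in enumerate(AMINO_ACIDS)}

def tone(freq, dur=0.2):
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    return 0.5 * np.sin(2 * np.pi * freq * t) * np.hanning(t.size)

def splash(dur=0.1):
    """Short burst of shaped noise standing in for a water-contact event."""
    n = int(SR * dur)
    return np.random.default_rng(1).normal(0, 0.3, n) * np.hanning(n)

sequence = "MKTAYIAKQR"                          # placeholder fragment
contacts = {3, 7}                                # placeholder contact steps
audio = []
for i, aa in enumerate(sequence):
    frame = tone(FREQS[aa])
    if i in contacts:                            # rare event -> audible splash
        s = splash()
        frame[: s.size] += s
    audio.append(frame)

# The stream of numbers literally becomes a digital recording.
wavfile.write("folding_sketch.wav", SR, np.concatenate(audio).astype(np.float32))
```

Even in a toy like this, the splashes jump out of the texture immediately, which is exactly the point Gruebele makes about events that are “impossible to see” in 50,000 water molecules.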

Here’s a link to and a citation for the paper,

Sonification-Enhanced Lattice Model Animations for Teaching the Protein Folding Reaction by Carla Scaletti, Meredith M. Rickard, Kurt J. Hebel, Taras V. Pogorelov, Stephen A. Taylor, and Martin Gruebele. J. Chem. Educ. 2022, XXXX, XXX, XXX-XXX DOI: https://doi.org/10.1021/acs.jchemed.1c00857 Publication Date: February 16, 2022 © 2022 American Chemical Society and Division of Chemical Education, Inc.

This paper is behind a paywall.

For more about sonification and proteins, there’s my March 31, 2022 posting, Classical music makes protein songs easier listening.

Classical music makes protein songs easier listening

Caption: This audio is oxytocin receptor protein music using the Fantasy Impromptu guided algorithm. Credit: Chen et al. / Heliyon

A September 29, 2021 news item on ScienceDaily describes new research into music as a means of communicating science,

In recent years, scientists have created music based on the structure of proteins as a creative way to better popularize science to the general public, but the resulting songs haven’t always been pleasant to the ear. In a study appearing September 29 [2021] in the journal Heliyon, researchers use the style of existing music genres to guide the structure of protein song to make it more musical. Using the style of Frédéric Chopin’s Fantaisie-Impromptu and other classical pieces as a guide, the researchers succeeded in converting proteins into song with greater musicality.

Scientists (Peng Zhang, Postdoctoral Researcher in Computational Biology at The Rockefeller University, and Yuzong Chen, Professor of Pharmacy at National University of Singapore [NUS]) wrote a September 29, 2021 essay for The Conversation about their protein songs (Note: Links have been removed),

There are many surprising analogies between proteins, the basic building blocks of life, and musical notation. These analogies can be used not only to help advance research, but also to make the complexity of proteins accessible to the public.

We’re computational biologists who believe that hearing the sound of life at the molecular level could help inspire people to learn more about biology and the computational sciences. While creating music based on proteins isn’t new, different musical styles and composition algorithms had yet to be explored. So we led a team of high school students and other scholars to figure out how to create classical music from proteins.

The musical analogies of proteins

Proteins are structured like folded chains. These chains are composed of small units of 20 possible amino acids, each labeled by a letter of the alphabet.

A protein chain can be represented as a string of these alphabetic letters, very much like a string of music notes in alphabetical notation.

Protein chains can also fold into wavy and curved patterns with ups, downs, turns and loops. Likewise, music consists of sound waves of higher and lower pitches, with changing tempos and repeating motifs.

Protein-to-music algorithms can thus map the structural and physiochemical features of a string of amino acids onto the musical features of a string of notes.

Enhancing the musicality of protein mapping

Protein-to-music mapping can be fine-tuned by basing it on the features of a specific music style. This enhances musicality, or the melodiousness of the song, when converting amino acid properties, such as sequence patterns and variations, into analogous musical properties, like pitch, note lengths and chords.

For our study, we specifically selected 19th-century Romantic period classical piano music, which includes composers like Chopin and Schubert, as a guide because it typically spans a wide range of notes with more complex features such as chromaticism, like playing both white and black keys on a piano in order of pitch, and chords. Music from this period also tends to have lighter and more graceful and emotive melodies. Songs are usually homophonic, meaning they follow a central melody with accompaniment. These features allowed us to test out a greater range of notes in our protein-to-music mapping algorithm. In this case, we chose to analyze features of Chopin’s “Fantaisie-Impromptu” to guide our development of the program.

If you have the time, I recommend reading the essay in its entirety and listening to the embedded audio files.

The September 29, 2021 Cell Press news release on EurekAlert repeats some of the same material but is worth reading on its own merits,

In recent years, scientists have created music based on the structure of proteins as a creative way to better popularize science to the general public, but the resulting songs haven’t always been pleasant to the ear. In a study appearing September 29 [2021] in the journal Heliyon, researchers use the style of existing music genres to guide the structure of protein song to make it more musical. Using the style of Frédéric Chopin’s Fantaisie-Impromptu and other classical pieces as a guide, the researchers succeeded in converting proteins into song with greater musicality.

Creating unique melodies from proteins is achieved by using a protein-to-music algorithm. This algorithm incorporates specific elements of proteins—like the size and position of amino acids—and maps them to various musical elements to create an auditory “blueprint” of the proteins’ structure.

“Existing protein music has mostly been designed by simple mapping of certain amino acid patterns to fundamental musical features such as pitches and note lengths, but they do not map well to more complex musical features such as rhythm and harmony,” says senior author Yu Zong Chen, a professor in the Department of Pharmacy at National University of Singapore. “By focusing on a music style, we can guide more complex mappings of combinations of amino acid patterns with various musical features.”

For their experiment, researchers analyzed the pitch, length, octaves, chords, dynamics, and main theme of four pieces from the mid-1800s Romantic era of classical music. These pieces, including Fantasie-Impromptu from Chopin and Wanderer Fantasy from Franz Schubert, were selected to represent the notable Fantasy-Impromptu genre that emerged during that time.

“We chose the specific music style of a Fantasy-Impromptu as it is characterized by freedom of expression, which we felt would complement how proteins regulate much of our bodily functions, including our moods,” says co-author Peng Zhang (@zhangpeng1202), a post-doctoral fellow at the Rockefeller University.

Likewise, several of the proteins in the study were chosen for their similarities to the key attributes of the Fantasy-Impromptu style. Most of the 18 proteins tested regulate functions including human emotion, cognition, sensation, or performance which the authors say connect to the emotional and expressive nature of the genre.

Then, they mapped 104 structural, physicochemical, and binding amino acid properties of those proteins to the six musical features. “We screened the quantitative profile of each amino acid property against the quantized values of the different musical features to find the optimal mapped pairings. For example, we mapped the size of amino acid to note length, so that having a larger amino acid size corresponds to a shorter note length,” says Chen.

Across all the proteins tested, the researchers found that the musicality of the proteins was significantly improved. In particular, the protein receptor for oxytocin (OXTR) was judged to have one of the greatest increases in musicality when using the genre-guided algorithm, compared to an earlier version of the protein-to-music algorithm.

“The oxytocin receptor protein generated our favorite song,” says Zhang. “This protein sequence produced an identifiable main theme that repeats in rhythm throughout the piece, as well as some interesting motifs and patterns that recur independent of our algorithm. There were also some pleasant harmonic progressions; for example, many of the seventh chords naturally resolve.”

The authors do note, however, that while the guided algorithm increased the overall musicality of the protein songs, there is still much progress to be made before it resembles true human music.

“We believe a next step is to explore more music styles and more complex combinations of amino acid properties for enhanced musicality and novel music pieces. Another next step, a very important step, is to apply artificial intelligence to jointly learn complex amino acid properties and their combinations with respect to the features of various music styles for creating protein music of enhanced musicality,” says Chen.

###

Research supported by the National Key R&D Program of China, the National Natural Science Foundation of China, and Singapore Academic Funds.
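Chen’s example (bigger amino acid, shorter note) is concrete enough to sketch in code. Here’s a toy Python version: the residue volumes are rounded approximations I’ve supplied for illustration, and the four quantized note lengths are my own stand-ins for the paper’s quantized musical features,

```python
# Approximate amino acid residue volumes in cubic angstroms (rounded, for
# illustration only) and four quantized note lengths in beats.
VOLUME = {"G": 60, "A": 89, "S": 89, "C": 109, "T": 116, "P": 113,
          "V": 140, "L": 167, "I": 167, "N": 114, "D": 111, "Q": 144,
          "E": 138, "M": 163, "K": 169, "H": 153, "F": 190, "R": 174,
          "Y": 194, "W": 228}
NOTE_LENGTHS = [1.0, 0.5, 0.25, 0.125]   # whole, half, quarter, eighth

def note_length(residue):
    """Bigger amino acid -> shorter note, per the inverse mapping described above."""
    v = VOLUME[residue]
    lo, hi = min(VOLUME.values()), max(VOLUME.values())
    bucket = int((v - lo) / (hi - lo) * (len(NOTE_LENGTHS) - 1) + 0.5)
    return NOTE_LENGTHS[bucket]

# Glycine (smallest) gets the longest note; tryptophan (largest) the shortest.
print([(aa, note_length(aa)) for aa in "MKTAYIAKQR"])
```

The paper’s actual algorithm screens 104 such property profiles against several musical features at once; this shows only the flavour of one mapped pair.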

Here’s a link to and a citation for the paper,

Protein Music of Enhanced Musicality by Music Style Guided Exploration of Diverse Amino Acid Properties by Nicole WanNi Tay, Fanxi Liu, Chaoxin Wang, Hui Zhang, Peng Zhang, Yu Zong Chen. Heliyon, 2021 DOI: https://doi.org/10.1016/j.heliyon.2021.e07933 Published: September 29, 2021

This paper appears to be open access.

Ever heard a bird singing and wondered what kind of bird?

The Cornell University Lab of Ornithology’s sound recognition feature in its Merlin birding app(lication) can answer that question for you according to a July 14, 2021 article by Steven Melendez for Fast Company (Note: Links have been removed),

The lab recently upgraded its Merlin smartphone app, designed for both new and experienced birdwatchers. It now features an AI-infused “Sound ID” feature that can capture bird sounds and compare them to crowdsourced samples to figure out just what bird is making that sound. … people have used it to identify more than 1 million birds. New user counts are also up 58% since the two weeks before launch, and up 44% over the same period last year, according to Drew Weber, Merlin’s project coordinator.

Even when it’s listening to bird sounds, the app still relies on recent advances in image recognition, says project research engineer Grant Van Horn. …, it actually transforms the sound into a visual graph called a spectrogram, similar to what you might see in an audio editing program. Then, it analyzes that spectrogram to look for similarities to known bird calls, which come from the Cornell Lab’s eBird citizen science project.
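Merlin’s trained models aren’t public in this form, but the waveform-to-spectrogram step Van Horn describes is standard signal processing. Here’s a minimal Python sketch of that step; the filename is a placeholder and the identify function is a stand-in for Merlin’s convolutional image classifier,

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

def to_spectrogram(path, nperseg=1024):
    """Convert a WAV recording into a dB-scaled spectrogram 'image'."""
    rate, samples = wavfile.read(path)
    if samples.ndim > 1:                        # fold stereo down to mono
        samples = samples.mean(axis=1)
    freqs, times, power = spectrogram(samples, fs=rate, nperseg=nperseg)
    return 10 * np.log10(power + 1e-10)         # frequency x time array

def identify(spec):
    # Stand-in: a real system would feed this array to an image-recognition
    # network trained on eBird recordings and return the likely species.
    return "unknown species", float(spec.max())

spec = to_spectrogram("bird_recording.wav")      # placeholder filename
print(identify(spec))
```

Once the sound is an image, the same machinery that powers photo identification can be pointed at it, which is the breakthrough the Cornell article describes.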

There’s more detail about Merlin in Marc Devokaitis’ June 23, 2021 article for the Cornell Chronicle,

… Merlin can recognize the sounds of more than 400 species from the U.S. and Canada, with that number set to expand rapidly in future updates.

As Merlin listens, it uses artificial intelligence (AI) technology to identify each species, displaying in real time a list and photos of the birds that are singing or calling.

Automatic song ID has been a dream for decades, but analyzing sound has always been extremely difficult. The breakthrough came when researchers, including Merlin lead researcher Grant Van Horn, began treating the sounds as images and applying new and powerful image classification algorithms like the ones that already power Merlin’s Photo ID feature.

“Each sound recording a user makes gets converted from a waveform to a spectrogram – a way to visualize the amplitude [volume], frequency [pitch] and duration of the sound,” Van Horn said. “So just like Merlin can identify a picture of a bird, it can now use this picture of a bird’s sound to make an ID.”

Merlin’s pioneering approach to sound identification is powered by tens of thousands of citizen scientists who contributed their bird observations and sound recordings to eBird, the Cornell Lab’s global database.

“Thousands of sound recordings train Merlin to recognize each bird species, and more than a billion bird observations in eBird tell Merlin which birds are likely to be present at a particular place and time,” said Drew Weber, Merlin project coordinator. “Having this incredibly robust bird dataset – and feeding that into faster and more powerful machine-learning tools – enables Merlin to identify birds by sound now, when doing so seemed like a daunting challenge just a few years ago.”

The Merlin Bird ID app with the new Sound ID feature is available for free on iOS and Android devices. Click here to download the Merlin Bird ID app and follow the prompts. If you already have Merlin installed on your phone, tap “Get Sound ID.”

Do take a look at Devokaitis’ June 23, 2021 article for more about how the Merlin app provides four ways to identify birds.

For anyone who likes to listen to the news, there’s an August 26, 2021 podcast (The Warblers by Birds Canada) featuring Drew Weber, Merlin project coordinator, and Jody Allair, Birds Canada Director of Community Engagement, discussing Merlin,

It’s a dream come true – there’s finally an app for identifying bird sounds. In the next episode of The Warblers podcast, we’ll explore the Merlin Bird ID app’s new Sound ID feature and how artificial intelligence is redefining birding. We talk with Drew Weber and Jody Allair and go deep into the implications and opportunities that this technology will bring for birds, and new as well as experienced birders.

The Warblers is hosted by Andrea Gress and Andrés Jiménez.

Sounds of Central African Landscapes: a Cornell (University) Elephant Listening Project

This September 13, 2021 news item about sound recordings taken in a rainforest (on phys.org) is downright fascinating,

More than a million hours of sound recordings are available from the Elephant Listening Project (ELP) in the K. Lisa Yang Center for Conservation Bioacoustics at the Cornell Lab of Ornithology—a rainforest residing in the cloud.

ELP researchers, in collaboration with the Wildlife Conservation Society, use remote recording units to capture the entire soundscape of a Congolese rainforest. Their targets are vocalizations from endangered African forest elephants, but they also capture tropical parrots shrieking, chimps chattering and rainfall spattering on leaves to the beat of grumbling thunder.

For someone who suffers from acrophobia (fear of heights), this is a disturbing picture (how tall is that tree? is the rope reinforced? who or what is holding him up? where is the photographer perched?),

Frelcia Bambi is a member of the Congolese team that deploys sound recorders in the rainforest and analyzes the data. Photo by Sebastien Assoignons, courtesy of the Wildlife Conservation Society.

A September 13, 2021 Cornell University (NY state, US) news release by Pat Leonard, which originated the news item, provides more details about the sounds themselves and the Elephant Listening Project,

“Scientists can use these soundscapes to monitor biodiversity,” said ELP director Peter Wrege. “You could measure overall sound levels before, during and after logging operations, for example. Or hone in on certain frequencies where insects may vocalize. Sound is increasingly being used as a conservation tool, especially for establishing the presence or absence of a species.”

For the past four years, 50 tree-mounted recording units have been collecting data continuously, covering a region that encompasses old logging sites, recent logging sites and part of the Nouabalé-Ndoki National Park in the Republic of the Congo. The sensors sometimes capture the booming guns of poachers, alerting rangers who then head out to track down the illegal activity.

But everyday nature lovers can tune in rainforest sounds, too.

“We’ve had requests to use some of the files for meditation or for yoga,” Wrege said. “It is very soothing to listen to rainforest sounds—you hear the sounds of insects, birds, frogs, chimps, wind and rain all blended together.”

But, as Wrege and others have learned, big data can also be a big problem. The Sounds of Central African Landscapes recordings would gobble up nearly 100 terabytes of computer space, and ELP takes in another eight terabytes every four months. But now, Amazon Web Services is storing the jungle sounds for free under its Open Data Sponsorship Program, which preserves valuable scientific data for public use.

This makes it possible for Wrege to share the jungle sounds and easier for users to analyze them with Amazon tools so they don’t have to move the massive files or try to download them.

Searching for individual species amid the wealth of data is a bit more daunting. ELP uses computer algorithms to search through the recordings for elephant sounds. Wrege has created a detector for the sounds of gorillas beating their chests. There are software platforms that help users create detectors for specific sounds, including Raven Pro 1.6, created by the Cornell Lab’s bioacoustics engineers. Wrege says the next iteration, Raven 2.0, will make this process even easier.

Wrege is also eyeing future educational uses for the recordings which he says could help train in-country biologists to not only collect the data but do the analyses. This is gradually happening now in the Republic of the Congo—ELP’s team of Congolese researchers does all the analysis for gunshot detection, though the elephant analyses are still done at ELP.

“We could use these recordings for internships and student training in Congo and other countries where we work, such as Gabon,” Wrege said. “We can excite young people about conservation in Central Africa. It would be a huge benefit to everyone living there.”

To listen or download clips from Sounds of the Central African Landscape, go to ELP’s data page on Amazon Web Services. You’ll need to create an account with AWS (choose the free option). Then sign in with your username and password. Click on the “recordings” item in the list you see, then “wav/” on the next page. From there you can click on any item in the list to play or download clips that are each 1.3 GB and 24 hours long.

Scientists looking to use sounds for research and analysis should start here.
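ELP’s detectors and tools such as Raven Pro are far more sophisticated, but the basic idea of scanning a long recording for energy in a target frequency band can be sketched in a few lines of Python. The band limits, threshold, and filename below are placeholders of mine, not the values ELP uses for elephant rumbles or gunshots,

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

def band_energy_detector(path, f_lo=15.0, f_hi=35.0, threshold_db=10.0):
    """Flag moments when energy in a target frequency band rises well above
    its median level -- a crude stand-in for ELP's trained detectors."""
    rate, samples = wavfile.read(path)
    if samples.ndim > 1:                         # fold stereo down to mono
        samples = samples.mean(axis=1)
    freqs, times, power = spectrogram(samples, fs=rate, nperseg=4096)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    level_db = 10 * np.log10(power[band].sum(axis=0) + 1e-12)
    return times[level_db > np.median(level_db) + threshold_db]

# Returns the timestamps (in seconds) worth reviewing by ear.
print(band_energy_detector("soundscape_clip.wav"))   # placeholder filename
```

Running something like this across a million hours of audio is exactly the kind of job that only becomes practical once the data sits in the cloud next to the compute.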

Wildlife Conservation Society Forest Elephant Congo [downloaded from https://congo.wcs.org/Wildlife/Forest-Elephant.aspx]

What follows may be a little cynical but I can’t help noticing that this worthwhile and fascinating project will result in more personal and/or professional data for Amazon since you have to sign up even if all you’re doing is reading or listening to a few files that they’ve made available for the general public. In a sense, Amazon gets ‘paid’ when you give up an email address to them. Plus, Amazon gets to look like a good world citizen.

Let’s hope something greater than one company’s reputation as a world citizen comes out of this.