AI & creativity events for August and September 2022 (mostly)

The information about these events and papers comes courtesy of the Metacreation Lab for Creative AI (artificial intelligence) at Simon Fraser University and, as usual for the lab, the emphasis is on music.

Music + AI Reading Group @ Mila x Vector Institute

Philippe Pasquier, Metacreation Lab director and professor, is giving a presentation on Friday, August 12, 2022 at 11 am PT (2 pm ET). Here’s more from the August 10, 2022 Metacreation Lab announcement (received via email),

Metacreation Lab director Philippe Pasquier and PhD researcher Jeff Enns will be presenting next week [tomorrow, August 12, 2022] at the Music + AI Reading Group hosted by Mila. The presentation will be available as a Zoom meeting.

Mila is a community of more than 900 researchers specializing in machine learning and dedicated to scientific excellence and innovation. The institute is recognized for its expertise and significant contributions in areas such as language modelling, machine translation, object recognition and generative models.

I believe it’s also possible to view the presentation from the “Music + AI Reading Group at MILA: presentation by Dr. Philippe Pasquier” webpage on the Simon Fraser University website.

For anyone curious about Mila – Québec Artificial Intelligence Institute (based in Montréal) and the Vector Institute for Artificial Intelligence (based in Toronto), both are part of the Pan-Canadian Artificial Intelligence Strategy (a Canadian federal government funding initiative).

Getting back to the Music + AI Reading Group @ Mila x Vector Institute, here is the invitation, from the group’s Google Groups page, to join the group, which meets every Friday at 2 pm ET,

Feb 24, 2022, to Community Announcements: 🎹🧠🚨 Online Music + AI Reading Group @ Mila x Vector Institute 🎹🧠🚨

Dear members of the ISMIR [International Society for Music Information Retrieval] Community,

Together with fellow researchers at Mila (the Québec AI Institute) in Montréal, canada [sic], we have the pleasure of inviting you to join the Music + AI Reading Group @ Mila x Vector Institute. Our reading group gathers every Friday at 2pm Eastern Time. Our purpose is to build an interdisciplinary forum of researchers, students and professors alike, across industry and academia, working at the intersection of Music and Machine Learning. 

During each meeting, a speaker presents a research paper of their choice for 45 minutes, leaving 15 minutes for questions and discussion. The purpose of the reading group is to:
– Gather a group of Music+AI/HCI [human-computer interaction]/other people to share their research, build collaborations, and meet peer students. We are not constrained to any specific research directions, and all people are welcome to contribute.
– Share research ideas and brainstorm with others.
– Let researchers not actively working on music-related topics, but interested in the field, join and keep up with the latest research in the area, sharing their thoughts and bringing in their own backgrounds.

Our topics of interest cover (beware: the list is not exhaustive!):
🎹 Music Generation
🧠 Music Understanding
📇 Music Recommendation
🗣  Source Separation and Instrument Recognition
🎛  Acoustics
🗿 Digital Humanities …
🙌  … and more (we are waiting for you :]) !


If you wish to attend one of our upcoming meetings, simply join our Google Group: https://groups.google.com/g/music_reading_group. You will automatically subscribe to our weekly mailing list and be able to contact other members of the group.

Here is the link to our YouTube Channel where you’ll find recordings of our past meetings: https://www.youtube.com/channel/UCdrzCFRsIFGw2fiItAk5_Og.
Here is general information about the reading group (presentation slides): https://docs.google.com/presentation/d/1zkqooIksXDuD4rI2wVXiXZQmXXiAedtsAqcicgiNYLY/edit?usp=sharing.

Finally, if you would like to contribute and give a talk about your own research, feel free to fill in the following spreadsheet in the slot of your choice! —> https://docs.google.com/spreadsheets/d/1skb83P8I30XHmjnmyEbPAboy3Lrtavt_jHrD-9Q5U44/edit?usp=sharing

Bravo to the two student organizers for putting this together!

Calliope Composition Environment for music makers

From the August 10, 2022 Metacreation Lab announcement,

Calling all music makers! We’d like to share some exciting news on one of the latest music creation tools from its creators.

Calliope is an interactive environment based on MMM for symbolic music generation in computer-assisted composition. Using this environment, the user can generate or regenerate symbolic music from a “seed” MIDI file by way of a practical and easy-to-use graphical user interface (GUI). Through MIDI streaming, the system can interface with your favourite DAW (Digital Audio Workstation), such as Ableton Live, allowing creators to combine the possibilities of generative composition with their preferred virtual instruments and sound design environments.

The project has now entered an open beta-testing phase, and the team is inviting music creators to try the compositional system on their own! Head to the Metacreation website to learn more and register for the beta testing.

Learn More About Calliope Here

You can also listen to a Calliope piece “the synthrider,” an Italo-disco fantasy of a machine, by Philippe Pasquier and Renaud Bougueng Tchemeube for the 2022 AI Song Contest.
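For readers who like to see a workflow spelled out in code, here is a minimal sketch of the “seed MIDI in, generated MIDI out, stream to the DAW” flow described above. This is not Calliope’s code or API (Calliope is a GUI-driven tool); the file names, the generate_continuation() stand-in, and the port name are all hypothetical, and the MIDI handling leans on the open-source mido library.

```python
# Minimal sketch of the seed -> generate -> stream-to-DAW workflow described
# above. This is NOT Calliope's API: generate_continuation() is a hypothetical
# stand-in for whatever model call (e.g. an MMM-style generator) produces new
# material from the seed. MIDI handling uses the open-source mido library.
import mido


def generate_continuation(seed: mido.MidiFile) -> mido.MidiFile:
    """Hypothetical placeholder: a real system would condition a generative
    model (such as MMM) on the seed and return new or regenerated tracks.
    Here it simply echoes the seed so the sketch runs end to end."""
    return seed


seed = mido.MidiFile("seed.mid")            # the "seed" MIDI file (any MIDI file)
generated = generate_continuation(seed)     # generate/regenerate symbolic music
generated.save("generated.mid")             # keep a copy to drag into the DAW

# Stream the result to a DAW (e.g. Ableton Live) over a virtual MIDI port.
# virtual=True needs the python-rtmidi backend; the port name is arbitrary,
# and in the DAW you would select it as a MIDI input for a virtual instrument.
with mido.open_output("Calliope Sketch Out", virtual=True) as port:
    for msg in generated.play():            # .play() paces messages in real time
        if not msg.is_meta:
            port.send(msg)
```

In Calliope itself all of this happens through the GUI; the sketch is only meant to make the data flow (seed file, generation, MIDI streaming) explicit.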

3rd Conference on AI Music Creativity (AIMC 2022)

This is an online conference and it’s free, but you do have to register. From the August 10, 2022 Metacreation Lab announcement,

Registration has opened for the 3rd Conference on AI Music Creativity (AIMC 2022), which will be held 13-15 September, 2022. The conference features 22 accepted papers, 14 music works, and 2 workshops. Registered participants will get full access to the scientific and artistic program, as well as conference workshops and virtual social events.

The full conference program is now available online

Registration, free but mandatory, is available here:

Free Registration for AIMC 2022 

The conference theme is “The Sound of Future Past — Colliding AI with Music Tradition” and I noticed that a number of the organizers are based in Japan. Often, the organizers’ home country gets some extra time in the spotlight, which is what makes these international conferences so interesting and valuable.

Autolume Live

This concerns generative adversarial networks (GANs) and a paper proposing “… Autolume-Live, the first GAN-based live VJing-system for controllable video generation.”

Here’s more from the August 10, 2022 Metacreation Lab announcement,

Jonas Kraasch & Philippe Pasquier recently presented their latest work on the Autolume system at xCoAx, the 10th annual Conference on Computation, Communication, Aesthetics & X. Their paper is an in-depth exploration of the ways that creative artificial intelligence is increasingly used to generate static and animated visuals.

While there are a host of systems to generate images, videos and music videos, there is a lack of real-time video synthesisers for live music performances. To address this gap, Kraasch and Pasquier propose Autolume-Live, the first GAN-based live VJing-system for controllable video generation.

Autolume Live in the xCoAx proceedings

As these things go, the paper is readable even by nonexperts (assuming you have some tolerance for being out of your depth from time to time). Here’s a sample of the text, describing the installations (including one in Kelowna, BC), from the paper, Autolume-Live: Turning GANs into a Live VJing tool,

Due to the 2020-2022 situation surrounding COVID-19, we were unable to use our system to accompany live performances. We have used different iterations of Autolume-Live to create two installations. We recorded some curated sessions and displayed them at the Distopya sound art festival in Istanbul 2021 (Dystopia Sound and Art Festival 2021) and Light-Up Kelowna 2022 (ARTSCO 2022) [emphasis mine]. In both iterations, we let the audio mapping automatically generate the video without using any of the additional image manipulations. These installations show that the system on its own is already able to generate interesting and responsive visuals for a musical piece.

For the installation at the Distopya sound art festival we trained a StyleGAN2 (-ada) model on abstract paintings and rendered a video using the described Latent Space Traversal mapping. For this particular piece we ran a super-resolution model on the final video as the original video output was in 512×512 and the wanted resolution was 4k. For our piece at Light-Up Kelowna [emphasis mine] we ran Autolume-Live with the Latent Space Interpolation mapping. The display included three urban screens, which allowed us to showcase three renders at the same time. We composed a video triptych using a dataset of figure drawings, a dataset of medical sketches and, to tie the two videos together, a model trained on a mixture of both datasets.
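To give non-experts a feel for what “we let the audio mapping automatically generate the video” can mean in practice, here is a toy sketch of an audio-reactive latent space interpolation. It is not the Autolume-Live code: the generator below is a stand-in function, the per-frame RMS-energy mapping is one simple assumed choice, and the sample rate, frame rate, and latent size are assumptions.

```python
# Toy sketch of an audio-reactive latent space interpolation, in the spirit of
# the mappings described in the paper. NOT the Autolume-Live implementation:
# generate_frame() is a stand-in for a trained GAN generator, and driving the
# interpolation with per-frame RMS loudness is just one simple, assumed mapping.
import numpy as np

SAMPLE_RATE = 44100      # assumed audio sample rate
FPS = 30                 # assumed video frame rate
LATENT_DIM = 512         # typical StyleGAN2 latent size


def generate_frame(z: np.ndarray) -> np.ndarray:
    """Stand-in for a trained GAN generator (e.g. StyleGAN2) that maps a latent
    vector to an image. Here: a flat colour derived from the first three latent
    dimensions, just so the sketch runs without a model checkpoint."""
    colour = (np.tanh(z[:3]) + 1.0) / 2.0           # 3 values in 0..1
    return np.ones((512, 512, 3)) * colour          # 512x512 RGB frame


def rms_per_frame(audio: np.ndarray) -> np.ndarray:
    """Loudness (RMS energy) of each video frame's worth of audio, scaled 0..1."""
    hop = SAMPLE_RATE // FPS
    n_frames = len(audio) // hop
    chunks = audio[: n_frames * hop].reshape(n_frames, hop)
    rms = np.sqrt((chunks ** 2).mean(axis=1))
    return rms / (rms.max() + 1e-9)


# Latent space interpolation, in spirit: loud moments push each frame toward
# z_b, quiet moments pull it back toward z_a, so the visuals track the music.
rng = np.random.default_rng(0)
z_a = rng.standard_normal(LATENT_DIM)
z_b = rng.standard_normal(LATENT_DIM)

# Stand-in audio; a real setup would read a file or a live input buffer.
audio = rng.uniform(-1.0, 1.0, SAMPLE_RATE * 10)    # 10 seconds of noise

frames = [generate_frame((1.0 - e) * z_a + e * z_b) for e in rms_per_frame(audio)]
```

A real deployment would swap generate_frame for a trained StyleGAN2(-ada) generator and, as the authors describe for the Istanbul piece, could add a super-resolution pass to lift the 512×512 output toward 4K.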

I found some additional information about the installation in Kelowna (from a February 7, 2022 article in The Daily Courier),

The artwork is called ‘Autolume Acedia’.

“(It) is a hallucinatory meditation on the ancient emotion called acedia. Acedia describes a mixture of contemplative apathy, nervous nostalgia, and paralyzed angst,” the release states. “Greek monks first described this emotion two millennia ago, and it captures the paradoxical state of being simultaneously bored and anxious.”

Algorithms created the set-to-music artwork but a team of humans associated with Simon Fraser University, including Jonas Kraasch and Philippe Pasquier, was behind the project.

[Image caption: These are among the artistic images generated by a form of artificial intelligence now showing nightly on the exterior of the Rotary Centre for the Arts in downtown Kelowna. Downloaded from https://www.kelownadailycourier.ca/news/article_6f3cefea-886c-11ec-b239-db72e804c7d6.html]

You can find the videos used in the installation and more information on the Metacreation Lab’s Autolume Acedia webpage.

Movement and the Metacreation Lab

Here’s a walk down memory lane: Tom Calvert, a professor at Simon Fraser University (SFU) who died on September 28, 2021, laid the groundwork for SFU’s School of Interactive Arts & Technology (SIAT) and, in particular, for studies in movement. From SFU’s In memory of Tom Calvert webpage,

As a researcher, Tom was most interested in computer-based tools for user interaction with multimedia systems, human figure animation, software for dance, and human-computer interaction. He made significant contributions to research in these areas resulting in the Life Forms system for human figure animation and the DanceForms system for dance choreography. These are now developed and marketed by Credo Interactive Inc., a software company of which he was CEO.

While the Metacreation Lab is largely focused on music, other fields of creativity are also studied. From the August 10, 2022 Metacreation Lab announcement,

MITACS Accelerate award – partnership with Kinetyx

We are excited to announce that the Metacreation Lab researchers will be expanding their work on motion capture and movement data thanks to a new MITACS Accelerate research award. 

The project will focus on body pose estimation using Motion Capture data acquisition through a partnership with Kinetyx, a Calgary-based innovative technology firm that develops in-shoe sensor-based solutions for a broad range of sports and performance applications.

Movement Database – MoDa

On the subject of motion data and its many uses in conjunction with machine learning and AI, we invite you to check out the extensive Movement Database (MoDa), led by transdisciplinary artist and scholar Shannon Cuykendall and AI researcher Omid Alemi.

Spanning a wide range of categories such as dance, affect-expressive movements, gestures, eye movements, and more, this database offers a wealth of experiments and captured data available in a variety of formats.

Explore the MoDa Database

MITACS (originally a mathematics-focused network under the federal government’s Networks of Centres of Excellence program) is now a funding agency for innovation; most of the funds it distributes come from the federal government.

As for the Calgary-based company (in the province of Alberta for those unfamiliar with Canadian geography), here they are in their own words (from the Kinetyx About webpage),

Kinetyx® is a diverse group of talented engineers, designers, scientists, biomechanists, communicators, and creators, along with an energy trader, and a medical doctor that all bring a unique perspective to our team. A love of movement and the science within is the norm for the team, and we’re encouraged to put our sensory insoles to good use. We work closely together to make movement mean something.

We’re working towards a future where movement is imperceptibly quantified and indispensably communicated with insights that inspire action. We’re developing sensory insoles that collect high-fidelity data where the foot and ground intersect. Capturing laboratory quality data, out in the real world, unlocking entirely new ways to train, study, compete, and play. The insights we provide will unlock unparalleled performance, increase athletic longevity, and provide a clear path to return from injury. We transform lives by empowering our growing community to remain moved.

We believe that high quality data is essential for us to have a meaningful place in the Movement Metaverse [1]. Our team of engineers, sport scientists, and developers work incredibly hard to ensure that our insoles and the insights we gather from them will meet or exceed customer expectations. The forces that are created and experienced while standing, walking, running, and jumping are inferred by many wearables, but our sensory insoles allow us to measure, in real-time, what’s happening at the foot-ground intersection. Measurements of force and power in addition to other traditional gait metrics, will provide a clear picture of a part of the Kinesome [2] that has been inaccessible for too long. Our user interface will distill enormous amounts of data into meaningful insights that will lead to positive behavioral change. 

[1] The Movement Metaverse is the collection of ever-evolving immersive experiences that seamlessly span both the physical and virtual worlds with unprecedented interoperability.

[2] Kinesome is the dynamic characterization and quantification encoded in an individual’s movement and activity. Broadly: an individual’s unique and dynamic movement profile. View the kinesome nft. [Note: I was not able to successfully open the link as of August 11, 2022.]

“… make movement mean something … .” Really?

The reference to “… energy trader …” had me puzzled, but an August 11, 2022 Google search at 11:53 am PT unearthed this,

An energy trader is a finance professional who manages the sales of valuable energy resources like gas, oil, or petroleum. An energy trader is expected to handle energy production and financial matters in such a fast-paced workplace. (May 16, 2022)

Perhaps a new meaning for the term is emerging?

AI and visual art show in Vancouver (Canada)

The Vancouver Art Gallery’s (VAG) latest exhibition, “The Imitation Game: Visual Culture in the Age of Artificial Intelligence,” is running from March 5, 2022 to October 23, 2022. Should you be interested in an exhaustive examination of the exhibit and more, I have a two-part commentary: Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (1 of 2): The Objects and Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (2 of 2): Meditations.

Enjoy the show and/or the commentary, as well as any of the other events and opportunities listed in this post.
