Tag Archives: Philippe Pasquier

Highlights from Simon Fraser University’s (SFU) April 2025 Metacreation Lab newsletter

There’s a local (Vancouver, Canada) event coming up, as well as a call for papers, an opportunity to watch a workshop (previously presented in Toronto, Montréal, and Berlin) online, and more in these highlights from the April 2025 issue of the Metacreation Lab for Creative AI newsletter (received via email). The first items are listed in date order.

Ars Electronica and Vancouver AI [artificial intelligence] Community Meetup

From the April 2025 Metacreation Lab newsletter,

Call for Papers – EXPANDED 2025 at Ars Electronica

The 13th edition of the EXPANDED Conference, focusing on animation and interactive art, will be held from September 3–5, 2025, at the Ars Electronica Center in Linz, Austria, as part of the Ars Electronica Festival.

Organized in cooperation with ACM [Association for Computing Machinery], the conference invites submissions in categories of Research Papers and Art Papers. Topics of interest include AI-generated images, generative art, virtual production, human-AI collaboration, XR, and more.

Submission deadline: April 27, 2025

More Information

Vancouver AI – April 30 [2025] at the H.R. MacMillan Space Centre

Metacreation Lab has proudly supported the Vancouver AI Community Meetup since the beginning. This edition, Mission #16 of BC’s vibrant AI community meetup series, features a talk by Philippe Pasquier exploring the latest in generative and creative AI systems.

Also on the lineup is a special performance by K-PHI-A, a live trio featuring Philippe, PhD student Keon Ju Maverick Lee, and VJ Amagi (Jun Yuri). Their piece, Revival, is an improvisational audiovisual performance where human musicians and AI agents co-create in real time. It blends percussion, electronics, and AI-driven visuals using Autolume and other systems developed at the Metacreation Lab.

Event info and tickets

Ars Electronica: what is it?

Ars Electronica started life as a festival in 1979; the festival is still produced annually, and the organization behind it is now a much larger enterprise. From the Ars Electronica About webpage, Note: Links have been removed,

Art, Technology, Society

We have been analyzing and commenting on the Digital Revolution since 1979. Since then, we have been developing projects, strategies and competencies for the Digital Transformation. Together with artists, scientists, technologists, designers, developers, entrepreneurs and activists from all over the world, we address the central questions of our future. The focus is on new technologies and how they change the way we live and work together.

A new festival. The first Ars Electronica begins on September 18, 1979. 20 artists and scientists from all over the world gather at this new “Festival for Art, Technology and Society” in Linz to discuss the Digital Revolution and its possible consequences. This Ars Electronica is small, but groundbreaking. The initiative for this came from Hannes Leopoldseder (AT), director of the Upper Austria regional studio of the Austrian Broadcasting Company (ORF), who is passionate about everything that has to do with the future. Together with the electronic musician Hubert Bognermayr (AT), the music producer Ulli A. Rützel (DE) and the cyberneticist and physicist Herbert W. Franke (AT), he lays the foundation stone for a festival that will become the world’s largest and most important of its kind.

Between art, technology and society. Over the past four decades, a number of pioneers have turned Ars Electronica into a creative ecosystem that now enjoys a worldwide reputation.
 
Since 1979 we celebrate once a year the Ars Electronica Festival. More than 1,000 artists, scientists, developers, designers, entrepreneurs and activists are coming to Linz, Austria, to address central questions of our future. For five days, everything revolves around groundbreaking ideas and grand visions, unusual prototypes and innovative collaborations, inspiring art and groundbreaking research, extraordinary performances and irritating interventions, touching sounds and rousing concerts.
 
Since 1987 we have been awarding the Prix Ars Electronica every year. With several competition categories, we search for groundbreaking projects that revolve around questions of our digital society and rehearse the innovative use of technologies, promising strategies of collaboration and new forms of artistic expression. The best submissions will receive a Golden Nica, considered by the global media art scene to be the most traditional and prestigious award ever.
 
Since 1996 we have been working at the Ars Electronica Center year after year with tens of thousands of kindergarten children, pupils, apprentices and students on questions concerning the ever-increasing digitalization of our world. The focus is on the potential of the next Game Changer: Artificial Intelligence.
 
Also since 1996 we operate the Ars Electronica Futurelab, whose international and interdisciplinary team of artists and scientists is researching the future. With interactive scenarios, we prepare central aspects of the Digital Revolution for the general public in order to initiate a democratic discourse.
 
1998 we initiated create your world. The year-round programme is developed together with young people and includes a competition for under 19 year olds, a festival of its own and a tour through the region. We see create your world as an invitation and challenge at the same time and want to encourage young people to leave the role as mere users of technology behind, to discover new possibilities of acting and designing and to implement their own ideas.
 
2004 we started Ars Electronica Export with a big exhibition in New York. Since then we have been to Abuja, Athens, Bangkok, Beijing, Berlin, Bilbao, Brussels, Buenos Aires, Doha, Florence, Kiev, London, Madrid, Mexico City, Moscow, Mumbai, Osaka, Sao Paulo, Seoul, Shanghai, Singapore, Tokyo, Tunis, Venice and Zaragoza. Together with partners from art and culture, science and education, business and industry, we organize exhibitions and presentations, conferences and workshops, performances and interventions at all these locations.
 
Since 2013 our team at Ars Electronica Solutions has been developing market-ready products inspired by visions and prototypes from the artistic cosmos of Ars Electronica. We develop innovative, individual and interactive products and services for exhibitions, brands, trade fairs and events.
 
Since 2016 we are active all year round in Japan. Especially in Tokyo and Osaka we work together with leading Japanese universities, museums and companies, develop and present artistic projects, design workshop series and Open Labs and dedicate ourselves to the future of our digital society in conferences.
 
In order to actively shape the digital revolution, people are needed who have a feel for change and recognize connections, develop new strategies and set a course. This is precisely where the 2019 created Future Thinking School aims to support companies and institutions.
 
Whether at home in the living room or in the office, whether in the classroom or in the lecture hall, in the streetcar or subway, on the train – from everywhere Home Delivery accompanies our virtual visitors on an artistic-scientific journey into our future since 2020.
 
All our activities since September 18, 1979 have been documented in the form of texts, images and videos and stored in the Ars Electronica Archive. This archive provides us with a unique collection of descriptions and documentations of more than 75,000 projects from four decades of Ars Electronica.

The Conference

The conference (September 3 – 5, 2025) is held as part of the festival (September 3 – 7, 2025). The festival’s theme is PANIC yes/no. As for the conference, it does not seem to have a theme. From the Ars Electronica Expanded 2025 Conference on Animation and Interactive Art webpage, Note: A link has been removed,

The Expanded Conference (Expanded 2025) will take place from September 3rd to 5th as part of the Ars Electronica Festival 2025. This call for papers focuses on academic papers in the field of Expanded Animation and Interactive Art that explore and experiment with visual expression at the intersection of art, technology, and society. We will have two categories (Research Paper and Art Paper), where submissions will undergo a rigorous review process. All selected speakers will be given a free pass to the Ars Electronica Festival (September 3rd to 7th).

Topics of interest include, but are not limited to:

  • 3D Scanning
  • AI-generated Images
  • AI-based artworks
  • Artistic Computer Animation
  • Art & Science collaboration projects
  • Audio-visual Experiments
  • Data Journalism and Animated Documentary
  • Data Visualizations
  • Digital Media Art History
  • Digital, Hybrid, and Expanded Theater
  • Expanded Animation
  • Generative Art
  • Human-AI interaction and Human-AI collaboration
  • Hybrids between Animation and Game
  • Media Facades
  • Music Visualization
  • New approaches to artistic research and practice-based methodologies
  • Participatory art projects
  • Performance Projects
  • Playful Interactions and Experiences
  • Projection Mapping
  • Projects using NFT, Metaverse, Social Media
  • Reactive and Interactive audio/visual Work
  • Real-time CG
  • Scientific Visualizations
  • Site-specific Installations
  • Sound Art and Soundscapes
  • Tangible Interfaces and New Forms of Experiences
  • Transmedia Narratives
  • Virtual Humans and Environments
  • Virtual Production
  • VR, AR, MR, XR

Again, the submission deadline for your paper is April 27, 2025. Good luck!

Vancouver AI Community Meetup

Prepare yourself for some sticker shock. Tickets for the meetup are listed at $63.00. As noted earlier, there will be a “talk by Philippe Pasquier, exploring the latest in generative and creative AI systems” and a “special performance by K-PHI-A, a live trio featuring Philippe, PhD student Keon Ju Maverick Lee, and VJ Amagi (Jun Yuri). Their piece, Revival, is an improvisational audiovisual performance where human musicians and AI agents co-create in real time.”

Here’s more about Vancouver AI meetups in a video, which appears to have been excerpted from the March 2025 meetup,

Here’s more about the event from its YouTube webpage,

We Don’t Do Panels. We Do Portals. Vancouver AI: March 2025 Recap

This wasn’t a meetup. It was a lightning strike. A 3-hour detonation of mind, matter, and machine where open-source fire met ancestral spirit, and the UFO building lit up like a neural rave.

—————-

⚡ What Went Down:

Damian George (Steloston) & his son Ethan kicked the night off with a warrior’s welcome—Indigenous songs from Tsleil-Waututh territory that cracked open the veil and set the frequency.

—————-

Cai & Charlie spun lo-fi beats with a side of C++ sorcery. DIY synths, live visuals, and analog rebellion powered by AI hacks and imagination. This is what machine-human symbiosis sounds like.

—————-

Michael Tippett dropped cinematic subversion with Mr. Canada, a gonzo AI-generated political series where satire meets social critique and deepfakes become truth bombs. (The king has a button that disables the F-35 fleet—yeah, that happened.)

—————-

Cian Whalley, Zen priest & CTO, took us beyond the binary—teaching us how emotion, code, and consciousness intersect like neural lace. Toyota factory metaphors and Digital Buddha hotlines included.

—————-

Philippe Pasquier, the SFU professor we don’t deserve, taught us how to train your own AI models on your art. No scraping, no stealing. Just artists owning their data and their destiny. Bonus: transparent LED cubes and a revival performance next month with AI-powered music agents. 🔮🎶

—————-

Michelle from Women X AI showed us what a real grassroots intelligence network looks like: 45+ women in tech meeting monthly, giving back to the DTES, and building equity into the foundation of AI.

—————-

Niels showed us what radical vulnerability looks like—raw stories of startup survival, burnout, almost crashing (literally), and choosing sustainable hustle over hypergrowth hype.

—————-

Loki Jorgensen repped the new Mind, AI, and Consciousness crew—channeling 2,000 years of philosophical grind into one big ontological jam session. Curious cats only.

—————-

Patrick Pennefather & Kevin the Pixel Wizard rolled out UBC’s AI video lab with student creators turning prompts into art and AI into cinema. Kevin’s mentorship = 🔥.

—————-

Brittany Smila, our resident poet laureate, slayed the crowd with a poem that read like a bootleg instruction manual for being human. Typos included. Plum cake recipes too.

—————-

Darby stepped up with real UX [user experience design] energy—running card sorts and mapping our collective brain to build a proper web infrastructure for the VAI [Vancouver artificial intelligence] hive mind. Web3 who?

—————-

Rival Technologies’ Julia & Dale announced our first-ever Data Storytelling Hackathon. $2,500 prize, survey data that slaps, and a chance to show how AI can amplify truth instead of burying it. (Brittany wrote the hot dog prompt, you’re welcome.)

—————-

Cloud Summit’s YK Sugi, Bibi Souza & Andre made waves repping an all-volunteer, all-heart community cloud event coming in hot during Web Summit week. Code meets care. Sponsors fund causes. Real ones only.

—————-

Fergus dropped serious policy weight—WOSK Centre for Dialogue BC AI report now live. If you want a seat at the government table, this is your guy.

—————-

Kushal closed the night with a flamethrower. Called out UBC’s xenophobic DeepSeek ban. Defended open-source warriors from China and France (💥shoutout Mistral). No prisoners. No apologies. Just truth.

—————

Khayyam Wakil wrapped it all up with the keynote of the night: a design rebel’s journey from Saskatoon boats to LA VR labs to immersive media Emmys. Lessons in surrender, reinvention, and the real art of quitting right. 🔥

—————-

📍Location: H.R. MacMillan Space Centre — Vancouver, BC (aka the UFO mothership)

🪐 Astronomers on deck. Observatories open till 11. Community stays weird till 10. 🎧 Full audio, speaker list & projects: vancouver.bc-ai.net

🎟️ Next portal opens May 28: lu.ma/VAI17 🖤❤️✊🔥🏴

We don’t do TED Talks. We host real-time cultural reckonings. This is AI for the people—and it’s only getting louder. Bring your edge. Bring your stickers. Bring your weird.

You can go here to get your ticket for the April 30, 2025 Vancouver AI Community Meetup and to find out more about some of the AI events in Vancouver. You may want to check out the possibility of getting an annual pass or membership in the hope of making attendance more affordable.

Two papers and two workshop recordings from the Metacreation Lab

From the April 2025 Metacreation Lab newsletter,

Missed the Autolume Workshop? Watch It Online Now

After holding Autolume workshops in Toronto, Montreal, and Berlin, we brought the Autolume workshop online earlier in April, and the recordings are now available.

Whether you’re new to Autolume or want a refresher, this hands-on session walks you through training your own generative models, creating real-time visuals, and exploring interactive art, all without writing a single line of code.

Join our mailing list and hop into the Discord channel to stay in the loop and connect with others using Autolume.

Watch Workshop Recordings

Metacreation Lab at ISEA2025 in Seoul

The Metacreation Lab will be at ISEA 2025 with both a paper presentation and a live performance.

PhD student Arshia Sobhan, with Dr. Philippe Pasquier and Dr. Gabriela Aceves-Sepúlveda, will present “Broken Letters, Broken Narratives: A Case Study on Arabic Script in DALL-E 3”. This critical case study examines how text-to-image generative AI systems, such as DALL-E 3, misrepresent Arabic calligraphy, linking these failures to historical biases and Orientalist aesthetics.

The preprint is available here: https://arxiv.org/abs/2502.20459

In collaboration with sound artist Joshua Rodenberg, Arshia will also present “Reprising Elements,” an audiovisual performance combining Persian calligraphy, sound art, and generative AI powered by Autolume. This performance is an artistic endeavour that celebrates the fusion of time-honoured techniques with modern advancements.

Watch: https://youtu.be/ykNt7lNeL34?si=AIQUGKFDD0iVgt99

MIDI-GPT Paper Now Available in AAAI Proceedings

Our paper “MIDI-GPT: A Controllable Generative Model for Computer-Assisted Multitrack Music Composition” is now officially published in the proceedings of the 39th AAAI Conference on Artificial Intelligence.

MIDI-GPT leverages Transformer architecture to infill musical material at both track and bar levels, with controls for instrument type, style, note density, polyphony, and more. Our experiments show it generates original, stylistically coherent compositions while avoiding duplication from its training data. The system is already making waves through industry collaborations and artistic projects.

Read the paper
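
Out of curiosity, here’s a toy sketch (in Python) of what bar-level infilling looks like in principle. The token names and the masking scheme are my own inventions for illustration; the actual MIDI-GPT vocabulary and interface differ, and no trained model is involved here.

```python
# Toy illustration of bar-level infilling with attribute controls.
# Token names (BAR, FILL_IN, the control strings) are invented for this
# sketch; the real MIDI-GPT token vocabulary is different.

def mask_bar(tokens, bar_index):
    """Replace the contents of one bar with a FILL_IN placeholder."""
    out, bar, in_target = [], -1, False
    for tok in tokens:
        if tok == "BAR":
            bar += 1
            in_target = (bar == bar_index)
            out.append(tok)
            if in_target:
                out.append("FILL_IN")  # span for the model to regenerate
        elif not in_target:
            out.append(tok)
    return out

# One track, three bars; control tokens condition the generation.
controls = ["INSTRUMENT=piano", "DENSITY=high", "POLYPHONY=medium"]
track = ["BAR", "C4", "E4", "BAR", "G4", "B4", "BAR", "C5"]

prompt = controls + mask_bar(track, bar_index=1)
print(prompt)
# A trained model would now generate replacement tokens for FILL_IN,
# conditioned on the controls and on the surrounding, untouched bars.
```

The point of the infilling formulation is that the surrounding music acts as context, so the regenerated bar has to fit what comes before and after it.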

A quick note about ISEA 2025

I wrote about the upcoming symposium in my April 16, 2025 posting, “International Symposium on Electronic/Emerging Art 2025 (May 23 – 29, 2025) in Seoul, Korea.” If nothing else, you might want to check out the “ISEA theme, ‘동동 (憧憧, Dong-Dong): Creators’ Universe’, May 23 – 29, 2025 in Seoul” subsection. The theme intrigues me greatly.

Highlights from Simon Fraser University’s (SFU) July 2024 Metacreation Lab newsletter

There’s some exciting news for people interested in Ars Electronica (see more below the newsletter excerpt). For people who’d like to explore some of the same work from the Metacreation Lab in a locale that may be closer to their homes, there’s an exhibition on Salt Spring Island, British Columbia. Here are details from SFU’s Metacreation Lab newsletter, which hit my mailbox on July 22, 2024,

Metacreation Lab at Ars Electronica 2024

We are delighted to announce that the Metacreation Lab for Creative AI will be part of the prestigious Ars Electronica Festival. This year’s festival, titled “HOPE – who will turn the tide,” will take place in Linz [Austria] from September 4 to 8 [2024].

Representing the School of Interactive Arts and Technology (SIAT), we will showcase four innovative artworks. “Longing + Forgetting” by Philippe Pasquier, Matt Gingold, and Thecla Schiphorst explores pathfinding algorithms as metaphors for our personal and collective searches for solutions. “Autolume Mzton” by Jonas Kraasch and Philippe Pasquier examines the concept of birth through audio-reactive generative visuals. “Dreamscape” [emphasis mine] by Erica Lapadat-Janzen and Philippe Pasquier utilizes the Autolume system to train AI models with the artist’s own works, creating unique stills and video loops. “Ensemble” by Arshia Sobhan and Philippe Pasquier melds traditional Persian calligraphy with AI to create dynamic calligraphic forms.

We look forward to seeing you there!

More Information

MMM4Live Official Release; Generative MIDI in Ableton Live

We are ecstatic to release our Ableton plugin for computer-assisted music composition! Meet MMM4Live, our flexible and generic multi-track music AI generator. MMM4Live embeds our state-of-the-art music transformer model that allows generating fitting original musical patterns in any style! When generating, the AI model considers the request parameters, your instrument choice, and the existing musical MIDI content within your Ableton Live project to deliver relevant material. With this infilling approach, your music is the prompt!

We, at the Metacreation Lab for Creative AI at Simon Fraser University (SFU), are excited about democratizing and pushing the boundaries of musical creativity through academic research and serving diverse communities of creatives.

For additional inquiries, please do not hesitate to reach out to pasquier@sfu.ca

Try it out!

“Dreamscape” at the Provocation Exhibition

We are excited to announce that “Dreamscape,” a collaboration between Erica Lapadat-Janzen and Philippe Pasquier, will be exhibited at the Provocation exhibition from July 6th to August 10th, 2024.

In response to AI-generated art based on big data, the Metacreation Lab developed Autolume, a no-coding environment that allows artists to train AI models using their chosen works. For “Dreamscape,” the Metacreation Lab collaborated with Vancouver-based visual artist Erica Lapadat-Janzen. Using Autolume, they hand-picked and treated 12 stills and 9 video loops, capturing her unique aesthetic. Lapadat-Janzen’s media artworks, performances, and installations draw viewers into a world of equilibrium, where moments punctuate daily events to clarify our existence and find poetic meaning.

The Provocation exhibition brings artists and audiences together to celebrate and provoke conversations about contemporary living. The exhibition is at 215 Baker Rd, Salt Spring Island, BC, and is open to the public (free admission) every Saturday and Sunday from 12-4 pm.

More Information

Ars Electronica

It is both an institute and a festival, from the Ars Electronica Wikipedia entry, Note: Links have been removed,

Ars Electronica Linz GmbH is an Austrian cultural, educational and scientific institute active in the field of new media art, founded in Linz in 1979. It is based at the Ars Electronica Center (AEC), which houses the Museum of the Future, in the city of Linz. Ars Electronica’s activities focus on the interlinkages between art, technology and society. It runs an annual festival, and manages a multidisciplinary media arts R&D facility known as the Futurelab. It also confers the Prix Ars Electronica awards.

Ars Electronica began with its first festival in September 1979. …

The 2024 festival, as noted earlier, has the theme of ‘Hope’, from the Ars Electronica 2024 festival theme page,

HOPE

Optimism is not the belief that things will somehow work out, but rather the confidence in our ability to influence and bring about improvement. And that perhaps best describes the essence of the principle of hope, not as a passive position, but as an active force that motivates us to keep going despite adversity.

But don’t worry, this year’s festival will not be an examination of the psychological or even evolutionary foundations of the principle of hope, nor will it be a reflection on our unsteady fluctuation between hope and pessimism.

“HOPE” as a festival theme is not a resigned statement that all we can do is hope that someone or something will solve our problems, but rather a manifestation that there are actually many reasons for hope. This is expressed in the subtitle “who will turn the tide”, which does not claim to know how the turnaround can be achieved, but rather focuses on who the driving forces behind this turnabout are.

The festival’s goal is to spotlight as many people as possible who have already set out on their journey and whose activities—no matter how big or small—are a very concrete reason to have hope.

Believing in the possibility of change is the prerequisite for bringing about positive change, especially when all signs point to the fact that the paths we are currently taking are often dead ends.

But belief alone will not be enough; it requires a combination of belief, vision, cooperation, and a willingness to take concrete action. A willingness that we need, even if we are not yet sure how we will turn the tide, how we will solve the problems, and how we will deal with the effects of the problems that we are (no longer) able to solve.

Earlier, I highlighted ‘Dreamscape’ which can be seen at Ars Electronica 2024 or at the “Provocation” exhibition on Salt Spring Island. Hopefully, you have an opportunity to visit one of the locations. As for the Metacreation Lab for Creative AI, you can find out more here.

Highlights from Simon Fraser University’s (SFU) June 2024 Metacreation Lab newsletter

The latest newsletter from the Metacreation Lab for Creative AI (at Simon Fraser University [SFU]), features a ‘first’. From the June 2024 Metacreation Lab newsletter (received via email),

“Longing + Forgetting” at the 2024 Currents New Media Festival in Santa Fe

We are thrilled to announce that Longing + Forgetting has been invited to the esteemed Currents New Media Festival in Santa Fe, New Mexico. Longing + Forgetting is a generative audio-video installation that explores the relationship between humans and machines. This media art project, created by Canadian artists Philippe Pasquier and Thecla Schiphorst alongside Australian artist Matt Gingold, has garnered international acclaim since its inception. Initially presented in Canada in 2013, the piece has journeyed through multiple international festivals, captivating audiences with its exploration of human expression through movement.

Philippe Pasquier will be on-site for the festival, overseeing the site-specific installation at El Museo Cultural de Santa Fe. This marks the North American premiere of the redeveloped version of “Longing + Forgetting,” featuring a new soundtrack by Pasquier based solely on the close-mic recording of dancers.

Currents New Media Festival runs June 14–23, 2024 and brings together the work of established and emerging new media artists from around the world across various disciplines, with an expected 9,000 visitors during the festival’s run.

More Information

Discover “Longing + Forgetting” at Bunjil Place in Melbourne

We are excited to announce that “Longing + Forgetting” is being featured at Bunjil Place in Melbourne, Australia. As part of the Art After Dark Program curated by Angela Barnett, this outdoor screening will run from June 1 to June 28, illuminating the night from 5 pm to 7 pm.

More Information

Presenting “Unveiling New Artistic Dimensions in Calligraphic Arabic Script with GANs” at SIGGRAPH 2024

We are pleased to share that our paper, “Unveiling New Artistic Dimensions in Calligraphic Arabic Script with Generative Adversarial Networks,” will be presented at SIGGRAPH 2024, the premier conference on computer graphics and interactive techniques. The event will take place from July 28 to August 1, 2024, in Denver, Colorado.

This paper delves into the artistic potential of Generative Adversarial Networks (GANs) to create and innovate within the realm of calligraphic Arabic script, particularly the nastaliq style. By developing two custom datasets and leveraging the StyleGAN2-ada architecture, we have generated high-quality, stylistically coherent calligraphic samples. Our work bridges the gap between traditional calligraphy and modern technology and offers a new mode of creative expression for this art form.

SIGGRAPH’24

For those unfamiliar with the acronym, SIGGRAPH stands for Special Interest Group on Computer Graphics and Interactive Techniques. SIGGRAPH is huge and it’s a special interest group (SIG) of the ACM (Association for Computing Machinery).

If memory serves, this is the first time I’ve seen the Metacreation Lab make a request for volunteers, from the June 2024 Metacreation Lab newsletter,

Are you interested in music-making and AI technology?

The Metacreation Lab for Creative AI at Simon Fraser University (SFU), is conducting a research study in partnership with Steinberg Media Technologies GmbH. We are testing and evaluating MMM-Cubase v2, a creative AI system for assisting with music composition. The system is based on our best music transformer, the multitrack music machine (MMM), which can generate, re-generate or complete new musical content based on existing content.

There is no prerequisite for this study beyond a basic knowledge of DAWs [digital audio workstations] and MIDI. So everyone is welcome even if you do not consider yourself a composer but are interested in trying the system. The entire study should take you around 3 hours, and you must be 19+ years old. Basic interest and familiarity with digital music composition will help, but no experience with making music is required.

We seek to better evaluate the potential for adoption of such systems for novice/beginner as well as for seasoned composers. More specifically, you will be asked to install and use the system to compose a short 4-track musical composition and to fill out a survey questionnaire at the end.

Participation in this study is rewarded with one free Steinberg software license of your choice among Cubase Element, Dorico Element or Wavelab Element.

For any question or further inquiry, please contact researcher Renaud Bougueng Tchemeube directly at rbouguen@sfu.ca.

Enroll in the Study

You can find the Metacreation Lab for Creative AI website here.

Metacreation Lab’s greatest hits of Summer 2023

I received a May 31, 2023 ‘newsletter’ (via email) from Simon Fraser University’s (SFU) Metacreation Lab for Creative Artificial Intelligence and the first item celebrates some current and past work,

International Conference on New Interfaces for Musical Expression | NIME 2023
May 31 – June 2 | Mexico City, Mexico

We’re excited to be a part of NIME 2023, launching in Mexico City this week! 

As part of the NIME Paper Sessions, some of the Metacreation Lab’s members and affiliates will be presenting a study based on case studies of musicians playing with virtual musical agents. Titled eTu{d,b}e, the paper was co-authored by Tommy Davis, Kasey LV Pocius, and Vincent Cusson, developers of the eTube instrument, along with music technology and interface researchers Marcelo Wanderley and Philippe Pasquier. Learn about the project and listen to sessions involving human and non-human musicians.

This research project involved experimenting with Spire Muse, a virtual performance agent co-developed by Metacreation Lab members. The paper introducing the system was awarded the best paper award at the 2021 International Conference on New Interfaces for Musical Expression (NIME). 

Learn more about the NIME2023 conference and program at the link below, which will also present a series of online music concerts later this week.

Learn more about NIME 2023

Coming up later this summer and also from the May 31, 2023 newsletter,

Evaluating Human-AI Interaction for MMM-C: a Creative AI System for Music Composition | IJCAI [2023 International Joint Conference on Artificial Intelligence] Preview

For those following the impact of AI on music composition and production, we would like to share a sneak peek of a review of user experiences using an experimental AI-composition tool [Multi-Track Music Machine (MMM)] integrated into the Steinberg Cubase digital audio workstation. Conducted in partnership with Steinberg, this study will be presented at the 2023 International Joint Conference on Artificial Intelligence (IJCAI2023), as part of the Arts and Creativity track of the conference. This year’s IJCAI conference is taking place in Macao from August 19th to August 25th, 2023.

The conference is being held in Macao (or Macau), which is officially (according to its Wikipedia entry) the Macao Special Administrative Region of the People’s Republic of China (MSAR). It has a longstanding reputation as an international gambling and party mecca comparable to Las Vegas.

AI & creativity events for August and September 2022 (mostly)

This information about these events and papers comes courtesy of the Metacreation Lab for Creative AI (artificial intelligence) at Simon Fraser University and, as usual for the lab, the emphasis is on music.

Music + AI Reading Group @ Mila x Vector Institute

Philippe Pasquier, Metacreation Lab director and professor, is giving a presentation on Friday, August 12, 2022 at 11 am PST (2 pm EST). Here’s more from the August 10, 2022 Metacreation Lab announcement (received via email),

Metacreation Lab director Philippe Pasquier and PhD researcher Jeff Enns will be presenting next week [tomorrow, on August 12, 2022] at the Music + AI Reading Group hosted by Mila. The presentation will be available as a Zoom meeting.

Mila is a community of more than 900 researchers specializing in machine learning and dedicated to scientific excellence and innovation. The institute is recognized for its expertise and significant contributions in areas such as language modelling, machine translation, object recognition and generative models.

I believe it’s also possible to view the presentation from the Music + AI Reading Group at MILA: presentation by Dr. Philippe Pasquier webpage on the Simon Fraser University website.

For anyone curious about Mila – Québec Artificial Intelligence Institute (based in Montréal) and the Vector Institute for Artificial Intelligence (based in Toronto), both are part of the Pan-Canadian Artificial Intelligence Strategy (a Canadian federal government funding initiative).

Getting back to the Music + AI Reading Group @ Mila x Vector Institute, there is an invitation to join the group which meets every Friday at 2 pm EST, from the Google group page,

🎹🧠🚨 Online Music + AI Reading Group @ Mila x Vector Institute 🎹🧠🚨 [posted to Community Announcements, Feb 24, 2022]

Dear members of the ISMIR [International Society for Music Information Retrieval] Community,

Together with fellow researchers at Mila (the Québec AI Institute) in Montréal, canada [sic], we have the pleasure of inviting you to join the Music + AI Reading Group @ Mila x Vector Institute. Our reading group gathers every Friday at 2pm Eastern Time. Our purpose is to build an interdisciplinary forum of researchers, students and professors alike, across industry and academia, working at the intersection of Music and Machine Learning. 

During each meeting, a speaker presents a research paper of their choice during 45’, leaving 15 minutes for questions and discussion. The purpose of the reading group is to :
– Gather a group of Music+AI/HCI [human-computer interaction]/other people to share their research, build collaborations, and meet peer students. We are not constrained to any specific research directions, and all people are welcome to contribute.
– People share research ideas and brainstorm with others.
– Researchers not actively working on music-related topics but interested in the field can join and keep up with the latest research in the area, sharing their thoughts and bringing in their own backgrounds.

Our topics of interest cover (beware : the list is not exhaustive !) :
🎹 Music Generation
🧠 Music Understanding
📇 Music Recommendation
🗣  Source Separation and Instrument Recognition
🎛  Acoustics
🗿 Digital Humanities …
🙌  … and more (we are waiting for you :]) !


If you wish to attend one of our upcoming meetings, simply join our Google Group : https://groups.google.com/g/music_reading_group. You will automatically subscribe to our weekly mailing list and be able to contact other members of the group.

Here is the link to our Youtube Channel where you’ll find recordings of our past meetings : https://www.youtube.com/channel/UCdrzCFRsIFGw2fiItAk5_Og.
Here are general information about the reading group (presentation slides) : https://docs.google.com/presentation/d/1zkqooIksXDuD4rI2wVXiXZQmXXiAedtsAqcicgiNYLY/edit?usp=sharing.

Finally, if you would like to contribute and give a talk about your own research, feel free to fill in the following spreadsheet in the slot of your choice ! —> https://docs.google.com/spreadsheets/d/1skb83P8I30XHmjnmyEbPAboy3Lrtavt_jHrD-9Q5U44/edit?usp=sharing

Bravo to the two student organizers for putting this together!

Calliope Composition Environment for music makers

From the August 10, 2022 Metacreation Lab announcement,

Calling all music makers! We’d like to share some exciting news on one of the latest music creation tools from its creators.

Calliope is an interactive environment based on MMM for symbolic music generation in computer-assisted composition. Using this environment, the user can generate or regenerate symbolic music from a “seed” MIDI file by using a practical and easy-to-use graphical user interface (GUI). Through MIDI streaming, the system can interface with your favourite DAW (Digital Audio Workstation) such as Ableton Live, allowing creators to combine the possibilities of generative composition with their preferred virtual instruments and sound design environments.
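
To give a rough sense of what MIDI streaming into a DAW involves, here’s a minimal sketch using the Python mido library. This is my own illustration rather than anything from Calliope’s code; the port name is a placeholder, and on Windows a virtual port requires a loopback driver such as loopMIDI.

```python
# Minimal sketch: stream notes to a DAW over a virtual MIDI port.
# Requires: pip install mido python-rtmidi
import time
import mido

# Create a virtual output port that the DAW can select as a MIDI input
# (works on macOS/Linux; on Windows, route through a loopback driver).
with mido.open_output("GenerativeOut", virtual=True) as port:
    for note in [60, 64, 67, 72]:  # stand-ins for model-generated pitches
        port.send(mido.Message("note_on", note=note, velocity=80))
        time.sleep(0.25)
        port.send(mido.Message("note_off", note=note))
```

In Ableton Live, the “GenerativeOut” port would then show up as a MIDI input that can be armed on a track and routed to any virtual instrument.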

The project has now entered an open beta-testing phase, and music creators are invited to try the compositional system on their own! Head to the Metacreation website to learn more and register for the beta testing.

Learn More About Calliope Here

You can also listen to a Calliope piece “the synthrider,” an Italo-disco fantasy of a machine, by Philippe Pasquier and Renaud Bougueng Tchemeube for the 2022 AI Song Contest.

3rd Conference on AI Music Creativity (AIMC 2022)

This is an online conference and it’s free, but you do have to register. From the August 10, 2022 Metacreation Lab announcement,

Registration has opened  for the 3rd Conference on AI Music Creativity (AIMC 2022), which will be held 13-15 September, 2022. The conference features 22 accepted papers, 14 music works, and 2 workshops. Registered participants will get full access to the scientific and artistic program, as well as conference workshops and virtual social events. 

The full conference program is now available online

Registration, free but mandatory, is available here:

Free Registration for AIMC 2022 

The conference theme is “The Sound of Future Past — Colliding AI with Music Tradition” and I noticed that a number of the organizers are based in Japan. Often, the organizers’ home country gets some extra time in the spotlight, which is what makes these international conferences so interesting and valuable.

Autolume Live

This concerns generative adversarial networks (GANs) and a paper proposing “… Autolume-Live, the first GAN-based live VJing-system for controllable video generation.”

Here’s more from the August 10, 2022 Metacreation Lab announcement,

Jonas Kraasch & Philippe Pasquier recently presented their latest work on the Autolume system at xCoAx, the 10th annual Conference on Computation, Communication, Aesthetics & X. Their paper is an in-depth exploration of the ways that creative artificial intelligence is increasingly used to generate static and animated visuals.

While there are a host of systems to generate images, videos and music videos, there is a lack of real-time video synthesisers for live music performances. To address this gap, Kraasch and Pasquier propose Autolume-Live, the first GAN-based live VJing-system for controllable video generation.

Autolume Live on xCoAx proceedings  

As these things go, the paper is readable even by nonexperts (assuming you have some tolerance for being out of your depth from time to time). Here’s an example of the text and an installation (in Kelowna, BC) from the paper, Autolume-Live: Turning GANs into a Live VJing tool,

Due to the 2020-2022 situation surrounding COVID-19, we were unable to use our system to accompany live performances. We have used different iterations of Autolume-Live to create two installations. We recorded some curated sessions and displayed them at the Distopya sound art festival in Istanbul 2021 (Dystopia Sound and Art Festival 2021) and Light-Up Kelowna 2022 (ARTSCO 2022) [emphasis mine]. In both iterations, we let the audio mapping automatically generate the video without using any of the additional image manipulations. These installations show that the system on its own is already able to generate interesting and responsive visuals for a musical piece.

For the installation at the Distopya sound art festival we trained a StyleGAN2 (-ada) model on abstract paintings and rendered a video using the described Latent Space Traversal mapping. For this particular piece we ran a super-resolution model on the final video as the original video output was in 512×512 and the wanted resolution was 4k. For our piece at Light-Up Kelowna [emphasis mine] we ran Autolume-Live with the Latent Space Interpolation mapping. The display included three urban screens, which allowed us to showcase three renders at the same time. We composed a video triptych using a dataset of figure drawings, a dataset of medical sketches and to tie the two videos together a model trained on a mixture of both datasets.
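
If you’re curious about what a “Latent Space Interpolation mapping” does, here’s a bare-bones sketch of the technique. It’s my own illustration, not the lab’s code: the tiny untrained network below stands in for a trained StyleGAN2 generator, which is what Autolume-Live actually drives.

```python
# Latent space interpolation: walk smoothly between two latent vectors
# and render one frame per step. A trained StyleGAN2 generator would
# replace the untrained stand-in network below.
import torch

torch.manual_seed(0)
G = torch.nn.Sequential(           # stand-in generator: latent -> "image"
    torch.nn.Linear(512, 1024),
    torch.nn.Tanh(),
    torch.nn.Unflatten(1, (1, 32, 32)),
)

z0, z1 = torch.randn(1, 512), torch.randn(1, 512)  # endpoints of the walk

frames = []
for t in torch.linspace(0.0, 1.0, steps=30):
    z = (1 - t) * z0 + t * z1      # linear interpolation in latent space
    with torch.no_grad():
        frames.append(G(z))        # one video frame per latent point

print(len(frames), frames[0].shape)  # 30 frames, each 1x1x32x32
# In a live VJing setting, t (or the endpoints themselves) would be
# driven by audio features so the visuals respond to the music.
```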

I found some additional information about the installation in Kelowna (from a February 7, 2022 article in The Daily Courier),

The artwork is called ‘Autolume Acedia’.

“(It) is a hallucinatory meditation on the ancient emotion called acedia. Acedia describes a mixture of contemplative apathy, nervous nostalgia, and paralyzed angst,” the release states. “Greek monks first described this emotion two millennia ago, and it captures the paradoxical state of being simultaneously bored and anxious.”

Algorithms created the set-to-music artwork but a team of humans associated with Simon Fraser University, including Jonas Kraasch and Philippe Pasquier, was behind the project.

These are among the artistic images generated by a form of artificial intelligence now showing nightly on the exterior of the Rotary Centre for the Arts in downtown Kelowna. [downloaded from https://www.kelownadailycourier.ca/news/article_6f3cefea-886c-11ec-b239-db72e804c7d6.html]

You can find the videos used in the installation and more information on the Metacreation Lab’s Autolume Acedia webpage.

Movement and the Metacreation Lab

Here’s a walk down memory lane: Tom Calvert, a Simon Fraser University (SFU) professor who died on September 28, 2021, laid the groundwork for SFU’s School of Interactive Arts & Technology (SIAT) and, in particular, for its studies in movement. From SFU’s In memory of Tom Calvert webpage,

As a researcher, Tom was most interested in computer-based tools for user interaction with multimedia systems, human figure animation, software for dance, and human-computer interaction. He made significant contributions to research in these areas resulting in the Life Forms system for human figure animation and the DanceForms system for dance choreography. These are now developed and marketed by Credo Interactive Inc., a software company of which he was CEO.

While the Metacreation Lab is largely focused on music, other fields of creativity are also studied, from the August 10, 2022 Metacreation Lab announcement,

MITACS Accelerate award – partnership with Kinetyx

We are excited to announce that the Metacreation Lab researchers will be expanding their work on motion capture and movement data thanks to a new MITACS Accelerate research award. 

The project will focus on body pose estimation using Motion Capture data acquisition through a partnership with Kinetyx, a Calgary-based innovative technology firm that develops in-shoe sensor-based solutions for a broad range of sports and performance applications.

Movement Database – MoDa

On the subject of motion data and its many uses in conjunction with machine learning and AI, we invite you to check out the extensive Movement Database (MoDa), led by transdisciplinary artist and scholar Shannon Cyukendall, and AI Researcher Omid Alemi. 

Spanning a wide range of categories such as dance, affect-expressive movements, gestures, eye movements, and more, this database offers a wealth of experiments and captured data available in a variety of formats.

Explore the MoDa Database

MITACS (originally a federal government mathematics-focused Network of Centres of Excellence) is now a funding agency (most of the funds they distribute come from the federal government) for innovation.

As for the Calgary-based company (in the province of Alberta for those unfamiliar with Canadian geography), here they are in their own words (from the Kinetyx About webpage),

Kinetyx® is a diverse group of talented engineers, designers, scientists, biomechanists, communicators, and creators, along with an energy trader, and a medical doctor that all bring a unique perspective to our team. A love of movement and the science within is the norm for the team, and we’re encouraged to put our sensory insoles to good use. We work closely together to make movement mean something.

We’re working towards a future where movement is imperceptibly quantified and indispensably communicated with insights that inspire action. We’re developing sensory insoles that collect high-fidelity data where the foot and ground intersect. Capturing laboratory quality data, out in the real world, unlocking entirely new ways to train, study, compete, and play. The insights we provide will unlock unparalleled performance, increase athletic longevity, and provide a clear path to return from injury. We transform lives by empowering our growing community to remain moved.

We believe that high quality data is essential for us to have a meaningful place in the Movement Metaverse [1]. Our team of engineers, sport scientists, and developers work incredibly hard to ensure that our insoles and the insights we gather from them will meet or exceed customer expectations. The forces that are created and experienced while standing, walking, running, and jumping are inferred by many wearables, but our sensory insoles allow us to measure, in real-time, what’s happening at the foot-ground intersection. Measurements of force and power in addition to other traditional gait metrics, will provide a clear picture of a part of the Kinesome [2] that has been inaccessible for too long. Our user interface will distill enormous amounts of data into meaningful insights that will lead to positive behavioral change. 

[1] The Movement Metaverse is the collection of ever-evolving immersive experiences that seamlessly span both the physical and virtual worlds with unprecedented interoperability.

[2] Kinesome is the dynamic characterization and quantification encoded in an individual’s movement and activity. Broadly; an individual’s unique and dynamic movement profile. View the kinesome nft. [Note: Was not able to successfully open link as of August 11, 2022]

“… make movement mean something … .” Really?

The reference to “… energy trader …” had me puzzled but an August 11, 2022 Google search at 11:53 am PST unearthed this,

An energy trader is a finance professional who manages the sales of valuable energy resources like gas, oil, or petroleum. An energy trader is expected to handle energy production and financial matters in such a fast-paced workplace.May 16, 2022

Perhaps a new meaning for the term is emerging?

AI and visual art show in Vancouver (Canada)

The Vancouver Art Gallery’s (VAG) latest exhibition, “The Imitation Game: Visual Culture in the Age of Artificial Intelligence” is running March 5, 2022 – October 23, 2022. Should you be interested in an exhaustive examination of the exhibit and more, I have a two-part commentary: Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (1 of 2): The Objects and Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (2 of 2): Meditations.

Enjoy the show and/or the commentary, as well as any of the other events and opportunities listed in this post.

Night of ideas/Nuit des idées 2022: (Re)building Together on January 27, 2022 (7th edition in Canada)

Vancouver and other Canadian cities are participating in an international culture event, Night of ideas/Nuit des idées, organized by the French Institute (Institut de France), a French learned society first established in 1795 (during the French Revolution, which ran from 1789 to 1799 [Wikipedia entry]).

Before getting to the Canadian event, here’s more about the Night of Ideas from the event’s About Us page,

Initiated in 2016 during an exceptional evening that brought together in Paris foremost French and international thinkers invited to discuss the major issues of our time, the Night of Ideas has quickly become a fixture of the French and international agenda. Every year, on the last Thursday of January, the French Institute invites all cultural and educational institutions in France and on all five continents to celebrate the free flow of ideas and knowledge by offering, on the same evening, conferences, meetings, forums and round tables, as well as screenings, artistic performances and workshops, around a theme each one of them revisits in its own fashion.

(Re)building together

For the 7th Night of Ideas, which will take place on 27 January 2022, the theme “(Re)building together” has been chosen to explore the resilience and reconstruction of societies faced with singular challenges, solidarity and cooperation between individuals, groups and states, the mobilisation of civil societies and the challenges of building and making our objects. This Nuit des Idées will also be marked by the beginning of the French Presidency of the Council of the European Union.

According to the About Us page, the 2021 event counted participants in 104 countries and 190 cities, with over 200 events.

The French embassy in Canada (Ambassade de France au Canada) has a Night of Ideas/Nuit des idées 2022 webpage listing the Canadian events (Note: The times are local, e.g., 5 pm in Ottawa),

Ottawa: (Re)building through the arts, together

Moncton: (Re)building Together: How should we (re)think and (re)habilitate the post-COVID world?

Halifax: (Re)building together: Climate change — Building bridges between the present and future

Toronto: A World in Common

Edmonton: Introduction of the neutral pronoun “iel” — Can language influence the construction of identity?

Vancouver: (Re)building together with NFTs

Victoria: Committing in a time of uncertainty

Here’s a little more about the Vancouver event, from the Night of Ideas/Nuit des idées 2022 webpage,

Vancouver: (Re)building together with NFTs [non-fungible tokens]

NFTs, or non-fungible tokens, can be used as blockchain-based proofs of ownership. The new NFT “phenomenon” can be applied to any digital object: photos, videos, music, video game elements, and even tweets or highlights from sporting events.

Millions of dollars can be on the line when it comes to NFTs granting ownership rights to “crypto arts.” In addition to showing the signs of being a new speculative bubble, the market for NFTs could also lead to new experiences in online video gaming or in museums, and could revolutionize the creation and dissemination of works of art.

This evening will be an opportunity to hear from artists and professionals in the arts, technology and academia and to gain a better understanding of the opportunities that NFTs present for access to and the creation and dissemination of art and culture. Jesse McKee, Head of Strategy at 221A, Philippe Pasquier, Professor at School of Interactive Arts & Technology (SFU) and Rhea Myers, artist, hacker and writer will share their experiences in a session moderated by Dorothy Woodend, cultural editor for The Tyee.

- 7 p.m. on Zoom (registration here). Event broadcast online on France Canada Culture’s Facebook page. In English.

Not all of the events are in both languages.

One last thing, if you have some French and find puppets interesting, the event in Victoria, British Columbia features both, “Catherine Léger, linguist and professor at the University of Victoria, with whom we will discover and come to accept the diversity of French with the help of marionnettes [puppets]; … .”

SFU’s Philippe Pasquier speaks at “The rise of Creative AI and its ethics” online event on Tuesday, January 11, 2022 at 6 am PST

Simon Fraser University’s (SFU) Metacreation Lab for Creative AI (artificial intelligence) in Vancouver, Canada, has just sent me (via email) a January 2022 newsletter, which you can find here. There are two items I found of special interest.

Max Planck Centre for Humans and Machines Seminars

From the January 2022 newsletter,

Max Planck Institute Seminar – The rise of Creative AI & its ethics
January 11, 2022 at 15:00 pm [sic] CET | 6:00 am PST

Next Monday [sic], Philippe Pasquier, director of the Metacreation Lab, will be providing a seminar titled “The rise of Creative AI & its ethics” [Tuesday, January 11, 2022] at the Max Planck Institute’s Centre for Humans and Machine [sic].

The Centre for Humans and Machines invites interested attendees to our public seminars, which feature scientists from our institute and experts from all over the world. Their seminars usually take 1 hour and provide an opportunity to meet the speaker afterwards.

The seminar is openly accessible to the public via Webex Access, and will be a great opportunity to connect with colleagues and friends of the Lab on European and East Coast time. For more information and the link, head to the Centre for Humans and Machines’ Seminars page linked below.

Max Planck Institute – Upcoming Events

The Centre’s seminar description offers an abstract for the talk and a profile of Philippe Pasquier,

Creative AI is the subfield of artificial intelligence concerned with the partial or complete automation of creative tasks. In turn, creative tasks are those for which the notion of optimality is ill-defined. Unlike car driving, chess moves, jeopardy answers or literal translations, creative tasks are more subjective in nature. Creative AI approaches have been proposed and evaluated in virtually every creative domain: design, visual art, music, poetry, cooking, … These algorithms most often perform at human-competitive or superhuman levels for their precise task. Two main uses of these algorithms have emerged that have implications for workflows reminiscent of the industrial revolution:

– Augmentation (a.k.a. computer-assisted creativity or co-creativity): a human operator interacts with the algorithm, often in the context of already existing creative software.

– Automation (computational creativity): the creative task is performed entirely by the algorithms without human intervention in the generation process.

Both usages will have deep implications for education and work in creative fields. Away from the fear of strong – sentient – AI, taking over the world: What are the implications of these ongoing developments for students, educators and professionals? How will Creative AI transform the way we create, as well as what we create?

Philippe Pasquier is a professor at Simon Fraser University’s School for Interactive Arts and Technology, where he has directed the Metacreation Lab for Creative AI since 2008. Philippe leads a research-creation program centred around generative systems for creative tasks. As such, he is a scientist specialized in artificial intelligence, a multidisciplinary media artist, an educator, and a community builder. His contributions include theoretical research on generative systems, computational creativity, multi-agent systems, machine learning, affective computing, and evaluation methodologies. This work is applied in the creative software industry as well as through artistic practice in computer music, interactive and generative art.

Interpreting soundscapes

Folks at the Metacreation Lab have made available an interactive search engine for sounds, from the January 2022 newsletter,

Audio Metaphor is an interactive search engine that transforms users’ queries into soundscapes that interpret them. Using state-of-the-art algorithms for sound retrieval, segmentation, and background and foreground classification, AuMe offers a way to explore the vast open source library of sounds available on the freesound.org online community through natural language and its semantic, symbolic, and metaphorical expressions.

We’re excited to see Audio Metaphor included among many other innovative projects on Freesound Labs, a directory of projects, hacks, apps, research and other initiatives that use content from Freesound or use the Freesound API. Take a minute to check out the variety of projects applying creative coding, machine learning, and many other techniques towards the exploration of sound and music creation, generative music, and soundscape composition in diverse forms and interfaces.

Explore AuMe and other FreeSound Labs projects    

The Audio Metaphor (AuMe) webpage on the Metacreation Lab website has a few more details about the search engine,

Audio Metaphor (AuMe) is a research project aimed at designing new methodologies and tools for sound design and composition practices in film, games, and sound art. Through this project, we have identified the processes involved in working with audio recordings in creative environments, addressing these in our research by implementing computational systems that can assist human operations.

We have successfully developed Audio Metaphor for the retrieval of audio file recommendations from natural language texts, and even used phrases generated automatically from Twitter to sonify the current state of Web 2.0. Another significant achievement of the project has been in the segmentation and classification of environmental audio with composition-specific categories, which were then applied in a generative system approach. This allows users to generate sound design simply by entering textual prompts.

As we direct Audio Metaphor further toward perception and cognition, we will continue to contribute to the music information retrieval field through environmental audio classification and segmentation. The project will continue to be instrumental in the design and implementation of new tools for sound designers and artists.

See more information on the website audiometaphor.ca.

As for Freesound Labs, you can find them here.
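For the technically curious reader, here’s a rough sense of what AuMe’s first step, retrieving candidate sounds for a text phrase, might look like in practice. This is a minimal sketch against the public Freesound API, not the lab’s actual code; the API token and the query phrase are placeholders of my own.

import requests

API_TOKEN = "YOUR_FREESOUND_API_TOKEN"  # placeholder; apply for a key at freesound.org

def search_sounds(phrase, max_results=5):
    """Text-search Freesound and return (name, url) pairs for matching sounds."""
    resp = requests.get(
        "https://freesound.org/apiv2/search/text/",
        params={"query": phrase, "page_size": max_results, "fields": "name,url"},
        headers={"Authorization": f"Token {API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return [(s["name"], s["url"]) for s in resp.json()["results"]]

# A plain phrase stands in for AuMe's richer semantic and metaphorical processing.
for name, url in search_sounds("rain on a tin roof"):
    print(name, url)

Of course, AuMe layers segmentation, background/foreground classification, and composition on top of retrieval; this sketch covers only the first link in that chain.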

Art, sound, AI, & the Metacreation Lab’s Spring 2021 newsletter

The Metacreation Lab’s Spring 2021 newsletter (received via email) features a number of events either currently taking place or about to take place.

2021 AI Song Contest

2021 marks the second year for this international event, the artificial intelligence (AI) Song Contest 2021. The folks at Simon Fraser University’s (SFU) Metacreation Lab have an entry for the 2021 event, A song about the weekend (and you can do whatever you want). Should you click on the song entry, you will find an audio file, a survey/vote consisting of four questions and, if you keep scrolling down, more information about the creative team, the song and more,

Driven by collaborations involving scientists, experts in artificial intelligence, cognitive sciences, designers, and artists, the Metacreation Lab for Creative AI is at the forefront of the development of generative systems, whether these are embedded in interactive experiences or automating workflows integrated into cutting-edge creative software.

Team:

Cale Plut (Composer and musician) is a PhD Student in the Metacreation Lab, researching AI music applications in video games.

Philippe Pasquier (Producer and supervisor) is an Associate Professor, and leads the Metacreation Lab.

Jeff Ens (AI programmer) is a PhD Candidate in the Metacreation Lab, researching AI models for music generation.

Renaud Tchemeube (Producer and interaction designer) is a PhD Student in the Metacreation Lab, researching interaction software design for creativity.

Tara Jadidi (Research Assistant) is an undergraduate student at FUM, Iran, working with the Metacreation Lab.

Dimiter Zlatkov (Research Assistant) is an undergraduate student at UBC, working with the Metacreation Lab.

ABOUT THE SONG

A song about the weekend (and you can do whatever you want) explores the relationships between AI, humans, labour, and creation in a lighthearted and fun song. It is co-created with the Multi-track Music Machine (MMM).

Through the history of automation and industrialization, the relationship between the labour-magnifying power of automation and the recipients of the benefits of that magnification has been in contention. While increasing levels of automation are often accompanied by promises of future leisure increases, this rarely materializes for the workers whose labour is multiplied. By primarily using automated methods to create a “fun” song about leisure, we highlight both the promise of AI-human cooperation as well as the disparities in its real-world deployment.

As for the competition itself, here’s more from the FAQs (frequently asked questions),

What is the AI Song Contest?

AI Song Contest is an international creative AI contest. Teams from all over the world try to create a 4-minute pop song with the help of artificial intelligence.

When and where does it take place?

Between June 1, 2021 and July 1, 2021 voting is open for the international public. On July 6 there will be multiple online panel sessions, and the winner of the AI Song Contest 2021 will be announced in an online award ceremony. All sessions on July 6 are organised in collaboration with Wallifornia MusicTech.

How is the winner determined?

Each participating team will be awarded two sets of points: one set from a public vote by the contest’s international audience, the other from the determination of an expert jury.

Anyone can evaluate as many songs as they like: from one, up to all thirty-eight. Every song can be evaluated only once. Even though it won’t count in the grand total, lyrics can be evaluated too; we do like to determine which team wrote the best lyrics according to the audience.

Can I vote multiple times for the same team?

No, votes are controlled by IP address. So only one of your votes will count.

Is this the first time the contest is organised?

This is the second time the AI Song Contest is organised. The contest was first initiated in 2020 by Dutch public broadcaster VPRO together with NPO Innovation and NPO 3FM. Teams from Europe and Australia tried to create a Eurovision-style song with the help of AI. Team Uncanny Valley from Australia won the first edition with their song Beautiful the World. The 2021 edition is organised independently.

What is the definition of artificial intelligence in this contest?

Artificial intelligence is a very broad concept. For this contest it will mean that teams can use techniques such as (but not limited to) machine learning (for example, deep learning), natural language processing, algorithmic composition, or combinations of rule-based approaches with neural networks for the creation of their songs. Teams can create their own AI tools, or use existing models and algorithms.

What are possible challenges?

Read here about the challenges teams from last year’s contest faced.

As an AI researcher, can I collaborate with musicians?

Yes – this is strongly encouraged!

For the 2020 edition, all songs had to be Eurovision-style. Is that also the intention for 2021 entries?

Last year, the first year the contest was organized, it was indeed all about Eurovision. For this year’s competition, we are trying to expand geographically, culturally, and musically. Teams from all over the world can compete, and songs in all genres can be submitted.

If you’re not familiar with Eurovision-style, you can find a compilation video with brief excerpts from the 26 finalists for Eurovision 2021 here (Bill Young’s May 23, 2021 posting on tellyspotting.kera.org; the video runs under 10 mins.). There’s also the “Eurovision Song Contest: The Story of Fire Saga” 2020 movie starring Rachel McAdams, Will Ferrell, and Dan Stevens. It’s intended as a gentle parody but the style is all there.

ART MACHINES 2: International Symposium on Machine Learning and Art 2021

The symposium, Art Machines 2, started yesterday (June 10, 2021; it runs to June 14, 2021) in Hong Kong and SFU’s Metacreation Lab will be represented (from the Spring 2021 newsletter received via email),

On Sunday, June 13 [2021] at 21:45 Hong Kong Standard Time (UTC +8) as part of the Sound Art Paper Session chaired by Ryo Ikeshiro, the Metacreation Lab’s Mahsoo Salimi and Philippe Pasquier will present their paper, Exploiting Swarm Aesthetics in Sound Art. We’ve included a more detailed preview of the paper in this newsletter below.

Concurrent with ART MACHINES 2 is the launch of two exhibitions – Constructing Contexts and System Dreams. Constructing Contexts, curated by Tobias Klein and Rodrigo Guzman-Serrano, will bring together 27 works with unique approaches to the question of contexts as applied by generative adversarial networks. System Dreams highlights work from the latest MFA talent from the School of Creative Media. While the exhibitions take place in Hong Kong, the participating artists and artwork are well documented online.

Liminal Tones: Swarm Aesthetics in Sound Art

Applications of swarm aesthetics in music composition are not new and have already resulted in volumes of complex soundscapes and musical compositions. Using an experimental approach, Mahsoo Salimi and Philippe Pasquier create a series of sound textures known as Liminal Tones (B/ Rain Dream) based on swarming behaviours.

Findings of the Liminal Tones project will be presented in papers for Art Machines 2: International Symposium on Machine Learning and Art (June 10-14 [2021]) and the International Conference on Swarm Intelligence (July 17-21 [2021]).
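The paper itself isn’t excerpted in the newsletter, so here is my own toy illustration of the general idea behind swarm-driven sound: a one-dimensional “swarm” whose agents drift toward their centre of mass with some noise, with positions mapped (arbitrarily, for illustration) to pitches and the swarm’s spread to a loudness value. It’s a hedged sketch of the swarm-to-sound mapping concept, not a reconstruction of Salimi and Pasquier’s system.

import random

def step(positions, cohesion=0.05, jitter=0.5):
    """One swarm update: each agent drifts toward the flock centre, plus noise."""
    centre = sum(positions) / len(positions)
    return [p + cohesion * (centre - p) + random.uniform(-jitter, jitter)
            for p in positions]

positions = [random.uniform(0, 100) for _ in range(8)]
for t in range(4):
    positions = step(positions)
    # Arbitrary mappings: position -> MIDI-style pitch, spread -> loudness.
    pitches = [int(40 + p % 48) for p in positions]
    loudness = max(positions) - min(positions)
    print(f"t={t} pitches={pitches} loudness={loudness:.1f}")

As the swarm converges, the texture narrows and quiets; change the cohesion and jitter parameters and the “tone” changes character, which is the appeal of swarm aesthetics in a nutshell.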

Talk about Creative AI at the University of British Columbia

This is the last item I’m excerpting from the newsletter. (Should you be curious about what else is listed, you can go to the Metacreation Lab’s contact page and sign up for the newsletter there.) On June 22, 2021 at 2:00 PM PDT, there will be this event,

Creative AI: on the partial or complete automation of creative tasks @ CAIDA

Philippe Pasquier will be giving a talk on creative applications of AI at CAIDA: UBC ICICS Centre for Artificial Intelligence Decision-making and Action. Overviewing the state of the art of computer-assisted creativity and embedded systems and their various applications, the talk will survey the design, deployment, and evaluation of generative systems.

Free registration for the talk is available at the link below.

Register for Creative AI @ CAIDA

Remember, if you want to see the rest of the newsletter, you can sign up at the Metacreation Lab’s contact page.

Artificial Intelligence (AI), musical creativity conference, art creation, ISEA 2020 (Why Sentience?) recap, and more

I have a number of items from Simon Fraser University’s (SFU) Metacreation Lab January 2021 newsletter (received via email on Jan. 5, 2021).

The 29th International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence, or IJCAI-PRICAI2020, being held on Jan. 7 – 15, 2021

This first excerpt features a conference that’s currently taking place,

Musical Metacreation Tutorial at IJCAI – PRICAI 2020 [Yes, the 29th International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence or IJCAI-PRICAI2020 is being held in 2021!]

As part of the International Joint Conference on Artificial Intelligence (IJCAI – PRICAI 2020, January 7-15), Philippe Pasquier will lead a tutorial on Musical Metacreation. This tutorial aims at introducing the field of musical metacreation and its current developments, promises, and challenges.

The tutorial will be held this Friday, January 8th, from 9 am to 12:20 pm JST ([JST = Japanese Standard Time] 12 am to 3:20 am UTC [or 4 pm – 7:20 pm PST]) and a full description of the syllabus can be found here. For details about registration for the conference and tutorials, click below.

Register for IJCAI – PRICAI 2020

The conference will be held at a virtual venue created by Virtual Chair on the gather.town platform, which offers the spontaneity of mingling with colleagues from all over the world while in the comfort of your home. The platform will allow attendees to customize avatars to fit their mood, enjoy a virtual traditional Japanese village, take part in plenary talks and more.

Two calls for papers

These two excerpts from SFU’s Metacreation Lab January 2021 newsletter feature one upcoming conference and an upcoming workshop, both with calls for papers,

2nd Conference on AI Music Creativity (MuMe + CSMC)

The second Conference on AI Music Creativity brings together two overlapping research forums: The Computer Simulation of Music Creativity Conference (est. 2016) and The International Workshop on Musical Metacreation (est. 2012). The objective of the conference is to bring together scholars and artists interested in the emulation and extension of musical creativity through computational means and to provide them with an interdisciplinary platform in which to present and discuss their work in scientific and artistic contexts.

The 2021 Conference on AI Music Creativity will be hosted by the Institute of Electronic Music and Acoustics (IEM) of the University of Music and Performing Arts of Graz, Austria and held online. The five-day program will feature paper presentations, concerts, panel discussions, workshops, tutorials, sound installations and two keynotes.

AIMC 2021 Info & CFP

AIART 2021

The 3rd IEEE Workshop on Artificial Intelligence for Art Creation (AIART) has been announced for 2021, aiming to bring forward cutting-edge technologies and the most recent advances in the area of AI art, in terms of enabling creation, analysis, and understanding technologies. The theme of the workshop will be AI creativity, and it will be accompanied by a Special Issue of a renowned SCI journal.

AIART is inviting high-quality papers presenting or addressing issues related to AI art, across a wide range of topics. The submission due date is January 31, 2021, and you can learn about the accepted topics below:

AIART 2021 Info & CFP

Toying with music

SFU’s Metacreation Lab January 2021 newsletter also features a kind of musical toy,

MMM : Multi-Track Music Machine

One of the latest projects at the Metacreation Lab is MMM: a generative system based on the Transformer architecture, capable of generating multi-track music, developed by Jeff Ens and Philippe Pasquier.

Based on an auto-regressive model, the system is capable of generating music from scratch using a wide range of preset instruments. Inputs from one or several tracks can condition the generation of new tracks, resampling MIDI input from the user or adding further layers of music.

To learn more about the system and see it in action, click below and watch the demonstration video, hear some examples, or try the program yourself through Google Colab.

Explore MMM: Multi-Track Music Machine
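To give a flavour of what “auto-regressive” generation means here, below is a toy sketch: a stand-in model repeatedly samples the next musical token given everything generated so far, seeded with tokens from an existing track (the conditioning the description above mentions). The vocabulary, probabilities, and function names are all hypothetical; MMM’s actual Transformer and token format are described on the project page linked above.

import random

VOCAB = ["NOTE_ON_60", "NOTE_ON_64", "NOTE_ON_67", "TIME_SHIFT", "TRACK_END"]

def toy_model(context):
    """Stand-in for a trained Transformer: returns one probability per token.
    Here we merely discourage ending the track too early."""
    weights = [1.0] * len(VOCAB)
    weights[VOCAB.index("TRACK_END")] = 0.1 if len(context) < 16 else 2.0
    total = sum(weights)
    return [w / total for w in weights]

def generate_track(conditioning_tokens, max_len=64):
    """Autoregressively extend a sequence, one sampled token at a time."""
    seq = list(conditioning_tokens)
    while len(seq) < max_len:
        probs = toy_model(seq)
        token = random.choices(VOCAB, weights=probs, k=1)[0]
        seq.append(token)
        if token == "TRACK_END":
            break
    return seq

# Condition a new track on a short melody fragment (as if taken from user MIDI).
print(generate_track(["NOTE_ON_60", "TIME_SHIFT"]))

The real system replaces toy_model with a Transformer trained on multi-track MIDI, so the sampled continuation is musically coherent with the conditioning tracks rather than random.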

Why Sentience?

Finally, for anyone who was wondering what happened at the 2020 International Symposium on Electronic Art (ISEA 2020) held virtually in Montreal in the fall, here’s some news from SFU’s Metacreation Lab January 2021 newsletter,

ISEA2020 Recap // Why Sentience? 

As we look back at an unprecedented year, some of the questions explored at ISEA2020 are more salient now than ever. This recap video highlights some of the most memorable moments from last year’s virtual symposium.

ISEA2020 // Why Sentience? Recap Video

The Metacreation Lab’s researchers explored some of these guiding questions at ISEA2020 with two papers presented at the symposium: Chatterbox: an interactive system of gibberish agents and Liminal Scape, An Interactive Visual Installation with Expressive AI. These papers, and the full proceedings from ISEA2020, can now be accessed below.

ISEA2020 Proceedings

The video is a slick, flashy, and fun 15 minutes or so. In addition to the recap for ISEA 2020, there’s a plug for ISEA 2022 in Barcelona, Spain.

The proceedings took my system a while to download (there are approximately 700 pp.). By the way, here’s another link to the proceedings, or rather to the archives for the 2020 and previous years’ ISEA proceedings.