Tag Archives: Philippe Pasquier

AI & creativity events for August and September 2022 (mostly)

This information about these events and papers comes courtesy of the Metacreation Lab for Creative AI (artificial intelligence) at Simon Fraser University and, as usual for the lab, the emphasis is on music.

Music + AI Reading Group @ Mila x Vector Institute

Philippe Pasquier, Metacreation Lab director and professor, is giving a presentation on Friday, August 12, 2022 at 11 am PDT (2 pm EDT). Here’s more from the August 10, 2022 Metacreation Lab announcement (received via email),

Metacreation Lab director Philippe Pasquier and PhD researcher Jeff Enns will be presenting next week [tomorrow, August 12, 2022] at the Music + AI Reading Group hosted by Mila. The presentation will be available as a Zoom meeting. 

Mila is a community of more than 900 researchers specializing in machine learning and dedicated to scientific excellence and innovation. The institute is recognized for its expertise and significant contributions in areas such as language modelling, machine translation, object recognition and generative models.

I believe the presentation can also be viewed from the “Music + AI Reading Group at MILA: presentation by Dr. Philippe Pasquier” webpage on the Simon Fraser University website.

For anyone curious about Mila – Québec Artificial Intelligence Institute (based in Montréal) and the Vector Institute for Artificial Intelligence (based in Toronto), both are part of the Pan-Canadian Artificial Intelligence Strategy (a Canadian federal government funding initiative).

Getting back to the Music + AI Reading Group @ Mila x Vector Institute, there is an invitation to join the group which meets every Friday at 2 pm EST, from the Google group page,

🎹🧠🚨 Online Music + AI Reading Group @ Mila x Vector Institute 🎹🧠🚨 (posted to Community Announcements, Feb 24, 2022)

Dear members of the ISMIR [International Society for Music Information Retrieval] Community,

Together with fellow researchers at Mila (the Québec AI Institute) in Montréal, canada [sic], we have the pleasure of inviting you to join the Music + AI Reading Group @ Mila x Vector Institute. Our reading group gathers every Friday at 2pm Eastern Time. Our purpose is to build an interdisciplinary forum of researchers, students and professors alike, across industry and academia, working at the intersection of Music and Machine Learning. 

During each meeting, a speaker presents a research paper of their choice for 45 minutes, leaving 15 minutes for questions and discussion. The purpose of the reading group is to:
– Gather a group of Music+AI/HCI [human-computer interaction]/other people to share their research, build collaborations, and meet peer students. We are not constrained to any specific research directions, and all people are welcome to contribute.
– Share research ideas and brainstorm with others.
– Let researchers not actively working on music-related topics but interested in the field join and keep up with the latest research in the area, sharing their thoughts and bringing in their own backgrounds.

Our topics of interest cover (beware: the list is not exhaustive!):
🎹 Music Generation
🧠 Music Understanding
📇 Music Recommendation
🗣  Source Separation and Instrument Recognition
🎛  Acoustics
🗿 Digital Humanities …
🙌  … and more (we are waiting for you :]) !


If you wish to attend one of our upcoming meetings, simply join our Google Group: https://groups.google.com/g/music_reading_group. You will automatically subscribe to our weekly mailing list and be able to contact other members of the group.

Here is the link to our YouTube channel, where you’ll find recordings of our past meetings: https://www.youtube.com/channel/UCdrzCFRsIFGw2fiItAk5_Og.
Here is general information about the reading group (presentation slides): https://docs.google.com/presentation/d/1zkqooIksXDuD4rI2wVXiXZQmXXiAedtsAqcicgiNYLY/edit?usp=sharing.

Finally, if you would like to contribute and give a talk about your own research, feel free to fill in the following spreadsheet in the slot of your choice! —> https://docs.google.com/spreadsheets/d/1skb83P8I30XHmjnmyEbPAboy3Lrtavt_jHrD-9Q5U44/edit?usp=sharing

Bravo to the two student organizers for putting this together!

Calliope Composition Environment for music makers

From the August 10, 2022 Metacreation Lab announcement,

Calling all music makers! We’d like to share some exciting news about one of the latest music creation tools from its creators.

Calliope is an interactive environment based on MMM for symbolic music generation in computer-assisted composition. Using this environment, the user can generate or regenerate symbolic music from a “seed” MIDI file by using a practical and easy-to-use graphical user interface (GUI). Through MIDI streaming, the system can interface with your favourite DAW (digital audio workstation), such as Ableton Live, allowing creators to combine the possibilities of generative composition with their preferred virtual instruments and sound-design environments.

The project has now entered an open beta-testing phase and is inviting music creators to try the compositional system on their own! Head to the Metacreation website to learn more and register for the beta testing.

Learn More About Calliope Here

You can also listen to a Calliope piece “the synthrider,” an Italo-disco fantasy of a machine, by Philippe Pasquier and Renaud Bougueng Tchemeube for the 2022 AI Song Contest.
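Calliope’s seed-and-regenerate idea (generate new symbolic music that stays close to a seed’s material) can be illustrated with a deliberately tiny sketch: a first-order Markov chain over the seed’s MIDI pitch numbers stands in for the MMM model. Every name and the seed phrase below are invented for illustration; this is not Calliope’s actual implementation or API.

```python
import random

def build_transitions(seed_pitches):
    """Learn first-order pitch transitions from a 'seed' melody (MIDI note numbers)."""
    transitions = {}
    for a, b in zip(seed_pitches, seed_pitches[1:]):
        transitions.setdefault(a, []).append(b)
    return transitions

def regenerate(seed_pitches, length, rng=random.Random(0)):
    """Generate a new melody that stays within the seed's pitch vocabulary."""
    transitions = build_transitions(seed_pitches)
    note = seed_pitches[0]
    out = [note]
    for _ in range(length - 1):
        # Follow a learned transition; fall back to any seed pitch if none exists.
        note = rng.choice(transitions.get(note, seed_pitches))
        out.append(note)
    return out

# A C-major seed phrase (MIDI note numbers: C4 = 60).
seed = [60, 62, 64, 65, 67, 65, 64, 62, 60]
melody = regenerate(seed, 16)
```

In a real system like Calliope, the regenerated notes would be written back to a MIDI file or streamed to a DAW; here `melody` is just a list of pitches derived from the seed.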

3rd Conference on AI Music Creativity (AIMC 2022)

This is an online conference and it’s free, but you do have to register. From the August 10, 2022 Metacreation Lab announcement,

Registration has opened  for the 3rd Conference on AI Music Creativity (AIMC 2022), which will be held 13-15 September, 2022. The conference features 22 accepted papers, 14 music works, and 2 workshops. Registered participants will get full access to the scientific and artistic program, as well as conference workshops and virtual social events. 

The full conference program is now available online

Registration, free but mandatory, is available here:

Free Registration for AIMC 2022 

The conference theme is “The Sound of Future Past — Colliding AI with Music Tradition” and I noticed that a number of the organizers are based in Japan. Often, the organizers’ home country gets some extra time in the spotlight, which is what makes these international conferences so interesting and valuable.

Autolume Live

This concerns generative adversarial networks (GANs) and a paper proposing “… Autolume-Live, the first GAN-based live VJing-system for controllable video generation.”

Here’s more from the August 10, 2022 Metacreation Lab announcement,

Jonas Kraasch & Philippe Pasquier recently presented their latest work on the Autolume system at xCoAx, the 10th annual Conference on Computation, Communication, Aesthetics & X. Their paper is an in-depth exploration of the ways that creative artificial intelligence is increasingly used to generate static and animated visuals. 

While there are a host of systems to generate images, videos and music videos, there is a lack of real-time video synthesisers for live music performances. To address this gap, Kraasch and Pasquier propose Autolume-Live, the first GAN-based live VJing-system for controllable video generation.

Autolume Live on xCoAx proceedings  

As these things go, the paper is readable even by nonexperts (assuming you have some tolerance for being out of your depth from time to time). Here’s an example of the text and an installation (in Kelowna, BC) from the paper, Autolume-Live: Turning GANs into a Live VJing tool,

Due to the 2020-2022 situation surrounding COVID-19, we were unable to use our system to accompany live performances. We have used different iterations of Autolume-Live to create two installations. We recorded some curated sessions and displayed them at the Distopya sound art festival in Istanbul 2021 (Dystopia Sound and Art Festival 2021) and Light-Up Kelowna 2022 (ARTSCO 2022) [emphasis mine]. In both iterations, we let the audio mapping automatically generate the video without using any of the additional image manipulations. These installations show that the system on its own is already able to generate interesting and responsive visuals for a musical piece.

For the installation at the Distopya sound art festival we trained a StyleGAN2(-ada) model on abstract paintings and rendered a video using the described Latent Space Traversal mapping. For this particular piece we ran a super-resolution model on the final video, as the original video output was in 512×512 and the wanted resolution was 4K. For our piece at Light-Up Kelowna [emphasis mine] we ran Autolume-Live with the Latent Space Interpolation mapping. The display included three urban screens, which allowed us to showcase three renders at the same time. We composed a video triptych using a dataset of figure drawings, a dataset of medical sketches and, to tie the two videos together, a model trained on a mixture of both datasets.
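As a rough illustration of what a “Latent Space Interpolation” audio mapping might look like, the sketch below normalizes a loudness envelope to [0, 1] and uses it to interpolate between two latent vectors; a GAN generator would then render one video frame per latent vector. The envelope, vectors, and function names are all invented for illustration; this is not the Autolume code.

```python
def lerp(z0, z1, a):
    """Linearly interpolate between two latent vectors z0 and z1 by a in [0, 1]."""
    return [(1 - a) * x0 + a * x1 for x0, x1 in zip(z0, z1)]

def loudness_to_latents(envelope, z0, z1):
    """Map a per-frame loudness envelope to a sequence of latent vectors.

    Each audio frame's normalized loudness picks a point on the line
    between z0 (quiet) and z1 (loud); feeding these latents to a GAN
    generator would yield one video frame per audio frame.
    """
    peak = max(envelope) or 1.0  # avoid dividing by zero on silence
    return [lerp(z0, z1, v / peak) for v in envelope]

# Toy 4-dimensional latents and a short, made-up loudness envelope.
z_quiet = [0.0, 0.0, 0.0, 0.0]
z_loud = [1.0, -1.0, 0.5, 2.0]
frames = loudness_to_latents([0.0, 0.5, 1.0], z_quiet, z_loud)
```

The design choice worth noting is that loudness only steers *where* in latent space the generator samples; the visual character comes entirely from whatever dataset the GAN was trained on (abstract paintings, figure drawings, and so on, in the installations above).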

I found some additional information about the installation in Kelowna (from a February 7, 2022 article in The Daily Courier),

The artwork is called ‘Autolume Acedia’.

“(It) is a hallucinatory meditation on the ancient emotion called acedia. Acedia describes a mixture of contemplative apathy, nervous nostalgia, and paralyzed angst,” the release states. “Greek monks first described this emotion two millennia ago, and it captures the paradoxical state of being simultaneously bored and anxious.”

Algorithms created the set-to-music artwork but a team of humans associated with Simon Fraser University, including Jonas Kraasch and Philippe Pasquier, was behind the project.

These are among the artistic images generated by a form of artificial intelligence now showing nightly on the exterior of the Rotary Centre for the Arts in downtown Kelowna. [downloaded from https://www.kelownadailycourier.ca/news/article_6f3cefea-886c-11ec-b239-db72e804c7d6.html]

You can find the videos used in the installation and more information on the Metacreation Lab’s Autolume Acedia webpage.

Movement and the Metacreation Lab

Here’s a walk down memory lane: Tom Calvert, a Simon Fraser University (SFU) professor who died on September 28, 2021, laid the groundwork for SFU’s School of Interactive Arts & Technology (SIAT) and, in particular, its studies in movement. From SFU’s In memory of Tom Calvert webpage,

As a researcher, Tom was most interested in computer-based tools for user interaction with multimedia systems, human figure animation, software for dance, and human-computer interaction. He made significant contributions to research in these areas resulting in the Life Forms system for human figure animation and the DanceForms system for dance choreography. These are now developed and marketed by Credo Interactive Inc., a software company of which he was CEO.

While the Metacreation Lab is largely focused on music, other fields of creativity are also studied, from the August 10, 2022 Metacreation Lab announcement,

MITACS Accelerate award – partnership with Kinetyx

We are excited to announce that the Metacreation Lab researchers will be expanding their work on motion capture and movement data thanks to a new MITACS Accelerate research award. 

The project will focus on ​​body pose estimation using Motion Capture data acquisition through a partnership with Kinetyx, a Calgary-based innovative technology firm that develops in-shoe sensor-based solutions for a broad range of sports and performance applications.

Movement Database – MoDa

On the subject of motion data and its many uses in conjunction with machine learning and AI, we invite you to check out the extensive Movement Database (MoDa), led by transdisciplinary artist and scholar Shannon Cyukendall, and AI Researcher Omid Alemi. 

Spanning a wide range of categories such as dance, affect-expressive movements, gestures, eye movements, and more, this database offers a wealth of experiments and captured data available in a variety of formats.

Explore the MoDa Database

MITACS (originally a mathematics-focused federal Network Centre of Excellence) is now a funding agency for innovation (most of the funds it distributes come from the federal government).

As for the Calgary-based company (in the province of Alberta for those unfamiliar with Canadian geography), here they are in their own words (from the Kinetyx About webpage),

Kinetyx® is a diverse group of talented engineers, designers, scientists, biomechanists, communicators, and creators, along with an energy trader, and a medical doctor that all bring a unique perspective to our team. A love of movement and the science within is the norm for the team, and we’re encouraged to put our sensory insoles to good use. We work closely together to make movement mean something.

We’re working towards a future where movement is imperceptibly quantified and indispensably communicated with insights that inspire action. We’re developing sensory insoles that collect high-fidelity data where the foot and ground intersect. Capturing laboratory quality data, out in the real world, unlocking entirely new ways to train, study, compete, and play. The insights we provide will unlock unparalleled performance, increase athletic longevity, and provide a clear path to return from injury. We transform lives by empowering our growing community to remain moved.

We believe that high quality data is essential for us to have a meaningful place in the Movement Metaverse [1]. Our team of engineers, sport scientists, and developers work incredibly hard to ensure that our insoles and the insights we gather from them will meet or exceed customer expectations. The forces that are created and experienced while standing, walking, running, and jumping are inferred by many wearables, but our sensory insoles allow us to measure, in real-time, what’s happening at the foot-ground intersection. Measurements of force and power in addition to other traditional gait metrics, will provide a clear picture of a part of the Kinesome [2] that has been inaccessible for too long. Our user interface will distill enormous amounts of data into meaningful insights that will lead to positive behavioral change. 

[1] The Movement Metaverse is the collection of ever-evolving immersive experiences that seamlessly span both the physical and virtual worlds with unprecedented interoperability.

[2] Kinesome is the dynamic characterization and quantification encoded in an individual’s movement and activity. Broadly: an individual’s unique and dynamic movement profile. View the kinesome nft. [Note: Was not able to successfully open link as of August 11, 2022]

“… make movement mean something … .” Really?

The reference to “… energy trader …” had me puzzled but an August 11, 2022 Google search at 11:53 am PST unearthed this,

An energy trader is a finance professional who manages the sales of valuable energy resources like gas, oil, or petroleum. An energy trader is expected to handle energy production and financial matters in such a fast-paced workplace. (May 16, 2022)

Perhaps a new meaning for the term is emerging?

AI and visual art show in Vancouver (Canada)

The Vancouver Art Gallery’s (VAG) latest exhibition, “The Imitation Game: Visual Culture in the Age of Artificial Intelligence” is running March 5, 2022 – October 23, 2022. Should you be interested in an exhaustive examination of the exhibit and more, I have a two-part commentary: Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (1 of 2): The Objects and Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (2 of 2): Meditations.

Enjoy the show and/or the commentary, as well as, any other of the events and opportunities listed in this post.

Night of ideas/Nuit des idées 2022: (Re)building Together on January 27, 2022 (7th edition in Canada)

Vancouver and other Canadian cities are participating in an international culture event, Night of Ideas/Nuit des idées, organized by the French Institute (Institut français), France’s agency for international cultural action.

Before getting to the Canadian event, here’s more about the Night of Ideas from the event’s About Us page,

Initiated in 2016 during an exceptional evening that brought together in Paris foremost French and international thinkers invited to discuss the major issues of our time, the Night of Ideas has quickly become a fixture of the French and international agenda. Every year, on the last Thursday of January, the French Institute invites all cultural and educational institutions in France and on all five continents to celebrate the free flow of ideas and knowledge by offering, on the same evening, conferences, meetings, forums and round tables, as well as screenings, artistic performances and workshops, around a theme each one of them revisits in its own fashion.

(Re)building together

For the 7th Night of Ideas, which will take place on 27 January 2022, the theme “(Re)building together” has been chosen to explore the resilience and reconstruction of societies faced with singular challenges, solidarity and cooperation between individuals, groups and states, the mobilisation of civil societies and the challenges of building and making our objects. This Nuit des Idées will also be marked by the beginning of the French Presidency of the Council of the European Union.

According to the About Us page, the 2021 event counted participants in 104 countries and 190 cities, with over 200 events.

The French embassy in Canada (Ambassade de France au Canada) has a Night of Ideas/Nuit des idées 2022 webpage listing the Canadian events (Note: The times are local, e.g., 5 pm in Ottawa),

Ottawa: (Re)building through the arts, together

Moncton: (Re)building Together: How should we (re)think and (re)habilitate the post-COVID world?

Halifax: (Re)building together: Climate change — Building bridges between the present and future

Toronto: A World in Common

Edmonton: Introduction of the neutral pronoun “iel” — Can language influence the construction of identity?

Vancouver: (Re)building together with NFTs

Victoria: Committing in a time of uncertainty

Here’s a little more about the Vancouver event, from the Night of Ideas/Nuit des idées 2022 webpage,

Vancouver: (Re)building together with NFTs [non-fungible tokens]

NFTs, or non-fungible tokens, can be used as blockchain-based proofs of ownership. The new NFT “phenomenon” can be applied to any digital object: photos, videos, music, video game elements, and even tweets or highlights from sporting events.

Millions of dollars can be on the line when it comes to NFTs granting ownership rights to “crypto arts.” In addition to showing the signs of being a new speculative bubble, the market for NFTs could also lead to new experiences in online video gaming or in museums, and could revolutionize the creation and dissemination of works of art.

This evening will be an opportunity to hear from artists and professionals in the arts, technology and academia and to gain a better understanding of the opportunities that NFTs present for access to and the creation and dissemination of art and culture. Jesse McKee, Head of Strategy at 221A; Philippe Pasquier, Professor at the School of Interactive Arts & Technology (SFU); and Rhea Myers, artist, hacker and writer, will share their experiences in a session moderated by Dorothy Woodend, cultural editor for The Tyee.

– 7 p.m. on Zoom (registration here). Event broadcast online on France Canada Culture’s Facebook page. In English.
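The event description’s idea of an NFT as a “blockchain-based proof of ownership” can be illustrated with a deliberately simplified sketch: a content hash of the digital object is recorded in a registry that maps token IDs to owners. A real NFT lives in a smart contract on a blockchain with cryptographic signatures; this in-memory class is only an analogy, and every name in it is invented.

```python
import hashlib

class ToyNFTRegistry:
    """A stand-in for an NFT contract: maps a content hash to an owner name."""

    def __init__(self):
        self.owners = {}

    def mint(self, digital_object: bytes, owner: str) -> str:
        """Record ownership of an object, identified by its SHA-256 hash."""
        token_id = hashlib.sha256(digital_object).hexdigest()
        if token_id in self.owners:
            raise ValueError("token already minted")
        self.owners[token_id] = owner
        return token_id

    def transfer(self, token_id: str, new_owner: str) -> None:
        """Reassign an existing token to a new owner."""
        self.owners[token_id] = new_owner

registry = ToyNFTRegistry()
token = registry.mint(b"a digital artwork", "alice")
registry.transfer(token, "bob")
```

Note that the token proves a claim was recorded first, not that the claimant authored the work; that gap is part of what the panel’s “speculative bubble” caveat alludes to.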

Not all of the events are in both languages.

One last thing, if you have some French and find puppets interesting, the event in Victoria, British Columbia features both, “Catherine Léger, linguist and professor at the University of Victoria, with whom we will discover and come to accept the diversity of French with the help of marionnettes [puppets]; … .”

SFU’s Philippe Pasquier speaks at “The rise of Creative AI and its ethics” online event on Tuesday, January 11, 2022 at 6 am PST

Simon Fraser University’s (SFU) Metacreation Lab for Creative AI (artificial intelligence) in Vancouver, Canada, has just sent me (via email) a January 2022 newsletter, which you can find here. There are two items I found of special interest.

Max Planck Centre for Humans and Machines Seminars

From the January 2022 newsletter,

Max Planck Institute Seminar – The rise of Creative AI & its ethics
January 11, 2022 at 15:00 pm [sic] CET | 6:00 am PST

Next Monday [sic], Philippe Pasquier, director of the Metacreation Lab, will be providing a seminar titled “The rise of Creative AI & its ethics” [Tuesday, January 11, 2022] at the Max Planck Institute’s Centre for Humans and Machine [sic].

The Centre for Humans and Machines invites interested attendees to our public seminars, which feature scientists from our institute and experts from all over the world. Their seminars usually take 1 hour and provide an opportunity to meet the speaker afterwards.

The seminar is openly accessible to the public via Webex Access, and will be a great opportunity to connect with colleagues and friends of the Lab on European and East Coast time. For more information and the link, head to the Centre for Humans and Machines’ Seminars page linked below.

Max Planck Institute – Upcoming Events

The Centre’s seminar description offers an abstract for the talk and a profile of Philippe Pasquier,

Creative AI is the subfield of artificial intelligence concerned with the partial or complete automation of creative tasks. In turn, creative tasks are those for which the notion of optimality is ill-defined. Unlike car driving, chess moves, jeopardy answers or literal translations, creative tasks are more subjective in nature. Creative AI approaches have been proposed and evaluated in virtually every creative domain: design, visual art, music, poetry, cooking, … These algorithms most often perform at human-competitive or superhuman levels for their precise task. Two main uses of these algorithms have emerged that have implications for workflows reminiscent of the industrial revolution:

– Augmentation (a.k.a, computer-assisted creativity or co-creativity): a human operator interacts with the algorithm, often in the context of already existing creative software.

– Automation (computational creativity): the creative task is performed entirely by the algorithms without human intervention in the generation process.

Both usages will have deep implications for education and work in creative fields. Away from the fear of strong (sentient) AI taking over the world: what are the implications of these ongoing developments for students, educators and professionals? How will Creative AI transform the way we create, as well as what we create?

Philippe Pasquier is a professor at Simon Fraser University’s School for Interactive Arts and Technology, where he has directed the Metacreation Lab for Creative AI since 2008. Philippe leads a research-creation program centred around generative systems for creative tasks. As such, he is a scientist specialized in artificial intelligence, a multidisciplinary media artist, an educator, and a community builder. His contributions span theoretical research on generative systems, computational creativity, multi-agent systems, machine learning, affective computing, and evaluation methodologies. This work is applied in the creative software industry as well as through artistic practice in computer music, interactive and generative art.

Interpreting soundscapes

Folks at the Metacreation Lab have made available an interactive search engine for sounds, from the January 2022 newsletter,

Audio Metaphor is an interactive search engine that transforms users’ queries into soundscapes interpreting them. Using state-of-the-art algorithms for sound retrieval, segmentation, and background and foreground classification, AuMe offers a way to explore the vast open-source library of sounds available on the freesound.org online community through natural language and its semantic, symbolic, and metaphorical expressions. 

We’re excited to see Audio Metaphor included among many other innovative projects on Freesound Labs, a directory of projects, hacks, apps, research and other initiatives that use content from Freesound or use the Freesound API. Take a minute to check out the variety of projects applying creative coding, machine learning, and many other techniques to the exploration of sound and music creation, generative music, and soundscape composition in diverse forms and interfaces.

Explore AuMe and other FreeSound Labs projects    

The Audio Metaphor (AuMe) webpage on the Metacreation Lab website has a few more details about the search engine,

Audio Metaphor (AuMe) is a research project aimed at designing new methodologies and tools for sound design and composition practices in film, games, and sound art. Through this project, we have identified the processes involved in working with audio recordings in creative environments, addressing these in our research by implementing computational systems that can assist human operations.

We have successfully developed Audio Metaphor for the retrieval of audio file recommendations from natural language texts, and even used phrases generated automatically from Twitter to sonify the current state of Web 2.0. Another significant achievement of the project has been in the segmentation and classification of environmental audio with composition-specific categories, which were then applied in a generative system approach. This allows users to generate sound design simply by entering textual prompts.

As we direct Audio Metaphor further toward perception and cognition, we will continue to contribute to the music information retrieval field through environmental audio classification and segmentation. The project will continue to be instrumental in the design and implementation of new tools for sound designers and artists.
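The query-to-soundscape pipeline described above (retrieve candidate sounds from a natural-language prompt, classify them as background or foreground, then layer them) can be sketched as a toy version. The tag library, file names, and classification rule below are invented for illustration; AuMe’s actual retrieval, segmentation, and classification are far more sophisticated.

```python
# A tiny stand-in sound library: file name -> descriptive tags (invented examples).
LIBRARY = {
    "rain_loop.wav": {"rain", "water", "ambience", "background"},
    "thunder_clap.wav": {"thunder", "storm", "foreground"},
    "birdsong.wav": {"birds", "forest", "foreground"},
    "wind_bed.wav": {"wind", "ambience", "background"},
}

def retrieve(query: str):
    """Rank library sounds by how many query words match their tags."""
    words = set(query.lower().split())
    scored = [(len(words & tags), name) for name, tags in LIBRARY.items()]
    return [name for score, name in sorted(scored, reverse=True) if score > 0]

def compose(query: str):
    """Split retrieved sounds into a background bed and foreground events."""
    hits = retrieve(query)
    background = [n for n in hits if "background" in LIBRARY[n]]
    foreground = [n for n in hits if "foreground" in LIBRARY[n]]
    return {"background": background, "foreground": foreground}

plan = compose("rain and thunder storm")
```

A real implementation would query the Freesound API instead of a hard-coded dict and mix the resulting audio; the point of the sketch is only the shape of the pipeline: text in, a layered sound plan out.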

See more information on the website audiometaphor.ca.

As for Freesound Labs, you can find them here.

Art, sound, AI, & the Metacreation Lab’s Spring 2021 newsletter

The Metacreation Lab’s Spring 2021 newsletter (received via email) features a number of events either currently taking place or about to take place.

2021 AI Song Contest

2021 marks the 2nd year for this international event, an artificial intelligence/AI Song Contest 2021. The folks at Simon Fraser University’s (SFU) Metacreation Lab have an entry for the 2021 event, A song about the weekend (and you can do whatever you want). Should you click on the song entry, you will find an audio file, a survey/vote consisting of four questions and, if you keep scrolling down, more information about the creative team, the song and more,

Driven by collaborations involving scientists, experts in artificial intelligence, cognitive sciences, designers, and artists, the Metacreation Lab for Creative AI is at the forefront of the development of generative systems, whether these are embedded in interactive experiences or automating workflows integrated into cutting-edge creative software.

Team:

Cale Plut (Composer and musician) is a PhD Student in the Metacreation lab, researching AI music applications in video games.

Philippe Pasquier (Producer and supervisor) is an Associate Professor, and leads the Metacreation Lab. 

Jeff Ens (AI programmer) is a PhD Candidate in the Metacreation lab, researching AI models for music generation.

Renaud Tchemeube (Producer and interaction designer) is a PhD Student in the Metacreation Lab, researching interaction software design for creativity.

Tara Jadidi (Research Assistant) is an undergraduate student at FUM, Iran, working with the Metacreation lab.

Dimiter Zlatkov (Research Assistant) is an undergraduate student at UBC, working with the Metacreation lab.

ABOUT THE SONG

A song about the weekend (and you can do whatever you want) explores the relationships between AI, humans, labour, and creation in a lighthearted and fun song. It is co-created with the Multi-track Music Machine (MMM).

Throughout the history of automation and industrialization, the relationship between the labour-magnification power of automation and the recipients of the benefits of that magnification has been in contention. While increasing levels of automation are often accompanied by promises of future leisure increases, this rarely materializes for the workers whose labour is multiplied. By primarily using automated methods to create a “fun” song about leisure, we highlight both the promise of AI-human cooperation as well as the disparities in its real-world deployment. 

As for the competition itself, here’s more from the FAQs (frequently asked questions),

What is the AI Song Contest?

AI Song Contest is an international creative AI contest. Teams from all over the world try to create a 4-minute pop song with the help of artificial intelligence.

When and where does it take place?

Between June 1, 2021 and July 1, 2021 voting is open for the international public. On July 6 there will be multiple online panel sessions, and the winner of the AI Song Contest 2021 will be announced in an online award ceremony. All sessions on July 6 are organised in collaboration with Wallifornia MusicTech.

How is the winner determined?

Each participating team will be awarded two sets of points: one from a public vote by the contest’s international audience, the other from the determination of an expert jury.

Anyone can evaluate as many songs as they like: from one, up to all thirty-eight. Every song can be evaluated only once. Even though it won’t count in the grand total, lyrics can be evaluated too; we do like to determine which team wrote the best lyrics according to the audience.

Can I vote multiple times for the same team?

No, votes are controlled by IP address. So only one of your votes will count.

Is this the first time the contest is organised?

This is the second time the AI Song Contest is organised. The contest was first initiated in 2020 by Dutch public broadcaster VPRO together with NPO Innovation and NPO 3FM. Teams from Europe and Australia tried to create a Eurovision kind of song with the help of AI. Team Uncanny Valley from Australia won the first edition with their song Beautiful the World. The 2021 edition is organised independently.

What is the definition of artificial intelligence in this contest?

Artificial intelligence is a very broad concept. For this contest it will mean that teams can use techniques such as – but not limited to – machine learning (including deep learning), natural language processing, algorithmic composition, or combining rule-based approaches with neural networks for the creation of their songs. Teams can create their own AI tools, or use existing models and algorithms.

What are possible challenges?

Read here about the challenges teams from last year’s contest faced.

As an AI researcher, can I collaborate with musicians?

Yes – this is strongly encouraged!

For the 2020 edition, all songs had to be Eurovision-style. Is that also the intention for 2021 entries?

Last year, the first year the contest was organized, it was indeed all about Eurovision. For this year’s competition, we are trying to expand geographically, culturally, and musically. Teams from all over the world can compete, and songs in all genres can be submitted.

If you’re not familiar with Eurovision-style, you can find a compilation video with brief excerpts from the 26 finalists for Eurovision 2021 here (Bill Young’s May 23, 2021 posting on tellyspotting.kera.org; the video runs under 10 mins.). There’s also the “Eurovision Song Contest: The Story of Fire Saga” 2020 movie starring Rachel McAdams, Will Ferrell, and Dan Stevens. It’s intended as a gentle parody but the style is all there.

ART MACHINES 2: International Symposium on Machine Learning and Art 2021

The symposium, Art Machines 2, started yesterday (June 10, 2021 and runs to June 14, 2021) in Hong Kong and SFU’s Metacreation Lab will be represented (from the Spring 2021 newsletter received via email),

On Sunday, June 13 [2021] at 21:45 Hong Kong Standard Time (UTC +8) as part of the Sound Art Paper Session chaired by Ryo Ikeshiro, the Metacreation Lab’s Mahsoo Salimi and Philippe Pasquier will present their paper, Exploiting Swarm Aesthetics in Sound Art. We’ve included a more detailed preview of the paper in this newsletter below.

Concurrent with ART MACHINES 2 is the launch of two exhibitions – Constructing Contexts and System Dreams. Constructing Contexts, curated by Tobias Klein and Rodrigo Guzman-Serrano, will bring together 27 works with unique approaches to the question of contexts as applied by generative adversarial networks. System Dreams highlights work from the latest MFA talent from the School of Creative Media. While the exhibitions take place in Hong Kong, the participating artists and artwork are well documented online.

Liminal Tones: Swarm Aesthetics in Sound Art

Applications of swarm aesthetics in music composition are not new and have already resulted in volumes of complex soundscapes and musical compositions. Using an experimental approach, Mahsoo Salimi and Philippe Pasquier create a series of sound textures known as Liminal Tones (B/ Rain Dream) based on swarming behaviours.

Findings of the Liminal Tones project will be presented in papers for the Art Machines 2: International Symposium on Machine Learning (June 10-14 [2021]) and the International Conference on Swarm Intelligence (July 17-21 [2021]).

Talk about Creative AI at the University of British Columbia

This is the last item I’m excerpting from the newsletter. (Should you be curious about what else is listed, you can go to the Metacreation Lab’s contact page and sign up for the newsletter there.) On June 22, 2021 at 2:00 PM PDT, there will be this event,

Creative AI: on the partial or complete automation of creative tasks @ CAIDA

Philippe Pasquier will be giving a talk on creative applications of AI at CAIDA: UBC ICICS Centre for Artificial Intelligence Decision-making and Action. Overviewing the state of the art of computer-assisted creativity and embedded systems and their various applications, the talk will survey the design, deployment, and evaluation of generative systems.

Free registration for the talk is available at the link below.

Register for Creative AI @ CAIDA

Remember, if you want to see the rest of the newsletter, you can sign up at the Metacreation Lab’s contact page.

Artificial Intelligence (AI), musical creativity conference, art creation, ISEA 2020 (Why Sentience?) recap, and more

I have a number of items from Simon Fraser University’s (SFU) Metacreation Lab January 2021 newsletter (received via email on Jan. 5, 2021).

29th International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI2020) being held on Jan. 7 – 15, 2021

This first excerpt features a conference that’s currently taking place,

Musical Metacreation Tutorial at IJCAI – PRICAI 2020 [Yes, the 29th International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence or IJCAI-PRICAI2020 is being held in 2021!]

As part of the International Joint Conference on Artificial Intelligence (IJCAI – PRICAI 2020, January 7-15), Philippe Pasquier will lead a tutorial on Musical Metacreation. This tutorial aims at introducing the field of musical metacreation and its current developments, promises, and challenges.

The tutorial will be held this Friday, January 8th, from 9 am to 12:20 pm JST ([JST = Japanese Standard Time] 12 am to 3:20 am UTC [or 4 pm – 7:30 pm PST]) and a full description of the syllabus can be found here. For details about registration for the conference and tutorials, click below.

Register for IJCAI – PRICAI 2020

The conference will be held at a virtual venue created by Virtual Chair on the gather.town platform, which offers the spontaneity of mingling with colleagues from all over the world while in the comfort of your home. The platform will allow attendees to customize avatars to fit their mood, enjoy a virtual traditional Japanese village, take part in plenary talks and more.

Two calls for papers

These two excerpts from SFU’s Metacreation Lab January 2021 newsletter feature one upcoming conference and an upcoming workshop, both with calls for papers,

2nd Conference on AI Music Creativity (MuMe + CSMC)

The second Conference on AI Music Creativity brings together two overlapping research forums: The Computer Simulation of Music Creativity Conference (est. 2016) and The International Workshop on Musical Metacreation (est. 2012). The objective of the conference is to bring together scholars and artists interested in the emulation and extension of musical creativity through computational means and to provide them with an interdisciplinary platform in which to present and discuss their work in scientific and artistic contexts.

The 2021 Conference on AI Music Creativity will be hosted by the Institute of Electronic Music and Acoustics (IEM) of the University of Music and Performing Arts of Graz, Austria and held online. The five-day program will feature paper presentations, concerts, panel discussions, workshops, tutorials, sound installations and two keynotes.

AIMC 2021 Info & CFP

AIART  2021

The 3rd IEEE Workshop on Artificial Intelligence for Art Creation (AIART) has been announced for 2021, aiming to bring forward cutting-edge technologies and the most recent advances in AI art in terms of enabling creation, analysis, and understanding technologies. The theme of the workshop will be AI creativity, and it will be accompanied by a Special Issue of a renowned SCI journal.

AIART is inviting high-quality papers presenting or addressing issues related to AI art, in a wide range of topics. The submission due date is January 31, 2021, and you can learn about the wide range of topics accepted below:

AIART 2021 Info & CFP

Toying with music

SFU’s Metacreation Lab January 2021 newsletter also features a kind of musical toy,

MMM : Multi-Track Music Machine

One of the latest projects at the Metacreation Lab is MMM: a generative music generation system based on Transformer architecture, capable of generating multi-track music, developed by Jeff Enns and Philippe Pasquier.

Based on an auto-regressive model, the system is capable of generating music from scratch using a wide range of preset instruments. Inputs from one or several tracks can condition the generation of new tracks, resampling MIDI input from the user or adding further layers of music.

To learn more about the system and see it in action, click below and watch the demonstration video, hear some examples, or try the program yourself through Google Colab.

Explore MMM: Multi-Track Music Machine
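MMM’s own code isn’t reproduced here, but the auto-regressive idea described above can be sketched in a few lines: each new token is sampled from a model conditioned on everything generated so far, including tokens from existing tracks. The toy model below is a hypothetical stand-in for a trained Transformer over a real musical vocabulary.

```python
import math, random

def sample_next(logits, temperature=1.0):
    """Sample one token index from unnormalised scores (softmax + temperature)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r, acc = random.random(), 0.0
    for token, e in enumerate(exps):
        acc += e / total
        if r <= acc:
            return token
    return len(exps) - 1

def generate_track(model, context, length=32):
    """Autoregressive loop: condition each new token on the full sequence.
    `model` is any callable mapping a token sequence to next-token logits;
    `context` could be tokens from one or several existing tracks."""
    seq = list(context)
    for _ in range(length):
        seq.append(sample_next(model(seq)))
    return seq

# Toy "model" that favours repeating the previous token.
def toy_model(seq, vocab=8):
    return [3.0 if t == seq[-1] else 0.5 for t in range(vocab)]

print(generate_track(toy_model, context=[0], length=16))
```

The conditioning step is the interesting part: because the model always sees the user’s MIDI-derived tokens in `seq`, the new material stays coherent with the existing tracks.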

Why Sentience?

Finally, for anyone who was wondering what happened at the 2020 International Symposium of Electronic Arts (ISEA 2020) held virtually in Montreal in the fall, here’s some news from SFU’s Metacreation Lab January 2021 newsletter,

ISEA2020 Recap // Why Sentience? 

As we look back at one of the most unprecedented years, some of the questions explored at ISEA2020 are more salient now than ever. This recap video highlights some of the most memorable moments from last year’s virtual symposium.

ISEA2020 // Why Sentience? Recap Video

The Metacreation Lab’s researchers explored some of these guiding questions at ISEA2020 with two papers presented at the symposium: Chatterbox: an interactive system of gibberish agents and Liminal Scape, An Interactive Visual Installation with Expressive AI. These papers, and the full proceedings from ISEA2020 can now be accessed below. 

ISEA2020 Proceedings

The video is a slick, flashy, and fun 15 minutes or so. In addition to the recap for ISEA 2020, there’s a plug for ISEA 2022 in Barcelona, Spain.

The proceedings took my system a while to download (there are approximately 700 pp.). By the way, here’s another link to the proceedings or rather to the archives for the 2020 and previous years’ ISEA proceedings.

Large Interactive Virtual Environment Laboratory (LIVELab) located in McMaster University’s Institute for Music & the Mind (MIMM) and the MetaCreation Lab at Simon Fraser University

Both of these items have a music focus, but they represent two entirely different science-based approaches to that art form: one is solely about the music, while the other includes music as one of several art-making processes being investigated.

Large Interactive Virtual Environment Laboratory (LIVELab) at McMaster University

Laurel Trainor and Dan J. Bosnyak both of McMaster University (Ontario, Canada) have written an October 27, 2019 essay about the LiveLab and their work for The Conversation website (Note: Links have been removed),

The Large Interactive Virtual Environment Laboratory (LIVELab) at McMaster University is a research concert hall. It functions as both a high-tech laboratory and theatre, opening up tremendous opportunities for research and investigation.

As the only facility of its kind in the world, the LIVELab is a 106-seat concert hall equipped with dozens of microphones, speakers and sensors to measure brain responses, physiological responses such as heart rate, breathing rates, perspiration and movements in multiple musicians and audience members at the same time.

Engineers, psychologists and clinician-researchers from many disciplines work alongside musicians, media artists and industry to study performance, perception, neural processing and human interaction.

In the LIVELab, acoustics are digitally controlled so the experience can change instantly from extremely silent with almost no reverberation to a noisy restaurant to a subway platform or to the acoustics of Carnegie Hall.

Real-time physiological data such as heart rate can be synchronized with data from other systems such as motion capture, and monitored and recorded from both performers and audience members. The result is that the reams of data that can now be collected in a few hours in the LIVELab used to take weeks or months to collect in a traditional lab. And having measurements of multiple people simultaneously is pushing forward our understanding of real-time human interactions.

Consider the implications of how music might help people with Parkinson’s disease to walk more smoothly or children with dyslexia to read better.

[…] area of ongoing research is the effectiveness of hearing aids. By the age of 60, nearly 49 per cent of people will suffer from some hearing loss. People who wear hearing aids are often frustrated when listening to music because the hearing aids distort the sound and cannot deal with the dynamic range of the music.

The LIVELab is working with the Hamilton Philharmonic Orchestra to solve this problem. During a recent concert, researchers evaluated new ways of delivering sound directly to participants’ hearing aids to enhance sounds.

Researchers hope new technologies can not only increase live musical enjoyment but alleviate the social isolation caused by hearing loss.

Imagine the possibilities for understanding music and sound: How it might help to improve cognitive decline, manage social performance anxiety, help children with developmental disorders, aid in treatment of depression or keep the mind focused. Every time we conceive and design a study, we think of new possibilities.
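The synchronization the essay describes (aligning irregular physiological samples, such as beat-to-beat heart rate, with a regular motion-capture clock) can be illustrated with a minimal interpolation sketch. The numbers and the `resample` helper below are hypothetical, not LIVELab code.

```python
from bisect import bisect_left

def resample(times, values, query_times):
    """Linearly interpolate an irregularly sampled series onto a regular clock,
    clamping to the first/last value outside the sampled range."""
    out = []
    for t in query_times:
        i = bisect_left(times, t)
        if i == 0:
            out.append(values[0])
        elif i >= len(times):
            out.append(values[-1])
        else:
            t0, t1 = times[i - 1], times[i]
            v0, v1 = values[i - 1], values[i]
            out.append(v0 + (v1 - v0) * (t - t0) / (t1 - t0))
    return out

# Heartbeats arrive at irregular times (seconds); convert intervals to BPM.
beat_times = [0.8, 1.7, 2.5, 3.5, 4.3, 5.2]
bpm = [60.0 / (b - a) for a, b in zip([0.0] + beat_times, beat_times)]

# Motion capture runs on a regular clock (10 Hz here for brevity).
mocap_clock = [i / 10 for i in range(60)]
aligned = resample(beat_times, bpm, mocap_clock)
print(len(aligned))  # one heart-rate value per mocap frame
```

Once every stream shares one clock, correlating heart rate with body sway, or with any other sensor, becomes a simple per-frame comparison.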

The essay also includes an embedded 12 min. video about LIVELab and details about studies conducted on musicians and live audiences. Apparently, audiences experience live performance differently than recorded performances and musicians use body sway to create cohesive performances. You can find the McMaster Institute for Music & the Mind here and McMaster’s LIVELab here.

Capturing the motions of a string quartet performance. Laurel Trainor, Author provided [McMaster University]

Metacreation Lab at Simon Fraser University (SFU)

I just recently discovered that there’s a Metacreation Lab at Simon Fraser University (Vancouver, Canada), whose homepage states: “Metacreation is the idea of endowing machines with creative behavior.” Here’s more from the homepage,

As the contemporary approach to generative art, Metacreation involves using tools and techniques from artificial intelligence, artificial life, and machine learning to develop software that partially or completely automates creative tasks. Through the collaboration between scientists, experts in artificial intelligence, cognitive sciences, designers and artists, the Metacreation Lab for Creative AI is at the forefront of the development of generative systems, be they embedded in interactive experiences or integrated into current creative software. Scientific research in the Metacreation Lab explores how various creative tasks can be automated and enriched. These tasks include music composition [emphasis mine], sound design, video editing, audio/visual effect generation, 3D animation, choreography, and video game design.

Besides scientific research, the team designs interactive and generative artworks that build upon the algorithms and research developed in the Lab. This work often challenges the social and cultural discourse on AI.

Much to my surprise I received the Metacreation Lab’s inaugural email newsletter (received via email on Friday, November 15, 2019),

Greetings,

We decided to start a mailing list for disseminating news, updates, and announcements regarding generative art, creative AI and New Media. In this newsletter: 

  1. ISEA 2020: The International Symposium on Electronic Art. ISEA returns to Montreal; check the CFP below and contribute!
  2. ISEA 2015: A transcription of Sara Diamond’s keynote address “Action Agenda: Vancouver’s Prescient Media Arts” is now available for download. 
  3. Brain Art, the book: we are happy to announce the release of the first comprehensive volume on Brain Art. Edited by Anton Nijholt, and published by Springer.

Here are more details from the newsletter,

ISEA2020 – 26th International Symposium on Electronic Arts

Montreal, September 24, 2019
Montreal Digital Spring (Printemps numérique) is launching a call for participation as part of ISEA2020 / MTL connect to be held from May 19 to 24, 2020 in Montreal, Canada. Founded in 1990, ISEA is one of the world’s most prominent international arts and technology events, bringing together scholarly, artistic, and scientific domains in an interdisciplinary discussion and showcase of creative productions applying new technologies in art, interactivity, and electronic and digital media. For 2020, ISEA Montreal turns towards the theme of sentience.

ISEA2020 will be fully dedicated to examining the resurgence of sentience—feeling-sensing-making sense—in recent art and design, media studies, science and technology studies, philosophy, anthropology, history of science and the natural scientific realm—notably biology, neuroscience and computing. We ask: why sentience? Why and how does sentience matter? Why have artists and scholars become interested in sensing and feeling beyond, with and around our strictly human bodies and selves? Why has this notion been brought to the fore in an array of disciplines in the 21st century?
CALL FOR PARTICIPATION: WHY SENTIENCE? ISEA2020 invites artists, designers, scholars, researchers, innovators and creators to participate in the various activities deployed from May 19 to 24, 2020. To complete an application, please fill in the forms and follow the instructions.

The final submissions deadline is NOVEMBER 25, 2019. Submit your application for:

  • WORKSHOP and TUTORIAL
  • ARTISTIC WORK
  • FULL / SHORT PAPER
  • PANEL
  • POSTER
  • ARTIST TALK
  • INSTITUTIONAL PRESENTATION
Find Out More
You can apply for several categories. All profiles are welcome. Notifications of acceptance will be sent around January 13, 2020.

Important: please note that the Call for participation for MTL connect has not yet launched, but you can also apply to participate in the programming of the other Pavilions (4 other themes) when registration opens (coming soon): mtlconnecte.ca/en

TICKETS

Registration is now available to attend ISEA2020 / MTL connect, from May 19 to 24, 2020. Book your Full Pass today and get the early-bird rate!
Buy Now

More from the newsletter,

ISEA 2015 was in Vancouver, Canada, and the proceedings and art catalog are still online. The news is that Sara Diamond released her 2015 keynote address as a paper: Action Agenda: Vancouver’s Prescient Media Arts. It is never too late so we thought we would let you know about this great read. See The 2015 Proceedings Here

The last item from the inaugural newsletter,

The first book that surveys how brain activity can be monitored and manipulated for artistic purposes, with contributions by interactive media artists, brain-computer interface researchers, and neuroscientists. View the Book Here

As per the Leonardo review from Cristina Albu:

“Another seminal contribution of the volume is the presentation of multiple taxonomies of “brain art,” which can help art critics develop better criteria for assessing this genre. Mirjana Prpa and Philippe Pasquier’s meticulous classification shows how diverse such works have become as artists consider a whole range of variables of neurofeedback.” Read the Review

For anyone not familiar with the ‘Leonardo’ cited in the above, it’s Leonardo, the International Society for the Arts, Sciences and Technology.

Should this kind of information excite and motivate you to start metacreating, you can get in touch with the lab,

Our mailing address is:
Metacreation Lab for Creative AI
School of Interactive Arts & Technology
Simon Fraser University
250-13450 102 Ave.
Surrey, BC V3T 0A3
Web: http://metacreation.net/
Email: metacreation_admin (at) sfu (dot) ca

Next Horizons: Electronic Literature Organization (ELO) 2016 conference in Victoria, BC

The Electronic Literature Organization (ELO; based at the Massachusetts Institute of Technology [MIT]) is holding its annual conference themed Next Horizons (from an Oct. 12, 2015 post on the ELO blog) at the University of Victoria on Vancouver Island, British Columbia from June 10 – June 12, 2016.

You can get a better sense of what it’s all about by looking at the conference schedule/programme,

Friday, June 10, 2016

8:00 a.m.–5:00 p.m.: Registration
MacLaurin Lobby A100

8:00 a.m.-10:00 a.m: Breakfast
Sponsored by Bloomsbury Academic

10:00 a.m.-10:30: Welcome
MacLaurin David Lam Auditorium A 144
Speakers: Dene Grigar & Ray Siemens

10:30-12 noon: Featured Papers
MacLaurin David Lam Auditorium A 144
Chair: Alexandra Saum-Pascual, UC Berkeley

  • Stuart Moulthrop, “Intimate Mechanics: Play and Meaning in the Middle of Electronic Literature”
  • Anastasia Salter, “Code before Content? Brogrammer Culture in Games and Electronic Literature”

12 Noon-1:45 p.m.: Gallery Opening & Lunch Reception
MacLaurin Lobby A 100
Kick off event in celebration of e-lit works
A complete list of artists featured in the Exhibit

1:45-3:00: Keynote Session
MacLaurin David Lam Auditorium A 144
“Prototyping Resistance: Wargame Narrative and Inclusive Feminist Discourse”

  • Jon Saklofske, Acadia University
  • Anastasia Salter, University of Central Florida
  • Liz Losh, College of William and Mary
  • Diane Jakacki, Bucknell University
  • Stephanie Boluk, UC Davis

3:00-3:15: Break

3:15-4:45: Concurrent Session 1

Session 1.1: Best Practices for Archiving E-Lit
MacLaurin D010
Roundtable
Chair: Dene Grigar, Washington State University Vancouver

  • Dene Grigar, Washington State University Vancouver
  • Stuart Moulthrop, University of Wisconsin Milwaukee
  • Matthew Kirschenbaum, University of Maryland College Park
  • Judy Malloy, Independent Artist

Session 1.2: Medium & Meaning
MacLaurin D110
Chair: Rui Torres, University Fernando Pessoa

  • “From eLit to pLit,” Heiko Zimmerman, University of Trier
  • “Generations of Meaning,” Hannah Ackermans, Utrecht University
  • “Co-Designing DUST,” Kari Kraus, University of Maryland College Park

Session 1.3: A Critical Look at E-Lit
MacLaurin D105
Chair: Philippe Brand, Lewis & Clark College

  • “Methods of Interrogation,” John Murray, University of California Santa Cruz
  • “Peering through the Window,” Philippe Brand, Lewis & Clark College
  • “(E-)re-writing Well-Known Works,” Agnieszka Przybyszewska, University of Lodz

Session 1.4: Literary Games
MacLaurin D109
Chair: Alex Mitchell, National University of Singapore

  • “Twine Games,” Alanna Bartolini, UC Santa Barbara
  • “Whose Game Is It Anyway?,” Ryan House, Washington State University Vancouver
  • “Micronarratives Dynamics in the Structure of an Open-World Action-Adventure Game,” Natalie Funk, Simon Fraser University

Session 1.5: eLit and the (Next) Future of Cinema
MacLaurin D107
Roundtable
Chair: Steven Wingate, South Dakota State University

  • Steve Wingate, South Dakota State University
  • Kate Armstrong, Emily Carr University
  • Samantha Gorman, USC

Session 1.6: Authors & Texts
MacLaurin D101
Chair: Robert Glick, Rochester Institute of Technology

  • “Generative Poems by Maria Mencia,” Angelica Huizar, Old Dominion University
  • “Inhabitation: Johanna Drucker: ‘no file is ever self-identical’,” Joel Kateinikoff, University of Alberta
  • “The Great Monster: Ulises Carrión as E-Lit Theorist,” Élika Ortega, University of Kansas
  • “Pedagogic Strategies for Electronic Literature,” Mia Zamora, Kean University

3:15-4:45: Action Session Day 1
MacLaurin D111

  • Digital Preservation, by Nicholas Schiller, Washington State University Vancouver; Zach Coble, NYU
  • ELMCIP, Scott Rettberg and Álvaro Seiça, University of Bergen; Hannah Ackermans, Utrecht University
  • Wikipedia-A-Thon, Liz Losh, College of William and Mary

5:00-6:00: Reception and Poster Session
University of Victoria Faculty Club
For ELO, DHSI, & INKE Participants, featuring these artists and scholars from the ELO:

  • “Social Media for E-Lit Authors,” Michael Rabby, Washington State University Vancouver
  • “– O True Apothecary!, by Kyle Booten,” UC Berkeley, Center for New Media
  • “Life Experience through Digital Simulation Narratives,” David Núñez Ruiz, Neotipo
  • “Building Stories,” Kate Palermini, Washington State University Vancouver
  • “Help Wanted and Skills Offered,” by Deena Larsen, Independent Artist; Julianne Chatelain, U.S. Bureau of Reclamation
  • “Beyond Original E-Lit: Deconstructing Austen Cybertexts,” Meredith Dabek, Maynooth University
  • Arabic E-Lit. (AEL) Project, Riham Hosny, Rochester Institute of Technology/Minia University
  • “Poetic Machines,” Sidse Rubens LeFevre, University of Copenhagen
  • “Meta for Meta’s Sake,” Melinda White
7:30-11:00: Readings & Performances at Felicita’s
A complete list of artists featured in the event

Saturday, June 11, 2016
8:30-10:00: Lightning Round
MacLaurin David Lam Auditorium A 144
Chair: James O’Sullivan, University of Sheffield

  • “Different Tools but Similar Wits,” Guangxu Zhao, University of Ottawa
  • “Digital Aesthetics,” Bertrand Gervais, Université du Québec à Montréal
  • “Hatsune Miku,” Roman Kalinovski, Independent Scholar
  • “Meta for Meta’s Sake,” Melinda White, University of New Hampshire
  • “Narrative Texture,” Luciane Maria Fadel, Simon Fraser University
  • “Natural Language Generation,” by Stefan Muller Arisona
  • “Poetic Machines,” Sidse Rubens LeFevre, University of Copenhagen
  • “Really Really Long Works,” Aden Evens, Dartmouth University
  • “UnWrapping the E-Reader,” David Roh, University of Utah
  • “Social Media for E-Lit Artists,” Michael Rabby

10:00: Gallery exhibit opens
MacLaurin A100
A complete list of artists featured in the Exhibit

10:30-12 noon: Concurrent Session 2

Session 2.1: Literary Interventions
MacLaurin D101
Chair: Brian Ganter, Capilano College

  • “Glitching the Poem,” Aaron Angello, University of Colorado Boulder
  • “WALLPAPER,” Alice Bell, Sheffield Hallam University; Astrid Ensslin, University of Alberta
  • “Unprintable Books,” Kate Pullinger [emphasis mine], Bath Spa University

Session 2.2: Theoretical Underpinnings
MacLaurin D105
Chair: Mia Zamora, Kean University

  • “Transmediation,” Kedrick James, University of British Columbia; Ernesto Pena, University of British Columbia
  • “The Closed World, Databased Narrative, and Network Effect,” Mark Sample, Davidson College
  • “The Cyborg of the House,” Maria Goicoechea, Universidad Complutense de Madrid

Session 2.3: E-Lit in Time and Space
MacLaurin D107
Chair: Andrew Klobucar, New Jersey Institute of Technology

  • “Electronic Literary Artifacts,” John Barber, Washington State University Vancouver; Alcina Cortez, INET-MD, Instituto de Etnomusicologia, Música e Dança
  • “The Old in the Arms of the New,” Gary Barwin, Independent Scholar
  • “Space as a Meaningful Dimension,” Luciane Maria Fadel, Simon Fraser University

Session 2.4: Understanding Bots
MacLaurin D110
Roundtable
Chair: Leonardo Flores, University of Puerto Rico, Mayagüez

  • Allison Parrish, Fordham University
  • Matt Schneider, University of Toronto
  • Tobi Hahn, Paisley Games
  • Zach Whalen, University of Mary Washington

10:30-12 noon: Action Session Day 2
MacLaurin D111

  • Digital Preservation, by Nicholas Schiller, Washington State University Vancouver; Zach Coble, NYU
  • ELMCIP, Allison Parrish, Fordham University; Scott Rettberg, University of Bergen; David Nunez Ruiz, Neotipo; Hannah Ackermans, Utrecht University
  • Wikipedia-A-Thon, Liz Losh, College of William and Mary

12:15-1:15: Artists Talks & Lunch
David Lam Auditorium MacLaurin A144

  • “The Listeners,” by John Cayley
  • “The ChessBard and 3D Poetry Project as Translational Ecosystems,” Aaron Tucker, Ryerson University
  • “News Wheel,” Jody Zellen, Independent Artist
  • “x-o-x-o-x.com,” Erik Zepka, Independent Artist

1:30-3:00: Concurrent Session 3

Session 3.1: E-Lit Pedagogy in Global Setting
MacLaurin D111
Roundtable
Co-Chairs: Philippe Bootz, Université Paris 8; Riham Hosny, Rochester Institute of Technology/Minia University

  • Sandy Baldwin, Rochester Institute of Technology
  • Maria Goicoechea, Universidad Complutense de Madrid
  • Odile Farge, UNESCO Chair ITEN, Foundation MSH/University of Paris 8

Session 3.2: The Art of Computational Media
MacLaurin D109
Chair: Rui Torres, University Fernando Pessoa

  • “Creative GREP Works,” Kristopher Purzycki, University of Wisconsin Milwaukee
  • “Using Theme to Author Hypertext Fiction,” Alex Mitchell, National University at Singapore

Session 3.3: Present Future Past
MacLaurin D110
Chair: David Roh, University of Utah

  • “Exploring Potentiality,” Daniela Côrtes Maduro, Universität Bremen
  • “Programming the Kafkaesque Mechanism,” by Kristof Anetta, Slovak Academy of Sciences
  • “Reappraising Word Processing,” Matthew Kirschenbaum, University of Maryland College Park

Session 3.4: Beyond Collaborative Horizons
MacLaurin D010
Panel
Chair: Jeremy Douglass, UC Santa Barbara

  • Jeremy Douglass, UC Santa Barbara
  • Mark Marino, USC
  • Jessica Pressman, San Diego State University

Session 3.5: E-Loops: Reshuffling Reading & Writing In Electronic Literature Works
MacLaurin D105
Panel
Chair: Gwen Le Cor, Université Paris 8

  • “The Plastic Space of E-loops and Loopholes: the Figural Dynamics of Reading,” Gwen Le Cor, Université Paris 8
  • “Beyond the Cybernetic Loop: Redrawing the Boundaries of E-Lit Translation,” Arnaud Regnauld, Université Paris 8
  • “E-Loops: The Possible and Variable Figure of a Contemporary Aesthetic,” Ariane Savoie, Université du Québec à Montréal and Université Catholique de Louvain
  • “Relocating the Digital,” Stéphane Vanderhaeghe, Université Paris 8

Session 3.6: Metaphorical Perspectives
MacLaurin D107
Chair: Alexandra Saum-Pascual, UC Berkeley

  • “Street Ghosts,” Ali Rachel Pearl, USC
  • “The (Wo)men’s Social Club,” Amber Strother, Washington State University Vancouver

Session 3.7: Embracing Bots
MacLaurin D101
Roundtable
Chair: Zach Whalen, University of Mary Washington

  • Leonardo Flores, University of Puerto Rico Mayagüez Campus
  • Chris Rodley, University of Sydney
  • Élika Ortega, University of Kansas
  • Katie Rose Pipkin, Carnegie Mellon

1:30-3:30: Workshops
MacLaurin D115

  • “Bots,” Zach Whalen, University of Mary Washington
  • “Twine”
  • “AR/VR,” John Murray, UC Santa Cruz
  • “Unity 3D,” Stefan Muller Arisona, University of Applied Sciences and Arts Northwestern; Simon Schubiger, University of Applied Sciences and Arts Northwestern
  • “Exploratory Programming,” Nick Montfort, MIT
  • “Scalar,” Hannah Ackermans, University of Utrecht
  • “The Electronic Poet’s Workbench: Build a Generative Writing Practice,” Andrew Klobucar, New Jersey Institute of Technology; David Ayre, Programmer and Independent Artist

3:30-5:00: Keynote

Christine Wilks [emphasis mine], “Interactive Narrative and the Art of Steering Through Possible Worlds”
MacLaurin David Lam Auditorium A144

Wilks is a British digital writer, artist and developer of playable stories. Her digital fiction, Underbelly, won the New Media Writing Prize 2010 and the MaMSIE Digital Media Competition 2011. Her work is published in online journals and anthologies, including the Electronic Literature Collection, Volume 2 and the ELMCIP Anthology of European Electronic Literature, and has been presented at international festivals, exhibitions and conferences. She is currently doing a practice-based PhD in Digital Writing at Bath Spa University and is also Creative Director of e-learning specialists, Make It Happen.

5:15-6:45: Screenings at Cinecenta
A complete list of artists featured in the Screenings

7:00-9:00: Banquet (a dance follows)
University of Victoria Faculty Club

Sunday, June 12, 2016
8:30-10:00: Town Hall
MacLaurin David Lam Auditorium D144

10:00: Gallery exhibit opens
MacLaurin A100
A complete list of artists featured in the Exhibit

10:30-12 p.m.: Concurrent Session 4

Session 4.1: Narratives & Narrativity
MacLaurin D110
Chair: Kedrick James, University of British Columbia

  • “Narrativity in Virtual Reality,” Illya Szilak, Independent Scholar
  • “Simulation Studies,” David Ciccoricco, University of Otago
  • “Future Fiction Storytelling Machines,” Caitlin Fisher, York University

Session 4.2: Historical & Critical Perspectives
MacLaurin D101
Chair: Robert Glick, Rochester Institute of Technology

  • “The Evolution of E-Lit,” James O’Sullivan, University of Sheffield
  • “The Logic of Selection,” by Matti Kangaskoski, Helsinki University

Session 4.3: Emergent Media
MacLaurin D107
Chair: Alexandra Saum-Pascual, UC Berkeley

  • “Seasons II: a case study in Ambient Video, Generative Art, and Audiovisual Experience,” Jim Bizzocchi, Simon Fraser University; Arne Eigenfeldt, Simon Fraser University; Philippe Pasquier, Simon Fraser University; Miles Thorogood, Simon Fraser University
  • “Cinematic Turns,” Liz Losh, College of William and Mary
  • “Mario Mods and Ludic Seriality,” Shane Denson, Duke University

Session 4.4: The E-Literary Object
MacLaurin D109
Chair: Deena Larsen, Independent Artist

  • “How E-Literary Is My E-Literature?,” by Leonardo Flores, University of Puerto Rico Mayagüez Campus
  • “Overcoming the Locative Interface Fallacy,” by Lauren Burr, University of Waterloo
  • “Interactive Narratives on the Block,” Aynur Kadir, Simon Fraser University

Session 4.5: Next Narrative
MacLaurin D010
Panel
Chair: Marjorie Luesebrink

  • Marjorie Luesebrink, Independent Artist
  • Daniel Punday, Independent Artist
  • Will Luers, Washington State University Vancouver

10:30-12 p.m.: Action Session Day 3
MacLaurin D111

  • Digital Preservation, Nicholas Schiller, Washington State University Vancouver; Zach Coble, NYU
  • ELMCIP, Allison Parrish, Fordham University; Scott Rettberg, University of Bergen; David Nunez Ruiz, Neotipo; Hannah Ackermans, Utrecht University
  • Wikipedia-A-Thon, Liz Losh, College of William and Mary

12:15-1:30: Artists Talks & Lunch
David Lam Auditorium A144

  • “Just for the Cameras,” Flourish Klink, Independent Artist
  • “Lulu Sweet,” Deanne Achong and Faith Moosang, Independent Artists
  • “Drone Pilot,” Ian Hatcher, Independent Artist
  • “AVATAR/MOCAP,” Alan Sondheim, Independent Artist

1:30-3:00: Concurrent Session 5

Session 5.1: Subversive Texts
MacLaurin D101
Chair: Michael Rabby, Washington State University Vancouver

  • “E-Lit Jazz,” Sandy Baldwin, Rochester Institute of Technology; Rui Torres, University Fernando Pessoa
  • “Pop Subversion in Electronic Literature,” Davin Heckman, Winona State University
  • “E-Lit in Arabic Universities,” Riham Hosny, Rochester Institute of Technology/Minia University

Session 5.2: Experiments in #NetProv & Participatory Narratives
MacLaurin D109
Roundtable
Chair: Mia Zamora, Kean University

  • Mark Marino, USC
  • Rob Wittig, Meanwhile… Netprov Studio
  • Mia Zamora, Kean University

Session 5.3: Emergent Media
MacLaurin D105
Chair: Andrew Klobucar, New Jersey Institute of Technology

  • “Migrating Electronic Literature to the Kinect System,” Monika Gorska-Olesinka, University of Opole
  • “Mobile and Tactile Screens as Venues for the Performing Arts?,” Serge Bouchardon, Sorbonne Universités, Université de Technologie de Compiègne
  • “The Unquantified Self: Imagining Ethopoiesis in the Cognitive Era,” Andrew Klobucar, New Jersey Institute of Technology

Session 5.4: E-Lit Labs
MacLaurin D010
Chair: Jim Brown, Rutgers University Camden

  • Jim Brown, Rutgers University Camden
  • Robert Emmons, Rutgers University Camden
  • Brian Greenspan, Carleton University
  • Stephanie Boluk, UC Davis
  • Patrick LeMieux, UC Davis

Session 5.5: Transmedia Publishing
MacLaurin D107
Roundtable
Chair: Philippe Bootz

  • Philippe Bootz, Université Paris 8
  • Lucile Haute, Université Paris 8
  • Nolwenn Trehondart, Université Paris 8
  • Steve Wingate, South Dakota State University

Session 5.6: Feminist Horizons
MacLaurin D110
Panel
Moderator: Anastasia Salter, University of Central Florida

  • Kathi Inman Berens, Portland State University
  • Jessica Pressman, San Diego State University
  • Caitlin Fisher, York University

3:30-5:00: Closing Session
David Lam Auditorium MacLaurin A144
Chairs: John Cayley, Brown University; Dene Grigar, President, ELO

  • “Platforms and Genres of Electronic Literature,” Scott Rettberg, University of Bergen
  • “Emergent Story Structures,” David Meurer, York University
  • “We Must Go Deeper,” Samantha Gorman, USC; Milan Koerner-Safrata, Recon Instruments

I’ve bolded two names: Christine Wilks, one of two conference keynote speakers, who completed her MA in the same cohort as mine in De Montfort University’s Creative Writing and New Media master’s programme. Congratulations on being a keynote speaker, Christine! The other name belongs to Kate Pullinger, who was one of two readers for that same MA programme. Since those days, Pullinger has won a Governor General’s Award for her fiction, “The Mistress of Nothing,” and become a professor at Bath Spa University (UK).

Registration appears to be open.