An October 16, 2023 notice (received via email) from Toronto’s ArtSci Salon makes this performance announcement,
PATTERNS FROM NATURE
Saturday, November 4th, 2023
Isabel Bader Theatre, 93 Charles St West, Toronto
8pm, Free admission
An ambitious new project is set to captivate audiences with a mesmerizing fusion of physics, film, and music.
Physicist Stephen Morris, filmmakers Udo Prinsen, Gita Blak, Lee Hutzulak, Tina de Groot, and composer/saxophonist Quinsin Nachoff are joining forces to explore Morris’ area of research, Emergent Patterns in Nature, through a captivating multimedia experience.
The centerpiece of this innovative project is a four-movement, 40-minute work. Each filmmaker will delve into a specific research area within Emergent Patterns in Nature, exploring Branches, Flow, Cracks, and Ripples. Collaborating closely, the team will draw inspiration from one another’s progress as they feed it into the composition and filmmaking processes. These areas were inspired, in part, by Philip Ball’s books about patterns in nature.
Blending jazz and classical elements, the composition will be performed by a chamber ensemble featuring woodwinds, brass, percussion, piano, string quartet, bass, drum set, and conductor. Renowned soloists, including clarinetist François Houle, the Molinari String Quartet, pianists Matt Mitchell and Santiago Leibson, bassist Carlo De Rosa, drummer Satoshi Takeishi, trombonist Ryan Keberle, and saxophonist Quinsin Nachoff, will contribute their virtuosity to the performance.
This groundbreaking collaboration promises to transport audiences on an immersive journey through the wonders of Emergent Patterns in Nature. The result will be an unforgettable multimedia experience that pushes the boundaries of creativity and innovation.
Toronto ensemble: Camille Watts (flute), François Houle (clarinet), Quinsin Nachoff (tenor saxophone), Peter Lutek (bassoon), Jason Logue (trumpet), David Quackenbush (French horn), Ryan Keberle (trombone), Mark Duggan (percussion), Santiago Leibson (piano), Carlo De Rosa (bass), Satoshi Takeishi (drums), Molinari String Quartet (Olga Ranzenhofer – violin I, Antoine Bareil – violin II, Frédéric Lambert – viola, Pierre-Alain Bouvrette – cello), JC Sanford (conductor)
In co-presentation with Art-Sci Salon at the Fields Institute for Research in Mathematical Sciences and the University of Toronto Department of Physics
Made possible, in part, through the generous support of The Canada Council for the Arts, the Fields Institute for Research in Mathematical Sciences and the Conseil des Arts et des Lettres du Québec
Just register here. There’s also a roundtable discussion scheduled for the next day,
Let’s talk it up
An October 20, 2023 notice (received via email) from Toronto’s ArtSci Salon makes this related event announcement,
Patterns from Nature is a new project intersecting physics, film, and music. Physicist Stephen Morris, filmmakers Udo Prinsen, Gita Blak, Lee Hutzulak, Tina de Groot, and composer/saxophonist Quinsin Nachoff joined forces to explore Morris’ area of research, Emergent Patterns in Nature, through a captivating multimedia experience.
The transdisciplinary collaborations that led to this complex multimedia and multidimensional work raise many questions:
Why and in what way did science, film and music come to converge in this work?
What role did each participant play?
How did they interpret the scientific findings informing the project?
What were the inspirations, the concerns, the questions that each participant brought into this work?
Come with your questions and curiosity, and share your experience and feedback with the protagonists.
For those who can’t get to the Toronto events, there is a recently published (2023) book about an art/physics collaboration, Leaning Out of Windows (see September 11, 2023 post/book review).
Join us to welcome Cerpina and Stenslie as they introduce us to their book and discuss the future cuisine of humanity. To sustain the soon-to-be 9 billion global population we cannot count on Mother Earth’s resources anymore. The project explores innovative and speculative ideas about new foods in the field of arts, design, science & technology, rethinking eating traditions and food taboos, and proposing new recipes for survival in times of ecological catastrophes.
To match the topic of their talk, attendees will be presented with “anthropocene snacks” and will be encouraged to discuss food alternatives and new networks of solidarity to fight food deserts, waste, and unsustainable consumption.
This is a Hybrid event: our guests will join us virtually on zoom. Join us in person at Glendon Campus, rm YH190 (the studio next to the Glendon Theatre) for a more intimate community experience and some anthropocene snacks. If you wish to join us on Zoom, please
This event is part of a series on Emergent Practices in Communication, featuring explorations on interspecies communication and digital networks; land-based justice and collective care. The full program can be found here
This initiative is supported by York University’s Teaching Commons Academic Innovation Fund
Zane Cerpina is a multicultural and interdisciplinary female author, curator, artist, and designer working with the complexity of socio-political and environmental issues in contemporary society and in the age of the Anthropocene. Cerpina earned her master’s degree in design from AHO – The Oslo School of Architecture and Design and a bachelor’s degree in Art and Technology from Aalborg University. She resides in Oslo and is a project manager/curator at TEKS (Trondheim Electronic Arts Centre). She is also a co-founder and editor of EE: Experimental Emerging Art Journal. From 2015 to 2019, Cerpina was a creative manager and editor at PNEK (Production Network for Electronic Art, Norway).
Stahl Stenslie works as an artist, curator and researcher specializing in experimental media art and interaction experiences. His aesthetic focus is on art and artistic expressions that challenge ordinary ways of perceiving the world. Through his practice he asks the questions we tend to avoid – or where the answers lie in the shadows of existence. Keywords of his practice are somaesthetics, unstable media, transgression and numinousness. The technological focus in his works is on the art of the recently possible – such as i) panhaptic communication on Smartphones, ii) somatic and immersive soundspaces, and iii) design of functional and lethal artguns, 3D printed in low-cost plastic material. He has a PhD on Touch and Technologies from The School of Architecture and Design, Oslo, Norway. Currently he heads the R&D department at Arts for Young Audiences Norway.
If you’re interested in the book, there’s the anthropocenecookbook.com, which has more about the book and purchase information,
The Anthropocene Cookbook is by far the most comprehensive collection of ideas about future food from the perspective of art, design, and science. It is a thought-provoking book about art, food, and eating in the Anthropocene – the Age of Man – and the age of catastrophes.
Published by The MIT Press [MIT = Massachusetts Institute of Technology] | mitpress.mit.edu
Supported by TEKS Trondheim Electronic Arts Centre | www.teks.no
*Date changed* Streaming Carbon Footprint on October 27, 2023
Keep scrolling down to the “Date & location changed for Streaming Carbon Footprint” subhead.
From the Toronto ArtSci Salon October 5, 2023 announcement,
Oct 27, [2023] 5:00-7:00 PM [ET] Streaming Carbon Footprint
with Laura U. Marks and David Rokeby – Room 230 The Fields Institute for Research in Mathematical Sciences 222 College Street, Toronto
We are thrilled to announce this dialogue between media theorist Laura U. Marks and media artist David Rokeby. Together, they will discuss a well-known elephant in the room of media and digital technologies: their carbon footprint. As social media and streaming media usage increases exponentially, what can be done to mitigate their impact? Are there alternatives?
This is a live event: our guests will join us in person.
If you wish to join us on Zoom instead, a link will be circulated on our website and on social media a few days before the event. The event will be recorded.
Laura U. Marks works on media art and philosophy with an intercultural focus, and on small-footprint media. She programs experimental media for venues around the world. As Grant Strate University Professor, she teaches in the School for the Contemporary Arts at Simon Fraser University in Vancouver, Canada. Her upcoming book The Fold: From Your Body to the Cosmos will be published in March 2024 by Duke University Press.
David Rokeby is an installation artist based in Toronto, Canada. He has been creating and exhibiting since 1982. For the first part of his career he focussed on interactive pieces that directly engage the human body, or that involve artificial perception systems. In the last decade, his practice has expanded to include video, kinetic and static sculpture. His work has been performed / exhibited in shows across Canada, the United States, Europe and Asia.
Awards include the first BAFTA (British Academy of Film and Television Arts) award for Interactive Art in 2000, a 2002 Governor General’s award in Visual and Media Arts and the Prix Ars Electronica Golden Nica for Interactive Art 2002. He was awarded the first Petro-Canada Award for Media Arts in 1988, the Prix Ars Electronica Award of Distinction for Interactive Art (Austria) in 1991 and 1997.
I haven’t been able to dig up any information about registration but it will be added here should I stumble across any in the next few weeks. I did, however, find more information about Marks’s work and a festival in Vancouver (Canada).
Fourth Annual Small File Media Festival (October 20-21, 2023) and the Streaming Carbon Footprint
When was the last time you watched a DVD? If you’re like most people, your DVD collection has been gathering dust as you stream movies and TV from a variety of on-demand services. But have you ever considered the impact of streaming video on the environment?
School for the Contemporary Arts professor Laura Marks and engineering professor Stephen Makonin, with engineering student Alejandro Rodriguez-Silva and media scholar Radek Przedpełski, worked together for over a year to investigate the carbon footprint of streaming media, supported by a grant from the Social Sciences and Humanities Research Council of Canada.
“Stephen and Alejandro were there to give us a reality check and to increase our engineering literacy, and Radek and I brought the critical reading to it,” says Marks. “It was really a beautiful meeting of critical media studies and engineering.”
After combing through studies on Information and Communication Technologies (ICT) and making their own calculations, they confirmed that streaming media (including video on demand, YouTube, video embedded in social media and websites, video conferences, video calls and games) is responsible for more than one per cent of greenhouse gas emissions worldwide. And this number is only projected to rise as video conferencing and streaming proliferate.
“One per cent doesn’t sound like a lot, but it’s significant if you think that the airline industry is estimated to be 1.9 per cent,” says Marks. “ICT’s carbon footprint is growing fast, and I’m concerned that because we’re all turning our energy to other obvious carbon polluters, like fossil fuels, cars, the airline industry, people are not going to pay attention to this silent, invisible carbon polluter.”
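For a sense of the kind of arithmetic that sits behind such estimates, here is a minimal back-of-envelope sketch in Python. Every figure in it (viewing hours, energy per streaming-hour, grid carbon intensity) is an illustrative assumption of mine, not a number taken from the Marks/Makonin report.

```python
# Back-of-envelope estimate of one person's streaming emissions.
# All inputs below are illustrative assumptions, NOT values from the SFU study.
hours_per_day = 2.0        # assumed hours of video streamed daily
kwh_per_hour = 0.08        # assumed kWh per streaming-hour (device + network + data centre)
kg_co2e_per_kwh = 0.4      # assumed grid carbon intensity, kg CO2e per kWh

daily_kg = hours_per_day * kwh_per_hour * kg_co2e_per_kwh
annual_kg = daily_kg * 365
print(f"Roughly {annual_kg:.0f} kg CO2e per year for this viewing pattern")
```

Changing any of the assumed inputs (resolution, grid mix, viewing hours) shifts the result considerably, which is part of why published estimates diverge so widely.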
One thing that Marks found surprising during their research is how politicized this topic is.
Their full report includes a section detailing the International Energy Agency’s attack on French think tank The Shift Project after the latter published a report on streaming media’s carbon footprint in 2019. They found that some ICT engineers state that the carbon footprint of streaming is not a concern because data centres and networks are very efficient, while others say the fast-rising footprint is a serious problem that needs to be addressed. Their report includes comparisons of the divergent figures in engineering studies in order to get a better understanding of the scope of this problem.
The No. 1 thing Marks and Makonin recommend to reduce streaming’s carbon footprint is to ensure that our electricity comes from renewable sources. At an individual level, they offer a list of recommendations to reduce energy consumption and demand for new ICT infrastructure including: stream less, watch physical media including DVDs, decrease video resolution, use audio-only mode when possible, and keep your devices longer—since production of devices is very carbon-intensive.
Promoting small files and low resolution, Marks founded the Small File Media Festival [link leads to 2023 programme], which will present its second annual program [2021] of 5-megabyte films Aug. 10 – 20. As the organizers say, movies don’t have to be big to be binge-worthy.
And now for 2023, here’s a video promoting the upcoming fourth annual festival,
The Streaming Carbon Footprint webpage on the SFU website includes information about the final report and the latest Small File Media Festival. I also found that the Small File Media Festival website includes a link for purchasing tickets,
The Small File Media Festival returns for its fourth iteration! We are delighted to partner with The Cinematheque to present over sixty jewel-like works from across the globe. These movies are small in file size, but huge in impact: by embracing the aesthetics of compression and low resolution (glitchiness, noise, pixelation), they lay the groundwork for a new experimental film movement in the digital age. This year, six lovingly curated programs traverse brooding pixelated landscapes, textural paradises, and crystalline infinities.
Join us Friday, October 20 [2023] for the opening-night program followed by a drinks reception in the lobby and a dance party in the cinema, featuring music by Vancouver electronic artist SAN. We’ll announce the winner of the coveted Small-File Golden Mini Bear during Saturday’s [October 21, 2023] award ceremony! As always, the festival will stream online at smallfile.ca after the live events.
We’re most grateful to our future-forward friends at the Social Sciences and Humanities Research Council of Canada, Canada Council for the Arts, and SFU Contemporary Arts. Thanks to VIVO Media Arts, Cairo Video Festival, and The Hmm for generous distribution and exhibition awards, and to UKRAïNATV, a partner in small-file activism.
Cosmically healthy, community-building, and punk AF, small-file ecomedia will heal the world, one pixel at a time.
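As a rough illustration of just how tight a 5-megabyte budget is, the short Python sketch below works out the average bitrate such a file-size cap allows. The five-minute running time is an assumed example for the calculation, not a festival rule.

```python
# What average bitrate does a 5-megabyte file allow? (illustrative arithmetic only)
budget_bytes = 5 * 1024 * 1024      # 5 MB file-size cap, as promoted by the festival
duration_s = 5 * 60                 # assumed running time of five minutes

avg_bitrate_kbps = budget_bytes * 8 / duration_s / 1000
print(f"Average bitrate available: about {avg_bitrate_kbps:.0f} kbit/s")
```

That works out to roughly 140 kbit/s, a tiny fraction of the few thousand kbit/s commonly used for HD streaming, which helps explain why the festival’s entries lean into glitch, noise, and pixelation rather than fighting them.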
Featuring more than 100 artworks, manuscripts, sound recordings and books, many on display for the first time, Animals: Art, Science and Sound explores how animals have been documented across the world over the last 2,000 years
Season of events includes musicians Cosmo Sheldrake and Cerys Matthews, wildlife photographer Hamza Yassin and ornithologist Mya-Rose Craig, also known as Birdgirl, and more
Complemented by two free displays featuring newly acquired material from animal rights activist Kim Stallwood and award-winning photographer Levon Biss
Animals: Art, Science and Sound (21 April – 28 August 2023) at the British Library reveals how the intersection of science, art and sound has been instrumental in our understanding of the natural world and continues to evolve today.
From an ancient Greek papyrus detailing the mating habits of dogs to the earliest photographs of Antarctic animals and a recording of the last Kauaʻi ʻōʻō songbird, this is the first major exhibition to explore the different ways in which animals have been written about, visualised and recorded.
Journeying through darkness, water, land and air, visitors will encounter striking artworks, handwritten manuscripts, sound recordings and printed publications that speak to contemporary debates around discovery, knowledge, conservation, climate change and extinction. Each zone also includes a bespoke, atmospheric soundscape created using recordings from the Library’s sound archive.
Featuring over 120 exhibits, highlights include:
Earliest known illustrated Arabic scientific work documenting the characteristics of animals alongside their medical uses (c. 1225)
Earliest use of the word ‘shark’ in printed English (1569) on public display for the first time
One of the earliest works on the microscopic world, Micrographia (1665) by Robert Hooke, alongside three insect portraits by photographer Levon Biss (2021) recently acquired by the British Library, which use a combination of microscopy and photography to magnify specimens collected by Charles Darwin in 1836 and Alfred Russel Wallace circa 1859
Leonardo da Vinci’s notes (1500-08) on the impact of wind on a bird in flight, on public display for the first time
One of the rarest ichthyology publications ever produced, The Fresh-Water Fishes of Great Britain (1828-38), with hand painted illustrations by Sarah Bowdich
First commercially published recording of an animal from 1910 titled Actual Bird Record Made by a Captive Nightingale (No. I) by The Gramophone Company Limited
One of the earliest examples of musical notation being used to represent the songs and calls of birds from 1650 by Athanasius Kircher
One of the earliest portable bat detectors, the Holgate Mk VI, used by amateur naturalist John Hooper during the 1960s-70s to capture some of the first sound recordings of British bats
Cam Sharp Jones, Visual Arts Curator at the British Library, said: ‘Animals have fascinated people for as long as human records exist and the desire to study and understand other animals has taken many forms, including textual and artistic works. This exhibition is a great opportunity to showcase some of the earliest textual descriptions of animals ever produced, as well as some of the most beautiful, unique and strange records of animals that are cared for by the British Library.’
Cheryl Tipp, Curator of Wildlife and Environmental Sound at the British Library, said: ‘Sound recording has allowed us to uncover aspects of animal lives that just would not have been possible using textual or visual methods alone. It has been used to reclassify species, locate previously unknown populations and allowed us to eavesdrop on worlds that would otherwise be inaudible to our ears. It is such an emotive medium and I hope visitors will be inspired to explore the Library’s collections, as well as tune in to the sounds of the natural world in their everyday lives.’
[Note: All of the events have taken place.] There is a season of in-person and online events inspired by the exhibition, such as a Late at the Library with musician, composer and producer Cosmo Sheldrake hosted by musician, author and broadcaster Cerys Matthews and Animal Magic: A Night of Wild Enchantment where five speakers, including wildlife cameraman, ornithologist and Strictly Come Dancing winner Hamza Yassin and birder, environmentalist and diversity activist, Mya-Rose Craig, each have 15 minutes to tell a story. There is a family event on Earth Day 22 April where Art Fund’s The Wild Escape epic-scale digital landscape featuring children’s images of animals will be unveiled. A selection of these works are included in an outdoor exhibition around Kings Cross.
A richly illustrated publication by the British Library with interactive QR technology allowing readers to listen to sound recordings and a free trail for families and groups also accompanies the exhibition.
The exhibition is made possible with support from Getty through The Paper Project initiative and PONANT. With thanks to The American Trust for the British Library and The B.H. Breslauer Fund of the American Trust for the British Library. Audio soundscapes created by Greg Green with support from the Unlocking our Sound Heritage project, made possible by the National Lottery Heritage Fund. Scientific advice provided by ZSL (the Zoological Society of London).
Animals: Art, Science and Sound is complemented by two free displays at the British Library. Animal Rights: From the Margins to the Mainstream (7 May – 9 July 2023) in the Treasures Gallery draws on published, handwritten and ephemeral works from the Library’s collection relating to animal welfare. It features newly acquired material collected by animal rights activist Kim Stallwood who will be in conversation at the Library about the history of animal welfare legislation. Microsculpture (12 May – 20 November 2023) showcases nine portraits by photographer Levon Biss that capture the microscopic form and evolutionary adaptations of insects in striking large-format, high-resolution detail.
Animals: Art, Science and Sound draws on the British Library’s role as home to the UK’s national sound archive, one of the largest collections of sound recordings in the world. With over 6.5 million items of speech, music and wildlife, this includes audio from the advent of recording to the present day, and over 70,000 recordings are freely available online at sounds.bl.uk and in the British Library’s Sound Gallery in St Pancras.
Opening on 2 June [2023], Digital Storytelling features publications that use new technologies to reimagine reading experiences
Visitors will discover a range of digital stories, on display together for the first time, including four-time BAFTA nominated 80 Days, an interactive adaptation of Jules Verne’s Around the World in Eighty Days, and the exclusive public preview of Windrush Tales, the world’s first interactive narrative game based on the experiences of Caribbean immigrants in post-war Britain
Also on display will be interactive media providing insights into the lived stories behind historical events, from the 2011 Egyptian uprising in A Dictionary of the Revolution to a moving account of the loss of a relative in the Manchester Arena Bombing in c ya laterrrr.
The British Library has announced it will be opening a new exhibition, Digital Storytelling (2 June – 15 October 2023), that explores how evolving online technologies have changed how writers write, and readers read.
The narratives featured in the exhibition will prompt visitors to consider what new possibilities emerge when they are invited as readers to become a part of the story themselves. Visitors will get to discover how technology can be used to enhance their reading experience, from Zombies, Run!, the widely popular audio fiction fitness app, to Breathe, a ghost story that “follows the reader around”, reacting to users’ real-time location data.
On display for the first time is a playable preview of Windrush Tales, the world’s first interactive narrative game based on the lived experiences of Caribbean immigrants in post-World War II Britain. The game is still in development; the preview is its first public launch, and is made exclusively available for the exhibition by 3-Fold Presents. The exhibition also premieres a new edition of This is a Picture of Wind, with a new sequence of poems inspired by Derek Jarman’s writing about his garden. This is a Picture of Wind was originally written in response to severe storms in the South West of England in 2014. [I found the attribution a little puzzling; hopefully, I haven’t added to the confusion. Note 1: This is a Picture … is a web-based project from J.R. Carpenter, see more in this January 22, 2018 posting on the IOTA Institute website; Note 2: As for Derek Jarman, there’s this “… if modern gardening has a patron saint, it must be the English artist, filmmaker, and LGBT rights activist Derek Jarman (January 31, 1942–February 19, 1994)”; as for writing about his garden, “The record of this healing creative adventure became Jarman’s Modern Nature (public library)— part memoir and part memorial, …” both Jarman excerpts are from Maria Popova’s April 4, 2021 posting on the marginalian; Note 3: There are accounts of the 2014 storms mentioned in the IOTA posting but sources are not specified]
Items on display will also explore how writers and artists can provide an empathetic look into the lived realities behind the news. Digital Storytelling illustrates this through A Dictionary of the Revolution, which charts the evolution of political language in Egypt during the uprising in 2011. Another work, c ya laterrrr, is an intimate autobiographical hypertext account of the loss of author Dan Hett’s brother in the 2017 Manchester Arena terrorist attack.
Visitors will also get to experience the wide-ranging possibilities of historical immersion and alternate story-worlds through these emerging formats. The exhibition will feature Astrologaster, an award-winning interactive comedy based on the archival casebooks of Elizabethan medical astrologer Simon Forman, and Clockwork Watch, a transmedia collaborative story set in a steampunk Victorian England.
Giulia Carla Rossi, Curator for Digital Publications in Contemporary British Published Collections and co-curator of the exhibition, says:
“In 2023 we’re celebrating the 50th anniversary of the British Library. Over the last half a century, digital technologies have transformed how we communicate, research and consume media – and this shift is reflected in the growth of digital stories in the Library’s ever-growing collection. In recognition of this evolution in communication, we are thrilled to present Digital Storytelling, the first exhibition of its kind at the British Library. Working closely with artists and creators, the exhibition draws on the Library’s expertise in collecting and preserving innovative online publications and reflects the rapidly evolving concept of interactive writing. At the core of all the items on display are rich narratives that are dynamic, responsive, personalised and evoke for readers the experience of getting lost in a truly immersive story.”
A season of in-person events inspired by the exhibition will feature writers, creators and academics:
Late at the Library: Digital Steampunk. Immerse yourself in the Clockwork Watch story world, party with Professor Elemental and explore 19th century London in Minecraft, Friday 13th October 2023.
As you can see, two of the Digital Storytelling events have yet to take place.
This exhibit too has a fee.
You can find the British Library website here. (Click on Visit for the address and other details.) Some exhibits are free and others require a fee. I cannot find information about an all access pass, so, it looks like you’ll have to pay individual fees for the exhibits that require them. Members get free access to all exhibits.
Not sure how I ended up on a National Film Board of Canada (NFB) list but this morning (August 14, 2023) their latest emailed newsletter provided a thrill. From the August 11, 2023 NFB newsletter,
Montreal premiere: Ask Noam Chomsky anything in this interactive VR experience
CHOM5KY vs CHOMSKY is an interactive virtual reality installation created by Sandra Rodriguez that lets you have a whole conversation with an AI-generated version of public intellectual Noam Chomsky. Be one of the first people to experience it at the NFB Space in Montreal.
Artificial intelligence is everywhere—from the photo enhancer in your smartphone to self-parking cars and the virtual assistant in your kitchen. But what is it exactly?
CHOM5KY vs CHOMSKY: A playful conversation on AI is an engaging and collaborative virtual reality experience that invites us to examine the promises and pitfalls of AI. If machine intelligence is promoted as an inevitable future, we should all be able to ask: What are we hoping to achieve with it? And at what cost?
Visitors use VR headsets to enter the AI world, where they are greeted by CHOM5KY—an artificial entity inspired by and built from the vast array of digital traces of renowned professor Noam Chomsky. CHOM5KY is a friend and serves as a guide, inviting us to peek under the hood of machine-learning systems, and offering thought-provoking takes on how artificial intelligence intersects with human life.
Why Noam Chomsky? [emphasis mine] Professor Chomsky is a philosopher, social critic, historian and political activist, but is perhaps best known for his work in linguistics and cognitive science, the study of the mind. As one of the most recorded and digitized living intellectuals, he has left behind an extensive wake of data traces, enough to create an AI system based on his legacy. Chomsky is also skeptical about the pompous promises made of AI. Which makes him the perfect guide to encourage visitors to question everything they see—and help demystify AI.
Sandra Rodriguez, Ph.D., is a director/producer and sociologist of new media technology. She has written and directed documentary features, web docs and VR, XR and AI experiences that have garnered multiple awards (including a Peabody, best VR awards at DOK Leipzig and the PRIX NUMIX, and the prestigious Golden Nica at the Prix Ars Electronica). She has served as UX lead and consultant for esteemed institutions such as CBC/Radio-Canada and the United Nations. Fascinated by storytelling and emergent technology’s potential for social change, Sandra has created a body of work that spans AI-dance performance, multi-user VR and large-scale XR installations. She is a Sundance Story Lab Fellow and MacArthur Grantee. She is also a Scholar and Lecturer at the Massachusetts Institute of Technology (MIT), where she leads “Hacking XR,” MIT’s first official class in immersive media creation. [Note: XR is Extended Reality]
SCHNELLE BUNTE BILDER Co-producers
The media art collective SCHNELLE BUNTE BILDER was founded in Berlin in 2011. Technically on the cutting edge, their hearts beat for art, culture and science. Together with curators, musicians and other artists, they develop productions for exhibitions and cultural events. Somewhere between art and technology, they combine classical and generative animation with creative coding and critical design to create extraordinary media scenography. Since then, the studio has grown organically and currently consists of a solid core of designers, artists and developers: Michael Burk, Ann-Katrin Krenz, Felix Worseck, Niklas Söder and Johannes Lemke.
Marie-Pier Gauthier Producer
Marie-Pier Gauthier is a producer at the NFB’s Montreal Interactive Studio and has been contributing to this storytelling laboratory for the past 12 years. Whether it’s digital creations on mobile devices or the web, interactive installations, or virtual or augmented reality experiences, she guides and supervises projects by innovative creators working at the crossroads of disciplines, who use a range of storytelling tools, including social networks, code, design, artificial intelligence and conversational robots. Marie-Pier Gauthier has collaborated on more than 100 interactive works (The Enemy, Do Not Track, Way To Go, Motto) that have received over 100 awards in Canada and abroad.
Laurence Dolbec Producer
Laurence Dolbec is a producer in the interactive studio at the National Film Board of Canada. She has more than 12 years of experience working in production, notably for some of Quebec’s most creative institutions including Place des Arts, TOHU and C2 Montréal. Laurence started her career in New York City working for Livestream, which is now part of Vimeo. Her most recent productions explore the spheres of artificial intelligence and knowledge.
Louis-Richard Tremblay Executive Producer
Louis-Richard Tremblay has been an executive producer with the French Program’s Interactive Studio since 2019. He first stepped into a producer role with the NFB in 2013, after a dozen or so years at CBC/Radio-Canada. Fascinated by the power of interactive experiences and media of all kinds, he has guided numerous international co-productions at the NFB, helped produce dozens of award-winning works in Canada and internationally, and regularly participates in panels, conferences and master classes.
CREDITS
Created by Sandra Rodriguez, CHOM5KY vs. CHOMSKY is a co-production by the National Film Board of Canada and SCHNELLE BUNTE BILDER, with support from the Medienboard Berlin-Brandenburg.
…
Two Oddities: Berlin and Moov AI and tickets for the Canadian premiere
World Premiere in Berlin on 4 Nov 2022 at Berlin Science Week
…
CHOM5KY vs CHOMSKY is a Co-production between the National Film Board of Canada and the Studio SCHNELLE BUNTE BILDER based in Berlin, supported by Medienboard Berlin-Brandenburg.
Starting September 6 [2023] in Montreal. Buy your tickets! Explore the world of artificial intelligence with an engaging and collaborative virtual reality experience by Sandra Rodriguez that examines the promises and pitfalls of AI.
The virtual reality experience takes about 25 minutes, but each timeslot runs 45 minutes, to take into account the introduction and getting set up. Please arrive 5 minutes early.
…
How much does a ticket cost?
We offer a general admission ticket, for $26 + applicable taxes.
I’m not sure why Moov AI (based in Montreal, Canada) doesn’t appear in the most recent credits for the project, but they host that teaser as an example from one of their projects,
Replicate Noam Chomsky’s persona
…
Chomsky vs. Chomsky is a virtual reality and artificial intelligence immersive experience that showcases an interaction guided by CHOMSKY_AI, the virtual host built from digital traces of Noam Chomsky.
Sandra Rodriguez is the director of this project, which was realized in collaboration with the NFB, the MIT Open Documentary Lab, Schnellebuntebilder, and Moov AI.
…
Using the digital traces left by Noam Chomsky and archives of his interviews, our team built an AI conversational agent that replicates his personality and cynical humor.
This chatbot is at the heart of the technical solution we developed and deployed to support the experience and link the creative vision of the project’s director to the technical requirements to ensure a fluid and immersive experience.
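To give a sense of what a conversational agent built on an archive of someone’s words can look like at its very simplest, here is a minimal retrieval-style sketch in Python. It is purely illustrative: the tiny made-up corpus, the TF-IDF matching, and the function names are my assumptions, not the NFB/Moov AI implementation, which the project describes only in general terms.

```python
# Purely illustrative sketch of a retrieval-style conversational agent
# over a text archive. NOT the CHOMSKY_AI system; the snippets are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

archive = [
    "Language is a core part of human nature.",                       # hypothetical archive snippets
    "We should be skeptical of grand claims made about machine intelligence.",
    "Questions about meaning matter more than raw computation.",
]

vectorizer = TfidfVectorizer().fit(archive)
archive_vectors = vectorizer.transform(archive)

def reply(question: str) -> str:
    """Return the archived passage most similar to the visitor's question."""
    question_vector = vectorizer.transform([question])
    scores = cosine_similarity(question_vector, archive_vectors)[0]
    return archive[scores.argmax()]

print(reply("What do you think about artificial intelligence?"))
```

A production system like CHOM5KY_AI presumably layers far more on top (speech, generative language models, dialogue management), but the underlying idea of grounding replies in a person’s recorded archive is the one the project describes.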
…
AI to power a chatbot.
After the immense success of the prototype at Sundance 2020, the teams involved in the project are hard at work completing the final phase of production of Chomsky vs. Chomsky.
The project team is equipped with a CHOMSKY_AI conversational device that is true to the director’s artistic vision and allows thousands of people worldwide to chat with the digital doppelgänger of such a significant figure in contemporary history.
What a privilege!
Canadians and Noam Chomsky
“Manufacturing Consent: Noam Chomsky and the Media” was one of the most successful feature documentaries in Canadian history. From the Manufacturing Consent (film) Wikipedia entry, Note: Links have been removed,
Manufacturing Consent: Noam Chomsky and the Media[1] is a 1992 documentary film that explores the political life and ideas of linguist, intellectual, and political activist Noam Chomsky. Canadian filmmakers Mark Achbar and Peter Wintonick expand the analysis of political economy and mass media presented in Manufacturing Consent, a 1988 book Chomsky wrote with Edward S. Herman.
Funny, provocative and surprisingly accessible, MANUFACTURING CONSENT explores the political life and ideas of world-renowned linguist, intellectual and political activist Noam Chomsky. Through a dynamic collage of biography, archival gems, imaginative graphics and outrageous illustrations, Mark Achbar and Peter Wintonick’s award-winning documentary highlights Chomsky’s probing analysis of mass media and his critique of the forces at work behind the daily news. Available for the first time anywhere on DVD, MANUFACTURING CONSENT features appearances by journalists Bill Moyers and Peter Jennings, pundit William F. Buckley Jr., novelist Tom Wolfe and philosopher Michel Foucault. This Edition features an exclusive ten-years-after video interview with Chomsky.
Taryn Southern, a storyteller, filmmaker and speaker covering emerging technology who created an award-winning virtual reality (VR) series, as well as AI music and a sci-fi documentary, has joined the judging panel of Insilico Medicine’s Docuthon competition.
Southern has been a sharp observer of the influence and rise of technology. She first gained public notice at age 17 as a semi-finalist on American Idol and later became a YouTube sensation, garnering more than 1 billion views. Soon after, she began actively pursuing her creative interests in emerging technologies and the possibilities of artificial intelligence (AI) and VR to improve human life and potential.
Clinical stage end-to-end AI drug discovery company Insilico Medicine (“Insilico”) launched the Docuthon (documentary hackathon) competition to invite participants from around the world to tell the story of AI drug discovery, using footage captured over the Company’s nearly decade-long journey. The competition provides a way for participants to share the achievements of generative AI in advancing new medicines through the story of Insilico’s lead drug for the rare lung disease idiopathic pulmonary fibrosis, which was discovered and designed by generative AI and has now entered Phase II clinical trials with patients.
Southern says that as a young breast cancer survivor she is personally motivated to support AI drug discovery.
“I can speak as someone with experience who has been diagnosed with a life-threatening disease, in my case stage 3 cancer,” Southern says. “When you’re in that situation you are looking for any possibility of hope. As we are just now beginning to see, AI-enabled drug discovery will rapidly shift the realm of possibility for these patients.”
Since her breakout YouTube success in 2007, Southern has gone on to produce digital content and advise companies such as Airbnb, Condé Nast, Marriott, and Ford. She also released the world’s first pop album composed with AI, created an award-winning animated VR series for Google, made a video clone of herself, and directed and produced a documentary about the future of brain-computer interfaces called I AM HUMAN which premiered at the Tribeca Film Festival in 2019.
Southern is also a three-time Streamy Award nominee, an AT&T Film Award Winner, one of the Top 20 Women in VR (VRScout), and was featured as part of Ford’s national “She’s Got Drive” campaign. She sits on the board of the National Academy of Medicine’s Longevity Challenge, which aims to award breakthroughs in longevity science, and invests in emerging tech companies like Cue, Oura, Vessel, Aspiration, and others.
Docuthon categories include best feature, best short, best curated, and most creative with prize amounts ranging from $4,000 to $8,000. Interested participants are invited to register. Submissions are due Aug. 31, 2023. [emphasis mine]
Southern says she’ll be looking for Docuthon submissions that connect on a human level.
“With storytelling about technology, it’s important to not forget the human piece,” she says, “really focusing on the impact this will have on humanity and the people who are creating the technology and their personal stories.”
“We’re thrilled to have Taryn join us as a Docuthon judge,” says Alex Zhavoronkov, PhD, founder and CEO of Insilico Medicine. “She brings creative vision to all of her projects and really understands how to tell compelling stories around emerging technologies.”
About Insilico Medicine
Insilico Medicine, a clinical stage end-to-end artificial intelligence (AI)-driven drug discovery [AIDD] company, is connecting biology, chemistry, and clinical trials analysis using next-generation AI systems. The company has developed AI platforms that utilize deep generative models, reinforcement learning, transformers, and other modern machine learning techniques for novel target discovery and the generation of novel molecular structures with desired properties. Insilico Medicine is developing breakthrough solutions to discover and develop innovative drugs for cancer, fibrosis, immunity, central nervous system diseases, infectious diseases, autoimmune diseases, and aging-related diseases. www.insilico.com
With an extraordinary design and tremendous calculation speed, artificial intelligence has become an inevitable trend in many areas of drug discovery. It helps identify biological targets, design small molecules for potential cure, and save a ton of research time.
AIDD is good news for all of human society, but society has not yet learned much about this new technology. When did AI enable the first pipeline? What happened when it failed? How did scientists persist along the way?
Inspired by the movie AlphaGo, we believe the AIDD world deserves its own seminal film. Through the DOCUTHON, we seek to bring together documentary filmmakers and enthusiasts with those who believe in the potential of AI and care about human wellbeing.
Insilico Medicine will share a massive collection of footage showing every step of AI-powered drug discovery; participants are also welcome to use original content, including graphics and animations. A group of judges from both the science and film industries will decide the best edited films based on accuracy, creativity, etc.
This is an excellent opportunity to build your scientific storytelling portfolio AND win a big prize. And all the documentaries might be aired on our official websites and national video platforms. For more details, please visit insilico.com/docuthon or email us at event@insilico.ai
Insilico’s ‘splash’ page features four categories (scroll down about 40% of the way), the judging criteria and more details about submission requirements,
Best Feature
🎬 Best long-form entry ⏱ 16-60 min 💵 $6K award
Best Short
🎬 Best short-form entry ⏱ 3-15 min 💵 $4K award
Best Curated
🎬 Best edited storyline among all entries 💵 $5K award
Most Creative
🎬 Most creative format or plot among all entries 💵 $5K award
JUDGING CRITERIA
Comprehensive narration of AIDD [artificial intelligence (AI)-driven drug discovery] and development of Insilico Medicine
Accurate referral and explanation of scientific facts
Creative and interesting approach that holds public attention
You may want to take a look at the Docuthon Competition Agreement (PDF). I’m not a lawyer, but it looks like you’re signing away almost all of your rights.
There isn’t a list of past winners although Insilico seems to have run the contest at least once before, from the YouTube page featuring the company’s introductory Docuthon video (https://www.youtube.com/watch?v=5LmmXEVyqh4), Note: A link has been removed,
704 views Dec 13, 2022
Artificial intelligence-powered drug discovery company Insilico Medicine announces a first-of-its-kind documentary film hackathon called Docuthon to encourage creative scientific exploration. Participants from around the world are invited to use footage provided by the Company to tell the story of Insilico Medicine and of advances in AI-powered drug discovery, an industry now at a tipping point. Films can be submitted as a documentary short or a documentary feature and cash prizes of up to $8,000 USD will be awarded for Best Feature, Best Short, Best Curated, and Most Creative. Submissions will be judged based on their success at telling the story of AI drug development and of Insilico Medicine, on their scientific accuracy, and on the level of creativity and ability to hold the viewer’s interest. Registration for the Docuthon is open through March 1, 2023. Submissions are due in April 2023, and winners will be announced in May 2023. Additional details can be found at: https://insilico.com/docuthon.
I’m not sure if you have to register for this latest version of the contest, as the Eventbrite registration indicates a submission date only, so you may want to contact the organizers.
Good luck and don’t forget the August 31, 2023 deadline!
A new smart material developed by researchers at the University of Waterloo is activated by both heat and electricity, making it the first ever to respond to two different stimuli.
The unique design paves the way for a wide variety of potential applications, including clothing that warms up while you walk from the car to the office in winter and vehicle bumpers that return to their original shape after a collision.
Inexpensively made with polymer nano-composite fibres from recycled plastic, the programmable fabric can change its colour and shape when stimuli are applied.
“As a wearable material alone, it has almost infinite potential in AI, robotics and virtual reality games and experiences,” said Dr. Milad Kamkar, a chemical engineering professor at Waterloo. “Imagine feeling warmth or a physical trigger eliciting a more in-depth adventure in the virtual world.”
The novel fabric design is a product of the happy union of soft and hard materials, featuring a combination of highly engineered polymer composites and stainless steel in a woven structure.
Researchers created a device similar to a traditional loom to weave the smart fabric. The resulting process is extremely versatile, enabling design freedom and macro-scale control of the fabric’s properties.
The fabric can also be activated by a lower voltage of electricity than previous systems, making it more energy-efficient and cost-effective. In addition, lower voltage allows integration into smaller, more portable devices, making it suitable for use in biomedical devices and environment sensors.
“The idea of these intelligent materials was first bred and born from biomimicry science,” said Kamkar, director of the Multi-scale Materials Design (MMD) Centre at Waterloo.
“Through the ability to sense and react to environmental stimuli such as temperature, this is proof of concept that our new material can interact with the environment to monitor ecosystems without damaging them.”
The next step for researchers is to improve the fabric’s shape-memory performance for applications in the field of robotics. The aim is to construct a robot that can effectively carry and transfer weight to complete tasks.
I have two news releases about this research, one from March 2023 focused on the technology and one from May 2023 focused on the graffiti.
Simon Fraser University (SFU) and the technology
While this looks like an impressionist painting (to me), I believe it’s a still from the spatial reality capture of the temple the researchers were studying,
Photo Credit: Simon Fraser University
A March 30, 2023 news item on phys.org announces the latest technology for research on Egyptian graffiti (Note: A link has been removed),
Simon Fraser University [SFU; Canada] researchers are learning more about ancient graffiti—and their intriguing comparisons to modern graffiti—as they produce a state-of-the-art 3D recording of the Temple of Isis in Philae, Egypt.
Working with the University of Ottawa, the researchers published their early findings in Egyptian Archaeology and have returned to Philae to advance the project.
“It’s fascinating because there are similarities with today’s graffiti,” says SFU geography professor Nick Hedley, co-investigator of the project. “The iconic architecture of ancient Egypt was built by those in positions of power and wealth, but the graffiti records the voices and activities of everybody else. The building acts like a giant sponge or notepad for generations of people from different cultures for over 2,000 years.”
As an expert in spatial reality capture, Hedley leads the team’s innovative visualization efforts, documenting the graffiti, their architectural context, and the spaces they are found in using advanced methods like photogrammetry, raking light, and laser scanning. “I’m recording reality in three-dimensions — the dimensionality in which it exists,” he explains.
With hundreds if not thousands of graffiti, some carved less than a millimeter deep on the temple’s columns, walls, and roof, precision is essential.
Typically, the graffiti would be recorded through a series of photographs — a step above hand-drawn documents — allowing researchers to take pieces of the site away and continue working.
Sabrina Higgins, an SFU archaeologist and project co-investigator, says photographs and two-dimensional plans do not allow the field site to be viewed as a dynamic, multi-layered, and evolving space. “The techniques we are applying to the project will completely change how the graffiti, and the temple, can be studied,” she says.
Hedley is moving beyond basic two-dimensional imaging to create a cutting-edge three-dimensional recording of the temple’s entire surface. This will allow the interior and exterior of the temple, and the graffiti, to be viewed and studied at otherwise impossible viewpoints, from virtually anywhere— without compromising detail.
This three-dimensional visualization will also enable researchers to study the relationship between a figural graffito, any graffiti that surrounds it, and its location in relation to the structure of temple architecture.
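For readers curious about what working with spatial reality capture data involves at the most basic level, here is a minimal Python sketch using the open-source Open3D library to load and display a scanned point cloud. The file name is hypothetical, and this is not the Philae project’s actual pipeline, which combines photogrammetry, raking light, and laser scanning.

```python
# Minimal sketch: load, lightly downsample, and view a 3D scan as a point cloud.
# "philae_scan.ply" is a hypothetical file name used only for illustration.
import open3d as o3d

pcd = o3d.io.read_point_cloud("philae_scan.ply")    # read a scanned point cloud from disk
pcd = pcd.voxel_down_sample(voxel_size=0.005)       # thin the cloud to a 5 mm grid for speed
o3d.visualization.draw_geometries([pcd])            # open an interactive 3D viewer
```

Even this toy example hints at the appeal Hedley describes: once a surface exists as 3D data, it can be inspected from viewpoints that would be impossible on site.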
While this is transformative for viewing and studying the temple and its inscriptions, Hedley points to the big-picture potential of applying spatial reality capture technology to the field of archaeology, and beyond.
“Though my primary role in this project is to help build the definitive set of digital wall plans for the Mammisi at Philae, I’m also demonstrating how emerging spatial reality capture methods can fundamentally change how we gather and produce data and transform our ability to interpret and analyze these spaces. This is a space to watch!” says Hedley.
Did Hedley mean to make a pun with the comment used to end the news release? I hope so.
University of Ottawa and ancient Egyptian graffiti
Egypt’s Philae temple complex is one of the country’s most famed archeological sites. It is dedicated to the goddess Isis, who was one of the most important deities in ancient Egyptian religion. The main temple is a stunning example of the country’s ancient architecture, with its towering columns and detailed carvings depicting Isis and other gods.
In a world-first, The Philae Temple Graffiti Project research team was able to digitally capture the temple’s graffiti by recording and studying a novel group of neglected evidence for personal religious piety dating to the Graeco-Roman and Late Antique periods. By using advanced recording techniques, like photogrammetry and laser scanning, researchers were able to create a photographic recording of the graffiti, digitizing them in 3D to fully capture their details and surroundings.
“This is not only the first study of circa 400 figural graffiti from one of the most famous temples in Egypt, the Isis temple at Philae,” explains project director Dr. Jitse H.F. Dijkstra, a professor of Classics in the Faculty of Arts at the University of Ottawa (uOttawa). “It is the first to use advanced, cutting-edge methods to record these signs of personal piety in an accurate manner and within their architectural context. This is digital humanities in action.”
Professor Dijkstra collaborates in the project with co-investigators Nicholas Hedley, a geography professor at Simon Fraser University (SFU), Sabrina Higgins, an archaeologist and art historian also at SFU, and Roxanne Bélanger Sarrazin, a uOttawa alumna, now a post-doctoral fellow at the University of Oslo.
Temple walls reveal their messages
The newly available state-of-the-art technology has allowed the team to uncover hundreds of 2,000-year-old figural graffiti (a type of graffito consisting of figures or images rather than symbols or text) on the Isis temple’s walls. They have also been able to study them from vantage points that would otherwise have been difficult to reach.
Today, graffiti are seen as an art form that serves as a means of communication, to mark a name or ‘tag,’ or to leave a reference to one’s presence at a given site. The 2,000-year-old graffiti of ancient civilisations served a similar purpose. The research team has found drawings – some carved only 1mm deep – of feet, animals, deities and other figures meant to express the personal religious piety of the maker in the temple complex.
Using 3D renderings of the interior and exterior of the temple, the team gained detailed knowledge about where the graffiti are found on the walls, and their meaning. Although the majority of the graffiti are intended to ask for divine protection, others were playful gameboards; Old Egyptian temples functioned as a focus of worship and more ephemeral activities.
A first for this UNESCO heritage site, the innovative fieldwork is at the forefront of Egyptian archaeology and digital humanities (which explores human interactions and culture).
“What ancient Egyptian graffiti have in common with modern graffiti is they are left in places not originally foreseen for that purpose,” adds Professor Dijkstra. “The big difference, however, is that ancient Egyptian graffiti were left by individuals at temples in order to receive divine protection forever, which is why we find hundreds of graffiti on every Egyptian temple’s walls.”
The Philae Temple Graffiti Project was initiated in 2016 under the aegis of the Philae Temple Text Project of the Austrian Academy of Sciences and the Swiss Institute for Architectural and Archaeological Research on Ancient Egypt, Cairo. It is funded by the Social Sciences and Humanities Research Council of Canada (SSHRC) and aims to study the figural graffiti from one of the most spectacular temple complexes of Egypt, Philae, in order to better understand the daily practice of the goddess’ worship.
The study’s first findings were published in Egyptian Archaeology.
Fascinatingly for a project where new technology has been vital, the work has been published in a periodical (Egyptian Archaeology) that is not available online. It is published by the Egypt Exploration Society (EES) which also produces the similarly titled “Journal of Egyptian Archaeology”.
You can purchase the relevant issue of “Egyptian Archaeology” here. The EES describes it as a “… full-colour magazine, reporting on current excavations, surveys and research in Egypt and Sudan, showcasing the work of the EES as well as of other missions and researchers.”
Here’s a citation for the article,
Figures that Matter: Graffiti of the Isis Temple at Philae by Roxanne Bélanger Sarrazin, Jitse Dijkstra, Nicholas Hedley and Sabrina Higgins. Egyptian Archaeology, Spring 2022, [issue no.] 60.
I received an April 5, 2023 announcement for the 2023 IEEE International Conference on Metrology for eXtended Reality, Artificial Intelligence, and Neural Engineering (IEEE MetroXRAINE 2023) via email. Understandably given that it’s an Institute of Electrical and Electronics Engineers (IEEE) conference, they’re looking for submissions focused on developing the technology,
Last days to submit your contribution to our Special Session on “eXtended Reality as a gateway to the Metaverse: Practices, Theories, Technologies and Applications” – IEEE International Conference on Metrology for eXtended Reality, Artificial Intelligence, and Neural Engineering (IEEE MetroXRAINE 2023) – October 25-27, 2023 – Milan – https://metroxraine.org/special-session-17.
I want to remind you that the deadline of April 7 [2023] [extended to April 14, 2023 as per April 11, 2023 notice received via email] is for the submission of a 1-2 page Abstract or a Graphical Abstract to show the idea you are proposing. You will have time to finalise your work by the deadline of May 15 [2023].
Please see the CfP below for details and forward it to colleagues who might be interested in contributing to this special session.
I’m looking forward to meeting you, virtually or in person, at IEEE MetroXRAINE 2023.
Best regards, Giuseppe Caggianese
Research Scientist National Research Council (CNR) [Italy] Institute for High-Performance Computing and Networking (ICAR) Via Pietro Castellino 111, 80131, Naples, Italy
Here are the specifics for the Special Session’s Call for Papers (from the April 5, 2023 email announcement),
Call for Papers – Special Session on: “EXTENDED REALITY AS A GATEWAY TO THE METAVERSE: PRACTICES, THEORIES, TECHNOLOGIES AND APPLICATIONS” https://metroxraine.org/special-session-17
2023 IEEE International Conference on Metrology for eXtended Reality, Artificial Intelligence, and Neural Engineering (IEEE MetroXRAINE 2023) https://metroxraine.org/
October 25-27, 2023 – Milan, Italy.
SPECIAL SESSION DESCRIPTION
————————-
The fast development of Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) solutions over the last few years is transforming how people interact, work, and communicate. The eXtended Reality (XR) term encloses all those immersive technologies that can shift the boundaries between digital and physical worlds to realize the metaverse. According to tech companies and venture capitalists, the metaverse will be a super-platform that convenes sub-platforms: social media, online video games, and ease-of-life apps, all accessible through the same digital space and sharing the same digital economy. Inside the metaverse, virtual worlds will allow avatars to carry out all human endeavours, including creation, display, entertainment, social activity, and trading. Thus, the metaverse will evolve how users interact with brands, intellectual properties, health services, cultural heritage, and each other on the Internet. A user could join friends to play a multiplayer game, watch a movie via a streaming service and then attend a university course, precisely the same as in the real world. The metaverse’s development will require new software architectures that will enable decentralized and collaborative virtual worlds. These self-organized virtual worlds will be permanent and will require maintenance operations. In addition, it will be necessary to design an efficient data management system and prevent privacy violations. Finally, the convergence of physical reality, virtually enhanced, and an always-on virtual space has highlighted the need to rethink the current paradigms for visualization, interaction, and sharing of digital information, moving toward more natural, intuitive, dynamically customizable, multimodal, and multi-user solutions. This special session aims to explore how the realization of the metaverse can transform certain application domains such as: (i) healthcare, in which metaverse solutions can, for instance, improve communication between patients and physicians; (ii) cultural heritage, with potentially more effective solutions for tourism guidance, site maintenance, and heritage object conservation; and (iii) industry, where they can enable data-driven decision making, smart maintenance, and overall asset optimisation.
The topics of interest include, but are not limited to, the following:
Hardware/Software Architectures for metaverse
Decentralized and Collaborative Architectures for metaverse
Interoperability for metaverse
Tools to help creators build the metaverse
Operations and Maintenance in metaverse
Data security and privacy mechanisms for metaverse
Cryptocurrency, token, NFT Solutions for metaverse
Fraud-Detection in metaverse
Cyber Security for metaverse
Data Analytics to Identify Malicious Behaviors in metaverse
Blockchain/AI technologies in metaverse
Emerging Technologies and Applications for metaverse
New models to evaluate the impact of the metaverse
Interactive Data Exploration and Presentation in metaverse
Human-Computer Interaction for metaverse
Human factors issues related to metaverse
Proof-of-Concept in Metaverse: Experimental Prototyping and Testbeds
IMPORTANT DATES
Abstract Submission Deadline: April 7, 2023 (extended). NOTE: 1-2 page abstract or a graphical abstract
Full Paper Submission Deadline: May 15, 2023 (extended)
Full Paper Acceptance Notification: June 15, 2023
Final Paper Submission Deadline: July 31, 2023
SUBMISSION AND DECISIONS ———————— Authors should prepare an Abstract (1 – 2 pages) that clearly indicates the originality of the contribution and the relevance of the work. The Abstract should include the title of the paper, names and affiliations of the authors, an abstract, keywords, an introduction describing the nature of the problem, a description of the contribution, the results achieved and their applicability.
When the first review process has been completed, authors receive a notification of either acceptance or rejection of the submission. If the abstract has been accepted, the authors can prepare a full paper. The format for the full paper is identical to the format for the abstract except for the number of pages: the full paper has a required minimum length of five (5) pages and a maximum of six (6) pages. Full papers will be reviewed by the Technical Program Committee. Authors of accepted full papers must submit the final paper version by the deadline, register for the workshop, and attend to present their papers. The maximum length for final papers is 6 pages.
Submissions must be written in English and prepared according to the IEEE Conference Proceedings template. LaTeX and Word templates and an Overleaf sample project can be found at: https://metroxraine.org/initial-author-instructions.
The papers must be submitted in PDF format electronically via the EDAS online submission and review system: https://edas.info/newPaper.php?c=30746. To submit abstracts or draft papers to the special session, please follow the submission instructions for regular sessions, but remember to specify the special session to which the paper is directed.
The special session organizers and other external reviewers will review all submissions.
CONFERENCE PROCEEDINGS ———————————– All contributions will be peer-reviewed, and acceptance will be based on quality, originality, and relevance. Accepted papers will be submitted for inclusion into IEEE Xplore Digital Library.
Extended versions of presented papers are eligible for post-publication; more information will be provided soon.
Ars Scientia is a University of British Columbia (UBC; Vancouver, Canada) partnership between its Stewart Blusson Quantum Matter Institute (Blusson QMI), its Morris & Helen Belkin Art Gallery (the Belkin), and its Department of Physics and Astronomy (UBC PHAS). (See my September 6, 2021 posting for more; scroll down to the Ars Scientia subhead.)
It’s been a while since I’ve seen any notices about Ars Scientia events but the Belkin Gallery announced three in a February 15, 2023 notice (received via email),
Ars Scientia Artist Talks
Room 311, Brimacombe Building, 2355 East Mall, UBC
Join us for a series of artist talks hosted at UBC’s Stewart Blusson Quantum Matter Institute (Blusson QMI). Our current cohort of Ars Scientia artists-in-residence have formed collaborative partnerships with scientists and engineers while embedded at Blusson QMI.
Tuesday, February 21 [2023] at 2 pm
JG Mair
Tuesday, March 28 [2023] at 1 pm
Scott Billings
Tuesday, April 2 [2023] at 2 pm
Timothy Taylor
Image (above): An Ars Scientia collaboration between visual artist JG Mair and physicist Alannah Hallas at Blusson QMI; the two worked together in Hallas’s lab to turn “insightful failures” of high-entropy oxides (a type of quantum material) into an artist’s medium – paint. Photo: Rachel Topham Photography.
Artist Talk with JG Mair, Tuesday, 21 February [2023] at 2 pm
Please join visual and media artist JG Mair for a discussion about his art practice and his experiences as a collaborative participant in the Ars Scientia residency. As part of his talk, Mair will present one of his major works, Chroma Chamber, a web-based new media art installation that investigates human expectations of vision and machine algorithms by programmatically collating real-time Google image results to surround the viewer with the distilled colour of the words they speak. Visit Blusson QMI for more details. [Note 1: on the Blusson QMI page, the talk is titled “Algorithmic allegories by JG Mair”; Note 2: that page also includes a map showing the Brimacombe Building location.]
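For readers who like to see the idea in code, here is a minimal, purely illustrative sketch of how a “distilled colour of a spoken word” pipeline might work. It is not Mair’s actual code; the speech-to-text and real-time Google image search steps are assumptions and are replaced here with stand-in images, so only the colour-averaging step is shown working.

```python
# Hypothetical sketch of the Chroma Chamber idea (not Mair's actual code):
# collect image results for a spoken word and "distill" them into one colour.
# The speech-to-text and image-search stages are assumed/stubbed out here.

from statistics import mean
from PIL import Image  # pip install Pillow


def distilled_colour(images: list[Image.Image]) -> tuple[int, int, int]:
    """Shrink each image to a single pixel, then average those pixels."""
    pixels = [img.convert("RGB").resize((1, 1)).getpixel((0, 0)) for img in images]
    return tuple(int(mean(channel)) for channel in zip(*pixels))


if __name__ == "__main__":
    # Stand-ins for downloaded image-search results for the spoken word "ocean":
    # three solid-colour images instead of real Google image hits.
    results = [Image.new("RGB", (64, 64), c)
               for c in [(0, 60, 200), (30, 90, 180), (10, 40, 220)]]
    r, g, b = distilled_colour(results)
    print(f"distilled colour: #{r:02x}{g:02x}{b:02x}")
```

Shrinking each image to a single pixel is just a quick way to approximate its average colour; the installation itself presumably does something more refined, and projects the result around the viewer rather than printing a hex value.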
JG Mair is a Vancouver-based multidisciplinary artist and media designer specializing in mixed media, web and audio. He has a BFA from the University of Victoria and a BEd from the University of British Columbia. Mair has been working in the areas of both traditional and digital contemporary art and as a sound designer for various game studios developing titles for publishers including Apple, Electronic Arts, Microsoft and Netflix. Mair has had exhibitions and residencies in Canada, USA, South Korea and Japan.
Scott Billings is a visual artist, industrial designer and engineer based in Vancouver. His sculptures and video installations have been described as existing somewhere between cinema and automata. Centering on issues of animality, mobility and spectatorship, Billings’s work examines the mimetic relationship between the physical apparatus and the virtual motion it delivers. In what ways does the apparatus itself reveal both the mechanisms of causality and its own dormant animal quality? Billings addresses this question under the pursuit of the technological conundrum and a preoccupation with precise geometry and logic. Billings holds an MFA from the University of British Columbia, a BFA from Emily Carr University and a BASc in Mechanical Engineering from the University of Waterloo. He teaches at UBC and Emily Carr as a sessional instructor. Billings is represented by Wil Aballe Art Projects.
Timothy Taylor is an Associate Professor and Graduate Advisor at the School of Creative Writing. He is also a bestselling and award-winning author of eight book-length works of fiction and nonfiction, a prolific journalist, and a creative nonfiction writer. In addition to his writing and teaching at UBC, Taylor travels widely, having in recent years spent time on assignment in China, Tibet, Japan, Dubai, Brazil, the Canadian Arctic and other places. He lives in Point Grey, Vancouver with his wife, his son, and a pair of Brittany Spaniels named Keaton and Murphy.
Hopefully, the talk is a little more accessible than its description.
I’ve started to think that paper books will be on an ‘endangered species’ list in the not-too-distant future. Now, it seems researchers at the University of Surrey (UK) may have staved off that scenario, according to an August 3, 2022 news item on ScienceDaily,
Augmented reality might allow printed books to make a comeback against the e-book trend, according to researchers from the University of Surrey.
Surrey has introduced the third generation (3G) version of its Next Generation Paper (NGP) project, allowing the reader to consume information on the printed paper and screen side by side.
Dr Radu Sporea, Senior lecturer at the Advanced Technology Institute (ATI), comments:
“The way we consume literature has changed over time with so many more options than just paper books. Multiple electronic solutions currently exist, including e-readers and smart devices, but no hybrid solution which is sustainable on a commercial scale.
“Augmented books, or a-books, can be the future of many book genres, from travel and tourism to education. This technology exists to assist the reader in a deeper understanding of the written topic and get more through digital means without ruining the experience of reading a paper book.”
Power efficiency and pre-printed conductive paper are some of the new features which allow Surrey’s augmented books to now be manufactured on a semi-industrial scale. With no wiring visible to the reader, Surrey’s augmented reality books allow users to trigger digital content with a simple gesture (such as a swipe of a finger or turn of a page), which will then be displayed on a nearby device.
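To make that trigger-and-display idea concrete, here is a small hypothetical sketch (not the Surrey team’s code) of how a companion app might react when the book’s printed sensors report a page being opened or a finger swipe. The event names, the page-to-media table, and the “nearby device” display calls are all illustrative assumptions.

```python
# Hypothetical sketch of the a-book trigger flow (not the Surrey NGP code):
# a printed-paper sensor reports which page is open or that a swipe occurred,
# and a companion app maps that event to digital content on a nearby screen.

from dataclasses import dataclass

# Illustrative mapping from printed pages to the digital content they trigger.
PAGE_MEDIA = {
    12: "video/market-tour.mp4",
    13: "audio/street-sounds.mp3",
}


@dataclass
class BookEvent:
    kind: str   # e.g. "page_open" or "swipe", as reported by the paper sensors
    page: int


def handle_event(event: BookEvent) -> None:
    """Route a sensor event from the printed book to the companion device."""
    media = PAGE_MEDIA.get(event.page)
    if media is None:
        return  # this page has no augmented content
    if event.kind == "page_open":
        print(f"page {event.page} opened -> playing {media} on the nearby device")
    elif event.kind == "swipe":
        print(f"swipe on page {event.page} -> showing extras for {media}")


if __name__ == "__main__":
    handle_event(BookEvent("page_open", 12))
    handle_event(BookEvent("swipe", 13))
```

In the actual third-generation prototype the open page is recognised automatically, as Bairaktaris notes in the quote that follows; the sketch only shows the mapping from a sensed page or gesture to the content it triggers.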
George Bairaktaris, Postgraduate researcher at the University of Surrey and part of the Next Generation Paper project team, said:
“The original research was carried out to enrich travel experiences by creating augmented travel guides. This upgraded 3G model allows for the possibility of using augmented books for different areas such as education. In addition, the new model disturbs the reader less by automatically recognising the open page and triggering the multimedia content.”
“What started as an augmented book project, evolved further into scalable user interfaces. The techniques and knowledge from the project led us into exploring organic materials and printing techniques to fabricate scalable sensors for interfaces beyond the a-book”.
…
Caption: Next Generation Paper book example. Credit: Courtesy of Advanced Technology Institute at the University of Surrey
Here’s a link to and a citation for the paper,
Augmented Books: Hybrid Electronics Bring Paper to Life by Georgios Bairaktaris, Brice Le Borgne, Vikram Turkani, Emily Corrigan-Kavanagh, David M. Frohlich, and Radu A. Sporea. IEEE Pervasive Computing (early access), PrePrints, pp. 1-8. DOI: 10.1109/MPRV.2022.3181440 Published: July 12, 2022