I received a May 31, 2023 ‘newsletter’ (via email) from Simon Fraser University’s (SFU) Metacreation Lab for Creative Artificial Intelligence and the first item celebrates some current and past work,
International Conference on New Interfaces for Musical Expression | NIME 2023 May 31 – June 2 | Mexico City, Mexico
We’re excited to be a part of NIME 2023, launching in Mexico City this week!
As part of the NIME Paper Sessions, some of the Metacreation Lab’s members and affiliates will be presenting a study based on case studies of musicians playing with virtual musical agents. Titled eTu{d,b}e, the paper was co-authored by Tommy Davis, Kasey LV Pocius, and Vincent Cusson, developers of the eTube instrument, along with music technology and interface researchers Marcelo Wanderley and Philippe Pasquier. Learn about the project and listen to sessions involving human and non-human musicians.
This research project involved experimenting with Spire Muse, a virtual performance agent co-developed by Metacreation Lab members. The paper introducing the system was awarded the best paper award at the 2021 International Conference on New Interfaces for Musical Expression (NIME).
Learn more about the NIME 2023 conference and program at the link below; the conference will also present a series of online music concerts later this week.
Coming up later this summer and also from the May 31, 2023 newsletter,
Evaluating Human-AI Interaction for MMM-C: a Creative AI System for Music Composition | IJCAI [2023 International Joint Conference on Artificial Intelligence] Preview
For those following the impact of AI on music composition and production, we would like to share a sneak peek of a review of user experiences using an experimental AI-composition tool [Multi-Track Music Machine (MMM)] integrated into the Steinberg Cubase digital audio workstation. Conducted in partnership with Steinberg, this study will be presented at the 2023 International Joint Conference on Artificial Intelligence (IJCAI2023), as part of the Arts and Creativity track of the conference. This year’s IJCAI conference is taking place in Macao from August 19 to 25, 2023.
…
The conference is being held in Macao (or Macau), which is officially (according to its Wikipedia entry) the Macao Special Administrative Region of the People’s Republic of China (MSAR). It has a longstanding reputation as an international gambling and party mecca comparable to Las Vegas.
This is a cleaned up version of the Ada Lovelace story,
A pioneer in the field of computing, she has a remarkable life story as noted in this October 13, 2014 posting, and explored further in this October 13, 2015 posting (Ada Lovelace “… manipulative, aggressive, a drug addict …” and a genius but was she likable?) published to honour the 200th anniversary of her birth.
In a December 8, 2022 essay for The Conversation, Corinna Schlombs focuses on skills other than mathematics that influenced her thinking about computers (Note: Links have been removed),
…
Growing up in a privileged aristocratic family, Lovelace was educated by home tutors, as was common for girls like her. She received lessons in French and Italian, music and in suitable handicrafts such as embroidery. Less common for a girl in her time, she also studied math. Lovelace continued to work with math tutors into her adult life, and she eventually corresponded with mathematician and logician Augustus De Morgan at London University about symbolic logic.
Lovelace drew on all of these lessons when she wrote her computer program – in reality, it was a set of instructions for a mechanical calculator that had been built only in parts.
The computer in question was the Analytical Engine designed by mathematician, philosopher and inventor Charles Babbage. Lovelace had met Babbage when she was introduced to London society. The two related to each other over their shared love for mathematics and fascination for mechanical calculation. By the early 1840s, Babbage had won and lost government funding for a mathematical calculator, fallen out with the skilled craftsman building the precision parts for his machine, and was close to giving up on his project. At this point, Lovelace stepped in as an advocate.
To make Babbage’s calculator known to a British audience, Lovelace proposed to translate into English an article that described the Analytical Engine. The article was written in French by the Italian mathematician Luigi Menabrea and published in a Swiss journal. Scholars believe that Babbage encouraged her to add notes of her own.
In her notes, which ended up twice as long as the original article, Lovelace drew on different areas of her education. Lovelace began by describing how to code instructions onto cards with punched holes, like those used for the Jacquard weaving loom, a device patented in 1804 that used punch cards to automate weaving patterns in fabric.
Having learned embroidery herself, Lovelace was familiar with the repetitive patterns used for handicrafts. Similarly repetitive steps were needed for mathematical calculations. To avoid duplicating cards for repetitive steps, Lovelace used loops, nested loops and conditional testing in her program instructions.
…
Finally, Lovelace recognized that the numbers manipulated by the Analytical Engine could be seen as other types of symbols, such as musical notes. An accomplished singer and pianist, Lovelace was familiar with musical notation symbols representing aspects of musical performance such as pitch and duration, and she had manipulated logical symbols in her correspondence with De Morgan. It was not a large step for her to realize that the Analytical Engine could process symbols — not just crunch numbers — and even compose music.
…
… Lovelace applied knowledge from what we today think of as disparate fields in the sciences, arts and the humanities. A well-rounded thinker, she created solutions that were well ahead of her time.
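Lovelace’s card-saving techniques, as Schlombs describes them, map directly onto modern control flow: a loop replaces duplicated instruction cards, a nested loop builds a repeating two-dimensional motif, and a conditional test varies the pattern. A minimal Python sketch of that idea (the function names and checkerboard example are my own illustration, not a transcription of her Note G):

```python
# Lovelace's insight, restated: describe a repetition once instead of
# writing out the same operation cards again and again.

def repeated_sum(terms: int) -> int:
    """Sum 1 + 2 + ... + terms with one loop instead of `terms` separate cards."""
    total = 0
    for k in range(1, terms + 1):  # a loop: one set of instructions, reused
        total += k
    return total

def pattern_table(rows: int, cols: int) -> list[list[int]]:
    """Nested loops build a repetitive grid, like an embroidered or woven motif."""
    table = []
    for r in range(rows):            # outer loop
        row = []
        for c in range(cols):        # inner (nested) loop
            # conditional test: alternate values like a checkerboard weave
            row.append(1 if (r + c) % 2 == 0 else 0)
        table.append(row)
    return table

print(repeated_sum(10))      # 55
print(pattern_table(2, 4))   # [[1, 0, 1, 0], [0, 1, 0, 1]]
```

The point is not the arithmetic but the economy: loops, nested loops, and conditional tests let a short set of instructions stand in for an arbitrarily long sequence of steps, which is exactly what Lovelace needed to avoid duplicating punch cards.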
For more about Jacquard looms and computing, there’s Sarah Laskow’s September 16, 2014 article for The Atlantic, which includes some interesting details (Note: Links have been removed),
…, one of the very first machines that could run something like what we now call a “program” was used to make fabric. This machine—a loom—could process so much information that the fabric it produced could display pictures detailed enough that they might be mistaken for engravings.
Like, for instance, the image above [as of March 3, 2023, the image is not there]: a woven piece of fabric that depicts Joseph-Marie Jacquard, the inventor of the weaving technology that made its creation possible. As James Essinger recounts in Jacquard’s Web, in the early 1840s Charles Babbage kept a copy at home and would ask guests to guess how it was made. They were usually wrong.
… At its simplest, weaving means taking a series of parallel strings (the warp), lifting a selection of them up, and running another string (the weft) between the two layers, creating a crosshatch. …
The Jacquard loom, though, could process information about which of those strings should be lifted up and in what order. That information was stored in punch cards—often 2,000 or more strung together. The holes in the punch cards would let through only a selection of the rods that lifted the warp strings. In other words, the machine could replace the role of a person manually selecting which strings would appear on top. Once the punch cards were created, Jacquard looms could quickly make pictures with subtle curves and details that earlier would have taken months to complete. …
… As Ada Lovelace wrote him: “We may say most aptly that the Analytical Engine weaves algebraical patterns just as the Jacquard-loom weaves flowers and leaves.”
For anyone who’s very curious about Jacquard looms, there’s a June 25, 2019 Objects and Stories article (Programming patterns: the story of the Jacquard loom) on the UK’s Science and Industry Museum (in Manchester) website.
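The card mechanism Laskow describes amounts to a stored program: each card is a row of holes, and each hole decides whether one warp thread lifts for that pass of the weft. A hypothetical Python sketch of the idea (the card data is invented purely for illustration):

```python
# Each punch card selects which warp threads lift for one pass of the weft.
# A hole (1) lets the rod act and the thread rises; no hole (0) keeps it down.

cards = [            # a tiny "program" of three cards for four warp threads
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 1, 0, 0],
]

def weave(cards):
    """Return, for each pass, the indices of the warp threads that are raised."""
    return [[i for i, hole in enumerate(card) if hole] for card in cards]

print(weave(cards))  # [[0, 2], [1, 3], [0, 1]]
```

Chaining thousands of such cards is what let the loom "run" a long, intricate picture automatically, and it is the same separation of instructions from mechanism that Lovelace saw in the Analytical Engine.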
It seems to be GLUBS time again (GLUBS being the Global Library of Underwater Biological Sounds). In fact it’s an altogether acoustical time for the ocean. First, a mystery fish,
That sounds a bit like a trumpet to me. (I last wrote about GLUBS in a March 4, 2022 posting where it was included under the ‘Marine sound libraries’ subhead.)
The latest about GLUBS and aquatic sounds can be found in an April 26, 2023 Rockefeller University news release on EurekAlert, Note 1: I don’t usually include the heads but I quite like this one and even stole part of it for this posting; Note 2: There probably should have been more than one news release; Note 3: For anyone who doesn’t have time to read the entire news release, I have a link immediately following the news release to an informative and brief article about the work,
Do fish bay at the moon? Can their odd songs identify Hawaiian mystery fish? Eavesdropping scientists progress in recording, understanding ocean soundscapes
Using hydrophones to eavesdrop on a reef off the coast of Goa, India, researchers have helped advance a new low-cost way to monitor changes in the world’s murky marine environments.
Reporting their results in the Journal of the Acoustical Society of America (JASA), the scientists recorded the duration and timing of mating and feeding sounds – songs, croaks, trumpets and drums – of 21 of the world’s noise-making ocean species.
With artificial intelligence and other pioneering techniques to discern the calls of marine life, they recorded and identified
snapping shrimp (audio: https://bit.ly/3mTQ0gd), including commercially-valuable tiger prawns.
Some species within the underwater community work the early shift, raising a ruckus from 3 am to 1:45 pm; others work the late shift, raising a ruckus from 2 pm to 2:45 am; the plankton predators, meanwhile, were “strongly influenced by the moon.”
Also registered: the degree of difference in the abundance of marine life before and after a monsoon.
The paper concludes that hydrophones are a powerful tool and “overall classification performance (89%) is helpful in the real-time monitoring of the fish stocks in the ecosystem.”
The team, including Bishwajit Chakraborty, a leader of the International Quiet Ocean Experiment (IQOE), benefitted from archived recordings of marine species against which they could match what they heard, including:
Snapping shrimp (audio: https://bit.ly/41NZWH2), whose sounds baby oysters reportedly like to follow
Also captured was a “buzz” call of unknown origin (https://bit.ly/3GZdRSI), one of the oceans’ countless marine life mysteries.
With a contribution to the International Quiet Ocean Experiment, the research will be discussed at an IQOE meeting in Woods Hole, MA, USA, 26-27 April [2023].
Advancing the Global Library of Underwater Biological Sounds (GLUBS)
That event will be followed April 28-29 by a meeting of partners in the new Global Library of Underwater Biological Sounds (GLUBS), a major legacy of the decade-long IQOE, ending in 2025.
GLUBS, conceived in late 2021 and currently under development, is designed as an open-access online platform to help collate global information and to broaden and standardize scientific and community knowledge of underwater soundscapes and their contributing sources.
It will help build short snippets and snapshots (minutes-, hours-, or days-long recordings) of biological, anthropogenic, and geophysical marine sounds into full-scale, tell-tale underwater baseline soundscapes.
Especially notable among many applications of insights from GLUBS information: the ability to detect in hard-to-see underwater environments and habitats how the distribution and behavior of marine life responds to increasing pressure from climate change, fishing, resource development, plastic, anthropogenic noise and other pollutants.
“Passive acoustic monitoring (PAM) is an effective technique for sampling aquatic systems that is particularly useful in deep, dark, turbid, and rapidly changing or remote locations,” says Miles Parsons of the Australian Institute of Marine Science and a leader of GLUBS.
He and colleagues outline two primary targets:
Produce and maintain a list of all aquatic species confirmed or anticipated to produce sound underwater;
Promote the reporting of sounds from unknown sources
Odd songs of Hawaii’s mystery fish
In this latter pursuit, GLUBS will also help reveal species unknown to science as yet and contribute to their eventual identification.
For example, newly added to the growing global collection of marine sounds are recent recordings from Hawaii, featuring the baffling
now part of an entire YouTube channel (https://bit.ly/3H5Ly54) dedicated to marine life sounds in Hawaii and elsewhere (e.g., this “complete and total mystery from the Florida Keys”: https://bit.ly/41w1Xbc). (Annie Innes-Gold, Hawai’i Institute of Marine Biology; processed by Jill Munger, Conservation Metrics, Inc.)
Says Dr. Parsons: “Unidentified sounds can provide valuable information on the richness of the soundscape, the acoustic communities that contribute to it and behavioral interactions among acoustic groups. However, unknown, cryptic and rare sounds are rarely target signals for research and monitoring projects and are, therefore, largely unreported.”
The many uses of underwater sound
Of the roughly 250,000 known marine species, scientists think all fully-aquatic marine mammals (~146, including sub-species) emit sounds, along with at least 100 invertebrates, 1,000 of the world’s ~35,000 known fish species, and likely many thousands more.
GLUBS aims to help delineate essential fish habitat and estimate biomass of a spawning aggregation of a commercially or recreationally important soniferous species.
In one scenario of its many uses, a one-year, calibrated recording can provide a proxy for the timing, location and, under certain circumstances, numbers of ‘calling’ fishes, and how these change throughout a spawning season.
It will also help evaluate the degradation and recovery of a coral reef.
GLUBS researchers envision, for example, collecting recordings from a coral reef that experienced a cyclone or other extreme weather event, followed by widespread bleaching. Throughout its restoration, GLUBS audio data would be matched with and augment a visual census of the fish assemblage at multiple timepoints.
Oil and gas, wind power and other offshore industries will also benefit from GLUBS’ timely information on the possible harms or benefits of their activities.
Other IQOE legacies include:
Manta (bitbucket.org/CLO-BRP/manta-wiki/wiki/Home), a mechanism created by world experts from academia, industry, and government to help standardize ocean sound recording data, facilitating its comparability, pooling and visualization.
OPUS, an Open Portal to Underwater Sound being tested at Alfred Wegener Institute in Bremerhaven, Germany to promote the use of acoustic data collected worldwide, providing easy access to MANTA-processed data, and
The first comprehensive database and map of the world’s 200+ known hydrophones recording for ecological purposes
Marine sounds and COVID-19
The IQOE’s early ambition of humanity’s maritime noise being minimized for a day or week was unexpectedly met in spades when the COVID-19 pandemic began.
Virus control measures led to “sudden and sometimes dramatic reductions in human activity in sectors such as transport, industry, energy, tourism, and construction,” with some of the greatest reductions from March to June 2020 – a drop of up to 13% in container ship traffic and up to 42% in passenger ships.
Other IQOE accomplishments include achieving recognition of ocean sound as an Essential Ocean Variable (EOV) within the Global Ocean Observing System, underlining its helpfulness in monitoring
climate change (the extent and breakup of sea ice; the frequency and intensity of wind, waves and rain)
ocean health (biodiversity assessments: monitoring the distribution and abundance of sound-producing species)
impacts of human activities on wildlife, and
nuclear explosions, foreign/illegal/threatening vessels, human activities in protected areas, and underwater earthquakes that can generate tsunamis
The Partnership for Observation of the Global Ocean (POGO) funded an IQOE Working Group in 2016, which quickly identified the lack of ocean sound as a variable measured by ocean observing systems. This group developed specifications for an Ocean Sound Essential Ocean Variable (EOV) by 2018, which was approved by the Global Ocean Observing System in 2021. IQOE has since developed the Ocean Sound EOV Implementation Plan, reviewed in 2022 and ready for public debut at IQOE’s meeting April 26.
One of IQOE’s originators, Jesse Ausubel of The Rockefeller University’s Programme for the Human Environment, says the programme has drawn attention to the absence of publicly available time series of sound on ecologically important frequencies throughout the global ocean.
“We need to listen more in the blue symphony halls. Animal sounds are behavior, and we need to record and understand the sounds, if we want to know the status of ocean life,” he says.
The program “has provided a platform for the international passive acoustics community to grow stronger and advocate for inclusion of acoustic measurements in national, regional, and global ocean observing systems,” says Prof. Peter Tyack of the University of St Andrews, who, with Steven Simpson, guides the IQOE International Scientific Steering Committee.
“The ocean acoustics and bioacoustics communities had no experience in working together globally, and coverage is certainly not global; there are many gaps. IQOE has begun to help these communities work together globally, and there is still progress to be made in networking and in expanding the deployment of hydrophones,” adds Prof. Ausubel.
A description of the project’s history and evaluation to date is available at https://bit.ly/3H7FCbN.
Encouraging greater worldwide use of hydrophones
According to Dr. Parsons, “hydrophones are now being deployed in more locations, more often, by more people, than ever before.”
To celebrate that, and to mark World Oceans Day, June 8 [2023], GLUBS recently put out a call to hydrophone operators to share marine life recordings made from 7 to 9 June, so far receiving interest from 124 hydrophone operators in 62 organizations from 29 countries and counting. The hydrophones will be retrieved over the following months with the full dataset expected sometime in 2024.
They also plan to make World Oceans Passive Acoustic Monitoring (WOPAM) Day an annual event – a global collaborative study of aquatic soundscapes, salt, brackish or freshwater – the marine world’s answer to the U.S. Audubon Society’s 123-year-old Christmas Bird Count.
Interested researchers with hydrophones [emphasis mine] already planned [sic] to be in the water on June 8 [2023] are invited to contact Miles Parsons (m.parsons@aims.gov.au) or Steve Simpson (s.simpson@bristol.ac.uk).
Becky Ferreira has written an April 26, 2023 article for Motherboard that provides more insight into the work being done offshore in Goa and elsewhere,
…
To better understand the rich reef ecosystems of Goa, a team of researchers at the Indian Council of Scientific and Industrial Research’s National Institute of Oceanography (CSIR-NIO) placed a hydrophone near Grande Island at a depth of about 65 feet. Over the course of several days, the instrument captured hundreds of recordings of the choruses of “soniferous” (sound-making) fish, the high-frequency noises of shrimp, and the rumblings of boats passing near the area.
…
“Our research, for the longest time, predominantly involved active acoustics systems in understanding habitats (bottom roughness, etc., using multibeam sonar),” said Bishwajit Chakraborty, a marine scientist at CSIR-NIO who co-authored the study, in an email to Motherboard. “By using active sonar systems, we add sound signals to water media which severely affects marine life.”
…
Here’s a link to and a citation for the paper mentioned at the beginning of the news release,
Perimeter Institute for Theoretical Physics (located in Waterloo, Ontario, Canada) is presenting one of its public lectures according to a March 31, 2023 PI announcement (received via email),
The Jazz of Physics | Friday, April 14 [2023] at 7:00 pm ET | Stephon Alexander, Brown University
Take a musical journey of the mind and the cosmos with scientist and musician Stephon Alexander. A professor of physics at Brown University, Alexander began his journey to science in high school, where a teacher introduced him to the magic of jazz, fostering a connection between John Coltrane and Albert Einstein.
In his April 14 [2023] lecture, Alexander will demonstrate how the search for answers to deep cosmological puzzles has parallels to jazz improvisation. He will also explore new ways that music, particularly jazz, mirrors concepts in modern physics such as quantum mechanics, general relativity, and the early universe.
The Black Hole Bistro will not be available for dinner service the evening of the event.
Don’t forget to try to sign into your PI account before Monday morning, so you are ready when tickets go on sale.
If you didn’t get tickets for the lecture, not to worry – you can always catch the livestream on Inside the Perimeter or watch it on YouTube after the fact.
I checked and, at this point, you have to go on a waiting list for tickets. Here’s more about the process and your other options, from the Jazz of Physics event page,
Waiting Line On the night of the lecture, there will be a waiting line at Perimeter for last minute cancelled tickets. Come to Perimeter after 6:00 PM and pick up a waiting line chit from the ticket table. While you wait, participate in pre-lecture activities. An announcement will be made in the Atrium at 6:50 PM if theatre seats are available. Note: You must arrive in person to be part of the waiting line, and be in the Atrium when the announcement is made.
No Disappointments Everyone who comes to Perimeter will be able to enjoy this lecture. If you do not manage to obtain a theatre ticket, you can join our waiting line and watch live from the quiet of the Time Room.
Live Webcast
All of our lectures are streamed live. You can watch the live stream of this lecture here [not yet active; check on day of event], or watch the recordings at your leisure on our YouTube Channel.
Stephon Alexander has his own website here where you’ll find (amongst other things like his TEDx talk and various interviews; he doesn’t seem to have updated the content since 2022) his 2021 book “Fear of a Black Universe: An Outsider’s Guide to the Future of Physics.” You can see what Kirkus Reviews had to say about the book here.
As of December 30, 2022, Canadian copyright (one of the three elements of intellectual property; the other two: patents and trademarks) will be extended for another 20 years.
Mike Masnick in his November 29, 2022 posting on Techdirt explains why this is contrary to the intentions for establishing copyright in the first place, Note: Links have been removed,
… it cannot make sense to extend copyright terms retroactively. The entire point of copyright law is to provide a limited monopoly on making copies of the work as an incentive to get the work produced. Assuming the work was produced, that says that the bargain that was struck was clearly enough of an incentive for the creator. They were told they’d receive that period of exclusivity and thus they created the work.
Going back and retroactively extending copyright then serves no purpose. Creators need no incentive for works already created. The only thing it does is steal from the public. That’s because the “deal” set up by governments creating copyright terms is between the public (who is temporarily stripped of their right to share knowledge freely) and the creator. But if we extend copyright term retroactively, the public then has their end of the bargain (“you will be free to share these works freely after such-and-such a date”) changed, with no recourse or compensation.
…
Canada has quietly done it: extending copyrights on literary, dramatic or musical works and engravings from life of the author plus 50 years to life of the author plus 70 years. [emphasis mine]
…
Masnick pointed to a November 23, 2022 posting by Andrea on the Internet Archive Canada blog for how this will affect the Canadian public,
… we now know that this date has been fixed as December 30, 2022, meaning that no new works will enter the Canadian public domain for the next 20 years.
A whole generation of creative works will remain under copyright. This might seem like a win for the estates of popular, internationally known authors, but what about more obscure Canadian works and creators? With circulation over time often being the indicator of ‘value’, many 20th century works are being deselected from physical library collections. …
Edward A. McCourt (1907-1972) is an example of just one of these Canadian creators. Raised in Alberta and a graduate of the University of Alberta, Edward went on to be a Rhodes Scholar in 1932. In 1980, Winnifred Bogaards wrote that:
“[H]e recorded over a period of thirty years his particular vision of the prairies, the region of Canada which had irrevocably shaped his own life. In that time he published five novels and forty-three short stories set (with some exceptions among the earliest stories) in Western Canada, three juvenile works based on the Riel Rebellion, a travel book on Saskatchewan, several radio plays adapted from his western stories, The Canadian West in Fiction (the first critical study of the literature of the prairies), and a biography of the 19th century English soldier and adventurer, Sir William F. Butler… “
In Bogaards’ analysis of his work, “Edward McCourt: A Reassessment,” published in the journal Studies in Canadian Literature, she notes that while McCourt has suffered in obscurity, he is often cited along with his contemporaries Hugh MacLennan, Robertson Davies and Irving Layton, all Canadian literary stars. Incidentally, we will also wait an additional 20 years for their works to enter the public domain. The work of Rebecca Giblin, Jacob Flynn, and Francois Petitjean, looking at ‘What Happens When Books Enter the Public Domain?’ is relevant here. Their study shows concretely and empirically that extending copyright has no benefit to the public at all, and only benefits a very few wealthy, well known estates and companies. This term extension will not encourage the publishers of McCourt’s works to invest in making his writing available to a new generation of readers.
…
This 20 year extension can trace its roots to the trade agreement between the US, Mexico, and Canada (USMCA) that replaced the previous North American Free Trade Agreement (NAFTA), as of July 1, 2020. This is made clear in Michael Geist’s May 2, 2022 Law Bytes podcast where he discusses with Lucie Guibault the (then proposed) Canadian extension in the context of international standards,
…
Lucie Guibault is an internationally renowned expert on international copyright law, a Professor of Law and Associate Dean at Schulich School of Law at Dalhousie University, and the Associate Director of the school’s Law and Technology Institute.
…
It’s always good to get some context and in that spirit, here’s more from Michael Geist’s May 2, 2022 Law Bytes podcast,
… Despite recommendations from its own copyright review, students, teachers, librarians, and copyright experts to include a registration requirement [emphasis mine] for the additional 20 years of protection, the government chose to extend term without including protection to mitigate against the harms.
…
Geist’s podcast discussion with Guibault, where she explains what a ‘registration requirement’ is and how it would work plus more, runs for almost 27 mins. (May 2, 2022 Law Bytes podcast). One final comment, visual artists and musicians are also affected by copyright rules.
I look forward to 2023 and hope it will be as stimulating as 2022 proved to be. Here’s an overview of the year that was on this blog:
Sounds of science
It seems 2022 was the year that science discovered the importance of sound and the possibilities of data sonification. Neither is new but this year seemed to signal a surge of interest or maybe I just happened to stumble onto more of the stories than usual.
This is not an exhaustive list, you can check out my ‘Music’ category for more here. I have tried to include audio files with the postings but it all depends on how accessible the researchers have made them.
Aliens on earth: machinic biology and/or biological machinery?
When I first started following stories in 2008 (?) about technology or machinery being integrated with the human body, it was mostly about assistive technologies such as neuroprosthetics. You’ll find most of this year’s material in the ‘Human Enhancement’ category or you can search the tag ‘machine/flesh’.
However, the line between biology and machine became a bit more blurry for me this year. You can see what’s happening in the titles listed below (you may recognize the xenobot story; there was an earlier version of xenobots featured here in 2021):
Are the aliens going to come from outer space or are we becoming the aliens?
Brains (biological and otherwise), AI, & our latest age of anxiety
As we integrate machines into our bodies, including our brains, there are new issues to consider:
Going blind when your neural implant company flirts with bankruptcy (long read) April 5, 2022 posting
US National Academies Sept. 22-23, 2022 workshop on techno, legal & ethical issues of brain-machine interfaces (BMIs) September 21, 2022 posting
I hope the US National Academies issues a report on their “Brain-Machine and Related Neural Interface Technologies: Scientific, Technical, Ethical, and Regulatory Issues – A Workshop” for 2023.
Meanwhile the race to create brainlike computers continues and I have a number of posts which can be found under the category of ‘neuromorphic engineering’ or you can use these search terms ‘brainlike computing’ and ‘memristors’.
On the artificial intelligence (AI) side of things, I finally broke down and added an ‘artificial intelligence (AI)’ category to this blog sometime between May and August 2021. Previously, I had used the ‘robots’ category as a catchall. There are other stories but these ones feature public engagement and policy (btw, it’s a Canadian Science Policy Centre event), respectively,
“How AI-designed fiction reading lists and self-publishing help nurture far-right and neo-Nazi novelists” December 6, 2022 posting
While there have been issues over AI, the arts, and creativity previously, this year they sprang into high relief. The list starts with my two-part review of the Vancouver Art Gallery’s AI show; I share most of my concerns in part two. The third post covers intellectual property issues (mostly visual arts but literary arts get a nod too). The fourth post upends the discussion,
“Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (1 of 2): The Objects” July 28, 2022 posting
“Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (2 of 2): Meditations” July 28, 2022 posting
“AI (artificial intelligence) and art ethics: a debate + a Botto (AI artist) October 2022 exhibition in the UK” October 24, 2022 posting
Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms? August 30, 2022 posting
Interestingly, most of the concerns seem to be coming from the visual and literary arts communities; I haven’t come across major concerns from the music community. (The curious can check out Vancouver’s Metacreation Lab for Artificial Intelligence [located on a Simon Fraser University campus]. I haven’t seen any cautionary or warning essays there; it’s run by an AI and creativity enthusiast [professor Philippe Pasquier]. The dominant but not sole focus is art, i.e., music and AI.)
There is a ‘new kid on the block’ which has been attracting a lot of attention this month. If you’re curious about the latest and greatest AI anxiety,
Peter Csathy’s December 21, 2022 Yahoo News article (originally published in The WRAP) makes this proclamation in the headline “Chat GPT Proves That AI Could Be a Major Threat to Hollywood Creatives – and Not Just Below the Line | PRO Insight”
Mouhamad Rachini’s December 15, 2022 article for the Canadian Broadcasting Corporation’s (CBC) online news offers a more generalized overview of the ‘new kid’ along with an embedded CBC Radio file which runs approximately 19 mins. 30 secs. It’s titled “ChatGPT a ‘landmark event’ for AI, but what does it mean for the future of human labour and disinformation?” The chatbot’s developer, OpenAI, has been mentioned here many times, including in the previously listed July 28, 2022 posting (part two of the VAG review) and the October 24, 2022 posting.
Opposite world (quantum physics in Canada)
Quantum computing made more of an impact here (my blog) than usual. It started in 2021 with the announcement of a National Quantum Strategy in the Canadian federal government budget for that year and gained some momentum in 2022:
“Quantum Mechanics & Gravity conference (August 15 – 19, 2022) launches Vancouver (Canada)-based Quantum Gravity Institute and more” July 26, 2022 posting Note: This turned into one of my ‘in depth’ pieces where I comment on the ‘Canadian quantum scene’ and highlight the appointment of an expert panel for the Council of Canada Academies’ report on Quantum Technologies.
“Bank of Canada and Multiverse Computing model complex networks & cryptocurrencies with quantum computing” July 25, 2022 posting
There’s a Vancouver area company, General Fusion, highlighted in both postings and the October posting includes an embedded video of Canadian-born rapper Baba Brinkman’s “You Must LENR” [Low Energy Nuclear Reactions, sometimes also Lattice Enabled Nanoscale Reactions, Cold Fusion, or CANR (Chemically Assisted Nuclear Reactions)].
BTW, fusion energy can generate temperatures up to 150 million degrees Celsius.
Ukraine, science, war, and unintended consequences
These are the unintended consequences (from Rachel Kyte’s [Dean of The Fletcher School, Tufts University] December 26, 2022 essay on The Conversation [h/t December 27, 2022 news item on phys.org]), Note: Links have been removed,
…
Russian President Vladimir Putin’s war on Ukraine has reverberated through Europe and spread to other countries that have long been dependent on the region for natural gas. But while oil-producing countries and gas lobbyists are arguing for more drilling, global energy investments reflect a quickening transition to cleaner energy. [emphasis mine]
Call it the Putin effect – Russia’s war is speeding up the global shift away from fossil fuels.
In December [2022?], the International Energy Agency [IEA] published two important reports that point to the future of renewable energy.
First, the IEA revised its projection of renewable energy growth upward by 30%. It now expects the world to install as much solar and wind power in the next five years as it installed in the past 50 years.
The second report showed that energy use is becoming more efficient globally, with efficiency increasing by about 2% per year. As energy analyst Kingsmill Bond at the energy research group RMI noted, the two reports together suggest that fossil fuel demand may have peaked. While some low-income countries have been eager for deals to tap their fossil fuel resources, the IEA warns that new fossil fuel production risks becoming stranded, or uneconomic, in the next 20 years.
…
Kyte’s essay is not all ‘sweetness and light’ but it does provide a little optimism.
Kudos, nanotechnology, culture (pop & otherwise), fun, and a farewell in 2022
Sometimes I like to know where the money comes from and I was delighted to learn of the Ărramăt Project funded through the federal government’s New Frontiers in Research Fund (NFRF). Here’s more about the Ărramăt Project from the February 14, 2022 posting,
…
“The Ărramăt Project is about respecting the inherent dignity and interconnectedness of peoples and Mother Earth, life and livelihood, identity and expression, biodiversity and sustainability, and stewardship and well-being. Arramăt is a word from the Tamasheq language spoken by the Tuareg people of the Sahel and Sahara regions which reflects this holistic worldview.” (Mariam Wallet Aboubakrine)
Over 150 Indigenous organizations, universities, and other partners will work together to highlight the complex problems of biodiversity loss and its implications for health and well-being. The project Team will take a broad approach and be inclusive of many different worldviews and methods for research (i.e., intersectionality, interdisciplinary, transdisciplinary). Activities will occur in 70 different kinds of ecosystems that are also spiritually, culturally, and economically important to Indigenous Peoples.
The project is led by Indigenous scholars and activists …
Kudos to the federal government and all those involved in the Salmon science camps, the Ărramăt Project, and other NFRF projects.
There are many other nanotechnology posts here but this appeals to my need for something lighter at this point,
“Say goodbye to crunchy (ice crystal-laden) in ice cream thanks to cellulose nanocrystals (CNC)” August 22, 2022 posting
The following posts tend to be culture-related, high and/or low but always with a science/nanotechnology edge,
“When poetry feels like colour, posture or birdsong plus some particle fiction” July 13, 2022 posting
“STEM (science, technology, engineering and math) brings life to the global hit television series “The Walking Dead” and a Canadian AI initiative for women and diversity” July 12, 2022 posting
Sadly, it looks like 2022 is the last year that Ada Lovelace Day is to be celebrated.
… this year’s Ada Lovelace Day is the final such event due to lack of financial backing. Suw Charman-Anderson told the BBC [British Broadcasting Corporation] the reason it was now coming to an end was:
A few things didn’t fit under the previous headings but stood out for me this year. Science podcasts, which were a big feature in 2021, also proliferated in 2022. I think they might have peaked and now (in 2023) we’ll see what survives.
Nanotechnology, the main subject on this blog, continues to be investigated and increasingly integrated into products. You can search the ‘nanotechnology’ category here for posts of interest, something I just tried. It surprises even me (I should know better) how broadly nanotechnology is researched and applied.
If you want a nice tidy list, Hamish Johnston in a December 29, 2022 posting on the Physics World Materials blog has this “Materials and nanotechnology: our favourite research in 2022,” Note: Links have been removed,
“Inherited nanobionics” makes its debut
The integration of nanomaterials with living organisms is a hot topic, which is why this research on “inherited nanobionics” is on our list. Ardemis Boghossian at EPFL [École polytechnique fédérale de Lausanne] in Switzerland and colleagues have shown that certain bacteria will take up single-walled carbon nanotubes (SWCNTs). What is more, when the bacteria cells split, the SWCNTs are distributed amongst the daughter cells. The team also found that bacteria containing SWCNTs produce significantly more electricity when illuminated with light than do bacteria without nanotubes. As a result, the technique could be used to grow living solar cells, which, as well as generating clean energy, also have a negative carbon footprint when it comes to manufacturing.
…
Getting back to Canada, I’m finding Saskatchewan featured more prominently here. They do a good job of promoting their science, especially the folks at the Canadian Light Source (CLS), Canada’s synchrotron, in Saskatoon. Canadian live science outreach events seem to be coming back (slowly). Cautious organizers (who have a few dollars to spare) are also enthusiastic about hybrid events which combine online and live outreach.
After what seems like a long pause, I’m stumbling across more international news, e.g. “Nigeria and its nanotechnology research” published December 19, 2022 and “China and nanotechnology” published September 6, 2022. I think there’s also an Iran piece here somewhere.
With that …
Making resolutions in the dark
Hopefully this year I will catch up with the Council of Canadian Academies (CCA) output and finally review a few of their 2021 reports, such as Leaps and Boundaries, a report on artificial intelligence applied to science inquiry, and, perhaps, Powering Discovery, a report on research funding and the Natural Sciences and Engineering Research Council of Canada.
Given what appears to be a renewed campaign to have germline editing (gene editing which affects all of your descendants) approved in Canada, I might even reach back to a late 2020 CCA report, Research to Reality: somatic gene and engineered cell therapies. It’s not the same as germline editing, but gene editing exists on a continuum.
For anyone who wants to see the CCA reports for themselves, they can be found here (both in progress and completed).
I’m also going to be paying more attention to how public relations and special interests influence what science is covered and how it’s covered. In doing this 2022 roundup, I noticed that I featured an overview of fusion energy not long before the breakthrough. Indirect influence on this blog?
My post was precipitated by an article by Alex Pasternack in Fast Company. I’m wondering what precipitated Pasternack’s interest in fusion energy, since his self-description on the Huffington Post website states this: “… focus on the intersections of science, technology, media, politics, and culture. My writing about those and other topics—transportation, design, media, architecture, environment, psychology, art, music … .”
He might simply have received a press release that stimulated his imagination and/or been approached by a communications specialist or publicist with an idea. There’s a reason why there are so many public relations/media relations jobs and agencies.
Que sera, sera (Whatever will be, will be)
I can confidently predict that 2023 has some surprises in store. I can also confidently predict that the European Union’s big research projects (1B Euros each in funding for the Graphene Flagship and Human Brain Project over a ten-year period) will sunset in 2023, ten years after they were first announced in 2013. Unless the powers that be extend the funding past 2023.
I expect the Canadian quantum community to provide more fodder for me in the form of a 2023 report on Quantum Technologies from the Council of Canadian Academies, if nothing else.
I’ve already featured these 2023 science events but just in case you missed them,
2023 Preview: Bill Nye the Science Guy’s live show and Marvel Avengers S.T.A.T.I.O.N. (Scientific Training And Tactical Intelligence Operative Network) coming to Vancouver (Canada) November 24, 2022 posting
September 2023: Auckland, Aotearoa New Zealand set to welcome women in STEM (science, technology, engineering, and mathematics) November 15, 2022 posting
Getting back to this blog, it may not seem like a new year during the first few weeks of 2023 as I have quite the stockpile of draft posts. At this point I have drafts that are dated from June 2022 and expect to be burning through them so as not to fall further behind but will be interspersing them, occasionally, with more current posts.
Most importantly: a big thank you to everyone who drops by and reads (and sometimes even comments) on my posts!!! It’s very much appreciated and, on that note, I wish you all the best for 2023.
Researchers at DTU Physics have made the smallest record ever cut, measuring only 40 micrometres in diameter. Featuring the first 25 seconds of the Christmas classic “Rocking Around the Christmas Tree” [sic], the single is cut using a new nano-sculpting machine – the Nanofrazor – recently acquired from Heidelberg Instruments. The Nanofrazor can engrave 3D patterns into surfaces with nanoscale resolution, allowing the researchers to create new nanostructures that may pave the way for novel technologies in fields such as quantum devices, magnetic sensors and electron optics.
“I have done lithography for 30 years, and although we’ve had this machine for a while, it still feels like science fiction. We’ve done many experiments, like making a copy of the Mona Lisa in a 12 by 16-micrometre area with a pixel size of ten nanometers. We’ve also printed an image of DTU’s founder – Hans Christian Ørsted – in an 8 by 12-micrometre size with a pixel size of 2,540,000 DPI. To get an idea of the scale we are working at, we could write our signatures on a red blood cell with this thing,” says Professor Peter Bøggild from DTU Physics.
“The most radical thing is that we can create free-form 3D landscapes at that crazy resolution – this grey-scale nanolithography is a true game-changer for our research”.
…
The scientists show how they inscribed the song onto the world’s smallest record (Note 1: You will not hear the song. Note 2: I don’t know how many times I’ve seen news releases about audio files [a recorded song, fish singing, etc.] that are not included … sigh),
Nanoscale Christmas record – in stereo

The Nanofrazor is not like a printer adding material to a medium; instead, it works like a CNC (computer numerical control) machine removing material at precise locations, leaving the desired shape behind. In the case of the miniature pictures of Mona Lisa and H.C. Ørsted, the final image is defined by the line-by-line removal of polymer until a perfect grey-scale image emerges. To Peter Bøggild, an amateur musician and vinyl record enthusiast, the idea of cutting a nanoscale record was obvious.
“We decided that we might as well try and print a record. We’ve taken a snippet of Rocking Around The Christmas Tree and have cut it just like you would cut a normal record—although, since we’re working on the nanoscale, this one isn’t playable on your average turntable. The Nanofrazor was put to work as a record-cutting lathe – converting an audio signal into a spiralled groove on the surface of the medium. In this case, the medium is a different polymer than vinyl. We even encoded the music in stereo – the lateral wriggles are the left channel, whereas the depth modulation contains the right channel. It may be too impractical and expensive to become a hit record. To read the groove, you need a rather costly atomic force microscope or the Nanofrazor, but it is definitely doable.”
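Bøggild’s stereo trick (left channel encoded as sideways wiggle, right channel as groove depth) can be sketched in a few lines of Python. This is purely my own illustration, not DTU’s code; the function name, gains, and base depth are all invented for the example.

```python
# Illustrative only: map stereo samples to groove coordinates as described in
# the quote above (left channel -> lateral wiggle, right channel -> cut depth).
# stereo_to_groove and all gain/depth values are invented for this sketch.

def stereo_to_groove(left, right, lateral_gain=1.0, depth_gain=1.0, base_depth=2.0):
    """Return (lateral, depth) pairs along the groove, one per stereo sample."""
    lateral = [lateral_gain * s for s in left]            # left channel: side-to-side
    depth = [base_depth + depth_gain * s for s in right]  # right channel: depth modulation
    return list(zip(lateral, depth))

# three stereo samples become three points along the groove
groove = stereo_to_groove([0, 2, -2], [1, -1, 0])
# groove == [(0.0, 3.0), (2.0, 1.0), (-2.0, 2.0)]
```

A real cutter would also have to spiral the groove and respect the Nanofrazor’s minimum feature size, but the channel separation idea really is this simple.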
High-speed, low-cost nanostructures
The NOVO Foundation grant BIOMAG, which made the Nanofrazor dream possible, is not about cutting Christmas records or printing images of famous people. Peter Bøggild and his colleagues, Tim Booth and Nolan Lassaline, have other plans. They expect that the Nanofrazor will allow them to sculpt 3D nanostructures in extremely precise detail and do so at high speed and low cost – something that is impossible with existing tools.
“We work with 2D materials, and when these ultrathin materials are carefully laid down on the 3D landscapes, they follow the contours of the surface. In short, they curve, and that is a powerful and entirely new way of “programming” materials to do things that no one would believe were possible just fifteen years ago. For instance, when curved in just the right way, graphene behaves as if there is a giant magnetic field when there is, in fact, none. And we can curve it just the right way with the Nanofrazor,” says Peter Bøggild.
Associate professor Tim Booth adds:
“The fact that we can now accurately shape the surfaces with nanoscale precision at pretty much the speed of imagination is a game changer for us. We have many ideas for what to do next and believe that this machine will significantly speed up the prototyping of new structures. Our main goal is to develop novel magnetic sensors for detecting currents in the living brain within the BIOMAG project. Still, we also look forward to creating precisely sculpted potential landscapes with which we can better control electron waves. There is much work to do.”
Postdoc Nolan Lassaline (who cut the Christmas record), was recently awarded a DKK 2 Mio. VILLUM EXPERIMENT grant to create “quantum soap bubbles” in graphene. He will use the grant – and the Nanofrazor – to explore new ways of structuring nanomaterials and develop novel ways of manipulating electrons in atomically thin materials.
“Quantum soap bubbles are smooth electronic potentials where we add artificially tailored disorders. By doing so, we can manipulate how electrons flow in graphene. We hope to understand how electrons move in engineered disordered potentials and explore if this could become a new platform for advanced neural networks and quantum information processing.”
The Nanofrazor system is now part of the DTU Physics NANOMADE’s unique fabrication facility for air-sensitive 2D materials and devices and part of E-MAT, a greater ecosystem for air-sensitive nanomaterials processing and fabrication led by Prof. Nini Pryds, DTU Energy.
While it’s not an audio file from the smallest record, this features Brenda Lee (who first recorded the song in 1958) in a ‘singalong’ version of “Rockin’ Around the Christmas Tree,”
Bøggild was last featured here in a December 24, 2021 posting “Season’s Greetings with the world’s thinnest Christmas tree.”
Have a lovely Christmas/Winter Solstice/Kwanzaa/Hanukkah/Saturnalia/??? celebration!
I’d forgotten how haunting a musical saw can sound,
An April 22, 2022 news item on Nanowerk announces research into the possibilities of a singing saw,
The eerie, ethereal sound of the singing saw has been a part of folk music traditions around the globe, from China to Appalachia, since the proliferation of cheap, flexible steel in the early 19th century. Made from bending a metal hand saw and bowing it like a cello, the instrument reached its heyday on the vaudeville stages of the early 20th century and has seen a resurgence thanks, in part, to social media.
As it turns out, the unique mathematical physics of the singing saw may hold the key to designing high quality resonators for a range of applications.
In a new paper, a team of researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and the Department of Physics used the singing saw to demonstrate how the geometry of a curved sheet, like curved metal, could be tuned to create high-quality, long-lasting oscillations for applications in sensing, nanoelectronics, photonics and more.
“Our research offers a robust principle to design high-quality resonators independent of scale and material, from macroscopic musical instruments to nanoscale devices, simply through a combination of geometry and topology,” said L Mahadevan, the Lola England de Valpine Professor of Applied Mathematics, of Organismic and Evolutionary Biology, and of Physics and senior author of the study.
…
While all musical instruments are acoustic resonators of a kind, none work quite like the singing saw.
“How the singing saw sings is based on a surprising effect,” said Petur Bryde, a graduate student at SEAS and co-first author of the paper. “When you strike a flat elastic sheet, such as a sheet of metal, the entire structure vibrates. The energy is quickly lost through the boundary where it is held, resulting in a dull sound that dissipates quickly. The same result is observed if you curve it into a J-shape. But, if you bend the sheet into an S-shape, you can make it vibrate in a very small area, which produces a clear, long-lasting tone.”
The geometry of the curved saw creates what musicians call the sweet spot and what physicists call localized vibrational modes — a confined area on the sheet which resonates without losing energy at the edges.
Importantly, the specific geometry of the S-curve doesn’t matter. It could be an S with a big curve at the top and a small curve at the bottom or vice versa.
“Musicians and researchers have known about this robust effect of geometry for some time, but the underlying mechanisms have remained a mystery,” said Suraj Shankar, a Harvard Junior Fellow in Physics and SEAS and co-first author of the study. “We found a mathematical argument that explains how and why this robust effect exists with any shape within this class, so that the details of the shape are unimportant, and the only fact that matters is that there is a reversal of curvature along the saw.”
Shankar, Bryde and Mahadevan found that explanation via an analogy to a very different class of physical systems — topological insulators. Most often associated with quantum physics, topological insulators are materials that conduct electricity on their surface or edge but not in the middle; no matter how you cut these materials, they will always conduct on their edges.
“In this work, we drew a mathematical analogy between the acoustics of bent sheets and these quantum and electronic systems,” said Shankar.
By using the mathematics of topological systems, the researchers found that the localized vibrational modes in the sweet spot of the singing saw were governed by a topological parameter that can be computed and which relies on nothing more than the existence of two opposite curves in the material. The sweet spot then behaves like an internal “edge” in the saw.
“By using experiments, theoretical and numerical analysis, we showed that the S-curvature in a thin shell can localize topologically-protected modes at the ‘sweet spot’ or inflection line, similar to exotic edge states in topological insulators,” said Bryde. “This phenomenon is material independent, meaning it will appear in steel, glass or even graphene.”
The researchers also found that they could tune the localization of the mode by changing the shape of the S-curve, which is important in applications such as sensing, where you need a resonator that is tuned to very specific frequencies.
Next, the researchers aim to explore localized modes in doubly curved structures, such as bells and other shapes.
…
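The authors’ point that only the reversal of curvature matters (not the exact shape) suggests a simple numerical check. Here is a hedged sketch, my own illustration rather than anything from the paper, that finds where the discrete curvature of a uniformly sampled height profile flips sign:

```python
# Sketch: detect curvature reversal (the "S-curve" ingredient) in a sampled
# height profile. Uniform sampling is assumed; all names are invented.

def inflection_indices(ys):
    """Return sample indices near which the discrete curvature changes sign."""
    # second differences approximate curvature under uniform sampling
    curv = [ys[i - 1] - 2 * ys[i] + ys[i + 1] for i in range(1, len(ys) - 1)]
    return [i + 1 for i in range(len(curv) - 1) if curv[i] * curv[i + 1] < 0]

# a J-shaped profile never reverses curvature: no localized mode expected
assert inflection_indices([0, 1, 3, 7, 15]) == []
# an S-shaped profile reverses curvature once: one 'sweet spot'
assert inflection_indices([0, 1, 3, 6, 8, 9, 9.5]) == [2]
```

In the paper’s terms, a nonempty result is only the geometric precondition for a topologically protected localized mode; computing the actual topological parameter requires the full analysis.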
Here’s a link to and a citation for the paper,
Geometric control of topological dynamics in a singing saw by Suraj Shankar, Petur Bryde, and L. Mahadevan. Proceedings of the National Academy of Sciences (PNAS), April 21, 2022, 119 (17), e2117241119. DOI: https://doi.org/10.1073/pnas.2117241119
This information about these events and papers comes courtesy of the Metacreation Lab for Creative AI (artificial intelligence) at Simon Fraser University and, as usual for the lab, the emphasis is on music.
Music + AI Reading Group @ Mila x Vector Institute
Philippe Pasquier, Metacreation Lab director and professor, is giving a presentation on Friday, August 12, 2022 at 11 am PST (2 pm EST). Here’s more from the August 10, 2022 Metacreation Lab announcement (received via email),
Metacreation Lab director Philippe Pasquier and PhD researcher Jeff Enns will be presenting next week [tomorrow, on August 12, 2022] at the Music + AI Reading Group hosted by Mila. The presentation will be available as a Zoom meeting.
Mila is a community of more than 900 researchers specializing in machine learning and dedicated to scientific excellence and innovation. The institute is recognized for its expertise and significant contributions in areas such as language modelling, machine translation, object recognition and generative models.
Getting back to the Music + AI Reading Group @ Mila x Vector Institute, there is an invitation to join the group which meets every Friday at 2 pm EST, from the Google group page,
…
Feb 24, 2022 — 🎹🧠🚨 Online Music + AI Reading Group @ Mila x Vector Institute 🎹🧠🚨
Dear members of the ISMIR [International Society for Music Information Retrieval] Community,
Together with fellow researchers at Mila (the Québec AI Institute) in Montréal, canada [sic], we have the pleasure of inviting you to join the Music + AI Reading Group @ Mila x Vector Institute. Our reading group gathers every Friday at 2pm Eastern Time. Our purpose is to build an interdisciplinary forum of researchers, students and professors alike, across industry and academia, working at the intersection of Music and Machine Learning.
During each meeting, a speaker presents a research paper of their choice during 45 minutes, leaving 15 minutes for questions and discussion. The purpose of the reading group is to:

– Gather a group of Music+AI/HCI [human-computer interface]/others people to share their research, build collaborations, and meet peer students. We are not constrained to any specific research directions, and all people are welcome to contribute.
– Share research ideas and brainstorm with others.
– Let researchers not actively working on music-related topics but interested in the field join and keep up with the latest research in the area, sharing their thoughts and bringing in their own backgrounds.
Our topics of interest cover (beware: the list is not exhaustive!):

🎹 Music Generation
🧠 Music Understanding
📇 Music Recommendation
🗣 Source Separation and Instrument Recognition
🎛 Acoustics
🗿 Digital Humanities
🙌 … and more (we are waiting for you :])!
— If you wish to attend one of our upcoming meetings, simply join our Google Group : https://groups.google.com/g/music_reading_group. You will automatically subscribe to our weekly mailing list and be able to contact other members of the group. —
Bravo to the two student organizers for putting this together!
Calliope Composition Environment for music makers
From the August 10, 2022 Metacreation Lab announcement,
Calling all music makers! We’d like to share some exciting news on one of the latest music creation tools from its creators.
Calliope is an interactive environment based on MMM for symbolic music generation in computer-assisted composition. Using this environment, the user can generate or regenerate symbolic music from a “seed” MIDI file by using a practical and easy-to-use graphical user interface (GUI). Through MIDI streaming, the system can interface with your favourite DAW (Digital Audio Workstation), such as Ableton Live, allowing creators to combine the possibilities of generative composition with their preferred virtual instrument and sound design environments.
The project has now entered an open beta-testing phase, and music creators are invited to try the compositional system on their own! Head to the Metacreation website to learn more and register for the beta testing.
You can also listen to a Calliope piece “the synthrider,” an Italo-disco fantasy of a machine, by Philippe Pasquier and Renaud Bougueng Tchemeube for the 2022 AI Song Contest.
3rd Conference on AI Music Creativity (AIMC 2022)
This in an online conference and it’s free but you do have to register. From the August 10, 2022 Metacreation Lab announcement,
Registration has opened for the 3rd Conference on AI Music Creativity (AIMC 2022), which will be held 13-15 September, 2022. The conference features 22 accepted papers, 14 music works, and 2 workshops. Registered participants will get full access to the scientific and artistic program, as well as conference workshops and virtual social events.
The conference theme is “The Sound of Future Past — Colliding AI with Music Tradition” and I noticed that a number of the organizers are based in Japan. Often, the organizers’ home country gets some extra time in the spotlight, which is what makes these international conferences so interesting and valuable.
Autolume Live
This concerns generative adversarial networks (GANs) and a paper proposing “… Autolume-Live, the first GAN-based live VJing-system for controllable video generation.”
Here’s more from the August 10, 2022 Metacreation Lab announcement,
Jonas Kraasch & Philippe Pasquier recently presented their latest work on the Autolume system at xCoAx, the 10th annual Conference on Computation, Communication, Aesthetics & X. Their paper is an in-depth exploration of the ways that creative artificial intelligence is increasingly used to generate static and animated visuals.
While there are a host of systems to generate images, videos and music videos, there is a lack of real-time video synthesisers for live music performances. To address this gap, Kraasch and Pasquier propose Autolume-Live, the first GAN-based live VJing-system for controllable video generation.
As these things go, the paper is readable even by nonexperts (assuming you have some tolerance for being out of your depth from time to time). Here’s an example of the text and an installation (in Kelowna, BC) from the paper, Autolume-Live: Turning GANs into a Live VJing tool,
Due to the 2020-2022 situation surrounding COVID-19, we were unable to use our system to accompany live performances. We have used different iterations of Autolume-Live to create two installations. We recorded some curated sessions and displayed them at the Distopya sound art festival in Istanbul 2021 (Dystopia Sound and Art Festival 2021) and Light-Up Kelowna 2022 (ARTSCO 2022) [emphasis mine]. In both iterations, we let the audio mapping automatically generate the video without using any of the additional image manipulations. These installations show that the system on its own is already able to generate interesting and responsive visuals for a musical piece.
For the installation at the Distopya sound art festival we trained a Style-GAN2 (-ada) model on abstract paintings and rendered a video using the described Latent Space Traversal mapping. For this particular piece we ran a super-resolution model on the final video as the original video output was in 512×512 and the wanted resolution was 4k. For our piece at Light-Up Kelowna [emphasis mine] we ran Autolume-Live with the Latent Space Interpolation mapping. The display included three urban screens, which allowed us to showcase three renders at the same time. We composed a video triptych using a dataset of figure drawings, a dataset of medical sketches and, to tie the two videos together, a model trained on a mixture of both datasets.
…
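For the curious, the “audio mapping automatically generates the video” idea can be caricatured in a few lines: louder audio frames push the GAN’s latent vector further, quieter ones barely move it. This is my own toy sketch, not Autolume’s code; the generator that turns each latent vector into a frame is omitted, and LATENT_DIM simply echoes StyleGAN2’s usual latent size.

```python
# Toy audio-reactive latent traversal (not Autolume's implementation):
# each audio frame's loudness scales a random step through latent space.
import random

LATENT_DIM = 512  # StyleGAN2's customary latent dimension (an assumption here)

def latent_path(audio_levels, step_scale=0.1, seed=0):
    """Return one latent vector per audio frame; step size tracks loudness."""
    rng = random.Random(seed)
    z = [rng.gauss(0.0, 1.0) for _ in range(LATENT_DIM)]
    path = []
    for level in audio_levels:  # level in [0, 1], e.g. per-frame RMS loudness
        z = [zi + step_scale * level * rng.gauss(0.0, 1.0) for zi in z]
        path.append(list(z))    # in Autolume, each vector would drive the generator
    return path

path = latent_path([0.2, 0.9, 0.1])  # three audio frames, three latent vectors
```

Silence (a loudness of zero) leaves the latent vector, and therefore the image, frozen, which matches the intuition that the visuals respond to the music.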
I found some additional information about the installation in Kelowna (from a February 7, 2022 article in The Daily Courier),
…
The artwork is called ‘Autolume Acedia’.
“(It) is a hallucinatory meditation on the ancient emotion called acedia. Acedia describes a mixture of contemplative apathy, nervous nostalgia, and paralyzed angst,” the release states. “Greek monks first described this emotion two millennia ago, and it captures the paradoxical state of being simultaneously bored and anxious.”
Algorithms created the set-to-music artwork but a team of humans associated with Simon Fraser University, including Jonas Kraasch and Philippe Pasquier, was behind the project.
…
These are among the artistic images generated by a form of artificial intelligence now showing nightly on the exterior of the Rotary Centre for the Arts in downtown Kelowna. [downloaded from https://www.kelownadailycourier.ca/news/article_6f3cefea-886c-11ec-b239-db72e804c7d6.html]
You can find the videos used in the installation and more information on the Metacreation Lab’s Autolume Acedia webpage.
Movement and the Metacreation Lab
Here’s a walk down memory lane: Tom Calvert, a professor at Simon Fraser University (SFU) who died on September 28, 2021, laid the groundwork for SFU’s School of Interactive Arts & Technology (SIAT) and, in particular, studies in movement. From SFU’s In memory of Tom Calvert webpage,
…
As a researcher, Tom was most interested in computer-based tools for user interaction with multimedia systems, human figure animation, software for dance, and human-computer interaction. He made significant contributions to research in these areas resulting in the Life Forms system for human figure animation and the DanceForms system for dance choreography. These are now developed and marketed by Credo Interactive Inc., a software company of which he was CEO.
…
While the Metacreation Lab is largely focused on music, other fields of creativity are also studied, from the August 10, 2022 Metacreation Lab announcement,
MITACS Accelerate award – partnership with Kinetyx
We are excited to announce that the Metacreation Lab researchers will be expanding their work on motion capture and movement data thanks to a new MITACS Accelerate research award.
The project will focus on body pose estimation using Motion Capture data acquisition through a partnership with Kinetyx, a Calgary-based innovative technology firm that develops in-shoe sensor-based solutions for a broad range of sports and performance applications.
Movement Database – MoDa
On the subject of motion data and its many uses in conjunction with machine learning and AI, we invite you to check out the extensive Movement Database (MoDa), led by transdisciplinary artist and scholar Shannon Cyukendall, and AI Researcher Omid Alemi.
Spanning a wide range of categories such as dance, affect-expressive movements, gestures, eye movements, and more, this database offers a wealth of experiments and captured data available in a variety of formats.
MITACS (originally a federal government mathematics-focused Networks of Centres of Excellence program) is now a funding agency (most of the funds it distributes come from the federal government) for innovation.
As for the Calgary-based company (in the province of Alberta for those unfamiliar with Canadian geography), here they are in their own words (from the Kinetyx About webpage),
Kinetyx® is a diverse group of talented engineers, designers, scientists, biomechanists, communicators, and creators, along with an energy trader, and a medical doctor that all bring a unique perspective to our team. A love of movement and the science within is the norm for the team, and we’re encouraged to put our sensory insoles to good use. We work closely together to make movement mean something.
…
We’re working towards a future where movement is imperceptibly quantified and indispensably communicated with insights that inspire action. We’re developing sensory insoles that collect high-fidelity data where the foot and ground intersect. Capturing laboratory quality data, out in the real world, unlocking entirely new ways to train, study, compete, and play. The insights we provide will unlock unparalleled performance, increase athletic longevity, and provide a clear path to return from injury. We transform lives by empowering our growing community to remain moved.
…
We believe that high quality data is essential for us to have a meaningful place in the Movement Metaverse [1]. Our team of engineers, sport scientists, and developers work incredibly hard to ensure that our insoles and the insights we gather from them will meet or exceed customer expectations. The forces that are created and experienced while standing, walking, running, and jumping are inferred by many wearables, but our sensory insoles allow us to measure, in real-time, what’s happening at the foot-ground intersection. Measurements of force and power in addition to other traditional gait metrics, will provide a clear picture of a part of the Kinesome [2] that has been inaccessible for too long. Our user interface will distill enormous amounts of data into meaningful insights that will lead to positive behavioral change.
[1] The Movement Metaverse is the collection of ever-evolving immersive experiences that seamlessly span both the physical and virtual worlds with unprecedented interoperability.
[2] Kinesome is the dynamic characterization and quantification encoded in an individual’s movement and activity. Broadly; an individual’s unique and dynamic movement profile. View the kinesome nft. [Note: I was not able to successfully open the link as of August 11, 2022]
“… make movement mean something … .” Really?
The reference to “… energy trader …” had me puzzled but an August 11, 2022 Google search at 11:53 am PST unearthed this,
An energy trader is a finance professional who manages the sales of valuable energy resources like gas, oil, or petroleum. An energy trader is expected to handle energy production and financial matters in such a fast-paced workplace. (May 16, 2022)
I thought it best to break this up a bit. There are a couple of ‘objects’ still to be discussed but this is mostly the commentary part of this letter to you. (Here’s a link for anyone who stumbled here but missed Part 1.)
Ethics, the natural world, social justice, eeek, and AI
Dorothy Woodend, in her March 10, 2022 review for The Tyee, suggests some ethical issues in her critique of the ‘bee/AI collaboration’ and she’s not the only one with concerns. UNESCO (United Nations Educational, Scientific and Cultural Organization) has produced global recommendations for ethical AI (see my March 18, 2022 posting). More recently, there’s “Racist and sexist robots have flawed AI,” a June 23, 2022 posting, where researchers prepared a conference presentation and paper about deeply flawed AI still being used in robots.
Ultimately, the focus is always on humans and Woodend has extended the ethical AI conversation to include insects and the natural world. In short, something less human-centric.
My friend, this reference to the de Young exhibit may seem off topic but I promise it isn’t in more ways than one. The de Young Museum in San Francisco also held an AI and art show called “Uncanny Valley: Being Human in the Age of AI” (February 22, 2020 – June 27, 2021), from the exhibitions page,
In today’s AI-driven world, increasingly organized and shaped by algorithms that track, collect, and evaluate our data, the question of what it means to be human [emphasis mine] has shifted. Uncanny Valley is the first major exhibition to unpack this question through a lens of contemporary art and propose new ways of thinking about intelligence, nature, and artifice. [emphasis mine]
Courtesy: de Young Museum [downloaded from https://deyoung.famsf.org/exhibitions/uncanny-valley]
As you can see, it hinted (perhaps?) at an attempt to see beyond human-centric AI. (BTW, I featured this ‘Uncanny Valley’ show in my February 25, 2020 posting where I mentioned Stephanie Dinkins [featured below] and other artists.)
Social justice
While the VAG show doesn’t see much past humans and AI, it does touch on social justice. In particular there’s Pod 15 featuring the Algorithmic Justice League (AJL). The group “combine[s] art and research to illuminate the social implications and harms of AI” as per their website’s homepage.
In Pod 9, Stephanie Dinkins’ video work with a robot (Bina48), which was also part of the de Young Museum ‘Uncanny Valley’ show, addresses some of the same issues.
Still of Stephanie Dinkins, “Conversations with Bina48,” 2014–present. Courtesy of the artist [downloaded from https://deyoung.famsf.org/stephanie-dinkins-conversations-bina48-0]
From the de Young Museum’s Stephanie Dinkins “Conversations with Bina48” April 23, 2020 article by Janna Keegan (Dinkins submitted the same work you see at the VAG show), Note: Links have been removed,
Transdisciplinary artist and educator Stephanie Dinkins is concerned with fostering AI literacy. The central thesis of her social practice is that AI, the internet, and other data-based technologies disproportionately impact people of color, LGBTQ+ people, women, and disabled and economically disadvantaged communities—groups rarely given a voice in tech’s creation. Dinkins strives to forge a more equitable techno-future by generating AI that includes the voices of multiple constituencies …
The artist’s ongoing Conversations with Bina48 takes the form of a series of interactions with the social robot Bina48 (Breakthrough Intelligence via Neural Architecture, 48 exaflops per second). The machine is the brainchild of Martine Rothblatt, an entrepreneur in the field of biopharmaceuticals who, with her wife, Bina, cofounded the Terasem Movement, an organization that seeks to extend human life through cybernetic means. In 2007 Martine commissioned Hanson Robotics to create a robot whose appearance and consciousness simulate Bina’s. The robot was released in 2010, and Dinkins began her work with it in 2014.
…
Part psychoanalytical discourse, part Turing test, Conversations with Bina48 also participates in a larger dialogue regarding bias and representation in technology. Although Bina Rothblatt is a Black woman, Bina48 was not programmed with an understanding of its Black female identity or with knowledge of Black history. Dinkins’s work situates this omission amid the larger tech industry’s lack of diversity, drawing attention to the problems that arise when a roughly homogenous population creates technologies deployed globally. When this occurs, writes art critic Tess Thackara, “the unconscious biases of white developers proliferate on the internet, mapping our social structures and behaviors onto code and repeating imbalances and injustices that exist in the real world.” One of the most appalling and public of these instances occurred when a Google Photos image-recognition algorithm mislabeled the faces of Black people as “gorillas.”
…
Eeek
You will find, as you go through the ‘imitation game’, a pod with a screen showing your movements through the rooms in real time. The installation is called “Creepers” (2021-22). The student team from Vancouver’s Centre for Digital Media (CDM) describes their project this way, from the CDM’s AI-driven Installation Piece for the Vancouver Art Gallery webpage,
Project Description
Kaleidoscope [team name] is designing an installation piece that harnesses AI to collect and visualize exhibit visitor behaviours, and interactions with art, in an impactful and thought-provoking way.
There’s no warning that you’re being tracked and you can see they’ve used facial recognition software to track your movements through the show. It’s claimed on the pod’s signage that they are deleting the data once you’ve left.
‘Creepers’ is an interesting approach to the ethics of AI. The name suggests that even the student designers were aware it was problematic.
In recovery from an existential crisis (meditations)
There’s something greatly ambitious about “The Imitation Game: Visual Culture in the Age of Artificial Intelligence” and walking up the VAG’s grand staircase affirms that ambition. Bravo to the two curators, Grenville and Entis, for an exhibition that presents a survey (or overview) of artificial intelligence, and its use in and impact on creative visual culture.
I’ve already enthused over the history (specifically Turing, Lovelace, Ovid), admitted to being mesmerized by Scott Eaton’s sculpture/AI videos, and confessed to a fascination (and mild repulsion) regarding Oxman’s honeycombs.
It’s hard to remember all of the ‘objects’ as the curators have offered a jumble of work, almost all of it on screens. As already noted, there’s Norbert Wiener’s The Moth (1949) and there are also a number of other computer-based artworks from the 1960s and 1970s. Plus, you’ll find works utilizing a GAN (generative adversarial network), a type of AI agent that is explained in the exhibit.
It’s worth going more than once to the show as there is so much to experience.
Why did they do that?
Dear friend, I’ve already commented on the poor flow through the show. It’s hard to tell if the curators intended the experience to be disorienting, but it is, to the point of chaos, especially when the exhibition is crowded.
I’ve seen Grenville’s shows before. In particular there was “MashUp: The Birth of Modern Culture, a massive survey documenting the emergence of a mode of creativity that materialized in the late 1800s and has grown to become the dominant model of cultural production in the 21st century” and there was “KRAZY! The Delirious World of Anime + Manga + Video Games + Art.” As you can see from the description, he pulls together disparate works and ideas into a show for you to ‘make sense’ of them.
One of the differences between those shows and “The Imitation Game: …” is that most of us have some familiarity, whether we like it or not, with modern art/culture and anime/manga/etc. and can try to ‘make sense’ of it.
By contrast, artificial intelligence (which even experts have difficulty defining) occupies an entirely different set of categories, all of them associated with science/technology. This makes for a different kind of show, as the curators cannot rely on the audience’s understanding of the basics. It’s effectively an art/sci or art/tech show and, I believe, the first of its kind at the Vancouver Art Gallery. Unfortunately, the curators don’t seem to have changed their approach to accommodate that difference.
AI is also at the centre of a current panic over job loss, loss of personal agency, automated racism and sexism, etc. which makes the experience of viewing the show a little tense. In this context, their decision to commission and use ‘Creepers’ seems odd.
Where were Ai-Da and Dall-E-2 and the others?
Oh friend, I was hoping for a robot. Those roomba paintbots didn’t do much for me. All they did was lie there on the floor.
To be blunt, I wanted some fun and perhaps a bit of wonder and maybe a little vitality. I wasn’t necessarily expecting Ai-Da, an artistic robot, but something three-dimensional and fun in this very flat, screen-oriented show would have been nice.
Ai-Da was at the Glastonbury Festival in the UK from June 23rd to 26th, 2022. Here’s Ai-Da and her portrait of Billie Eilish (one of the Glastonbury 2022 headliners). [downloaded from https://www.ai-darobot.com/exhibition]
Ai-Da was first featured here in a December 17, 2021 posting about performing poetry that she had written in honour of the 700th anniversary of poet Dante Alighieri’s death.
Named in honour of Ada Lovelace, Ai-Da visited the 2022 Venice Biennale as Leah Henrickson and Simone Natale describe in their May 12, 2022 article for Fast Company (Note: Links have been removed),
Ai-Da sits behind a desk, paintbrush in hand. She looks up at the person posing for her, and then back down as she dabs another blob of paint onto the canvas. A lifelike portrait is taking shape. If you didn’t know a robot produced it, this portrait could pass as the work of a human artist.
Ai-Da is touted as the “first robot to paint like an artist,” and an exhibition of her work, called Leaping into the Metaverse, opened at the Venice Biennale.
Ai-Da produces portraits of sitting subjects using a robotic hand attached to her lifelike feminine figure. She’s also able to talk, giving detailed answers to questions about her artistic process and attitudes toward technology. She even gave a TEDx talk about “The Intersection of Art and AI” in Oxford a few years ago. While the words she speaks are programmed, Ai-Da’s creators have also been experimenting with having her write and perform her own poetry.
DALL-E 2 is a new neural network [AI] algorithm that creates a picture from a short phrase or sentence that you provide. The program, which was announced by the artificial intelligence research laboratory OpenAI in April 2022, hasn’t been released to the public. But a small and growing number of people – myself included – have been given access to experiment with it.
As a researcher studying the nexus of technology and art, I was keen to see how well the program worked. After hours of experimentation, it’s clear that DALL-E – while not without shortcomings – is leaps and bounds ahead of existing image generation technology. It raises immediate questions about how these technologies will change how art is made and consumed. It also raises questions about what it means to be creative when DALL-E 2 seems to automate so much of the creative process itself.
…
A July 4, 2022 article “DALL-E, Make Me Another Picasso, Please” by Laura Lane for The New Yorker has a rebuttal to Ada Lovelace’s contention that creativity is uniquely human (Note: A link has been removed),
…
“There was this belief that creativity is this deeply special, only-human thing,” Sam Altman, OpenAI’s C.E.O., explained the other day. Maybe not so true anymore, he said. Altman, who wore a gray sweater and had tousled brown hair, was videoconferencing from the company’s headquarters, in San Francisco. DALL-E is still in a testing phase. So far, OpenAI has granted access to a select group of people—researchers, artists, developers—who have used it to produce a wide array of images: photorealistic animals, bizarre mashups, punny collages. Asked by a user to generate “a plate of various alien fruits from another planet photograph,” DALL-E returned something kind of like rambutans. “The rest of mona lisa” is, according to DALL-E, mostly just one big cliff. Altman described DALL-E as “an extension of your own creativity.”
AI artists first hit my radar in August 2018 when Christie’s Auction House advertised an art auction of a ‘painting’ by an algorithm (artificial intelligence). There’s more in my August 31, 2018 posting but, briefly, a French art collective, Obvious, submitted a painting, “Portrait of Edmond de Belamy,” created by an artificial intelligence agent and estimated to sell for $7,000 – $10,000. They weren’t even close. According to Ian Bogost’s March 6, 2019 article for The Atlantic, the painting sold for $432,500 in October 2018.
…
That posting also included AI artist, AICAN. Both artist-AI agents (Obvious and AICAN) are based on GANs (generative adversarial networks) for learning and eventual output. Both artist-AI agents work independently or with human collaborators on art works that are available for purchase.
As might be expected not everyone is excited about AI and visual art. Sonja Drimmer, Professor of Medieval Art, University of Massachusetts at Amherst, provides another perspective on AI, visual art, and, her specialty, art history in her November 1, 2021 essay for The Conversation (Note: Links have been removed),
Over the past year alone, I’ve come across articles highlighting how artificial intelligence recovered a “secret” painting of a “lost lover” of Italian painter Modigliani, “brought to life” a “hidden Picasso nude”, “resurrected” Austrian painter Gustav Klimt’s destroyed works and “restored” portions of Rembrandt’s 1642 painting “The Night Watch.” The list goes on.
As an art historian, I’ve become increasingly concerned about the coverage and circulation of these projects.
They have not, in actuality, revealed one secret or solved a single mystery.
What they have done is generate feel-good stories about AI.
…
Take the reports about the Modigliani and Picasso paintings.
These were projects executed by the same company, Oxia Palus, which was founded not by art historians but by doctoral students in machine learning.
In both cases, Oxia Palus relied upon traditional X-rays, X-ray fluorescence and infrared imaging that had already been carried out and published years prior – work that had revealed preliminary paintings beneath the visible layer on the artists’ canvases.
The company edited these X-rays and reconstituted them as new works of art by applying a technique called “neural style transfer.” This is a sophisticated-sounding term for a program that breaks works of art down into extremely small units, extrapolates a style from them and then promises to recreate images of other content in that same style.
…
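To unpack Drimmer’s description a little: the ‘extremely small units’ are the feature maps of a convolutional network, and the ‘style’ that gets extrapolated is conventionally summarized as a Gram matrix of correlations between feature channels. This numpy sketch of the Gram-matrix computation is my own illustration of the standard technique, not Oxia Palus’s code:

```python
import numpy as np

def gram_matrix(features):
    """Style representation used in neural style transfer:
    pairwise correlations between feature channels, with
    spatial position averaged away."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)   # channels x spatial positions
    return flat @ flat.T / (h * w)      # c x c matrix of channel correlations

# A hypothetical 3-channel, 4x4 feature map standing in for a CNN layer's output
rng = np.random.default_rng(1)
feats = rng.standard_normal((3, 4, 4))
G = gram_matrix(feats)
print(G.shape)  # (3, 3): style lives in channel pairings, not in pixel layout
```

Matching these Gram matrices between a style image and a generated image, layer by layer, is what lets a program “recreate images of other content in that same style.”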
As you can ‘see’ my friend, the topic of AI and visual art is a juicy one. In fact, I have another example in my June 27, 2022 posting, which is titled, “Art appraised by algorithm.” So, Grenville’s and Entis’ decision to focus on AI and its impact on visual culture is quite timely.
Visual culture: seeing into the future
The VAG Imitation Game webpage lists these categories of visual culture “animation, architecture, art, fashion, graphic design, urban design and video games …” as being represented in the show. Movies and visual art, though not mentioned in the write-up, are represented, while theatre and the other performing arts are neither mentioned nor represented. That’s not a surprise.
In addition to an area of science/technology that’s not well understood even by experts, the curators took on the truly amorphous (and overwhelming) topic of visual culture. Given that even writing this commentary has been a challenge, I imagine pulling the show together was quite the task.
Grenville often grounds his shows in a history of the subject and, this time, it seems especially striking. You’re in a building that is effectively a 19th century construct and in galleries that reflect a 20th century ‘white cube’ aesthetic, while looking for clues into the 21st century future of visual culture employing technology that has its roots in the 19th century and, to some extent, began to flower in the mid-20th century.
Chung’s collaboration is one of the only ‘optimistic’ notes about the future and, as noted earlier, it bears a resemblance to Wiener’s 1949 ‘Moth’.
Overall, it seems we are being cautioned about the future. For example, Oxman’s work seems bleak (bees with no flowers to pollinate and living in an eternal spring). Adding in ‘Creepers’ and surveillance along with issues of bias and social injustice reflects hesitation and concern about what we will see, who sees it, and how it will be represented visually.
Learning about robots, automatons, artificial intelligence, and more
I wish the Vancouver Art Gallery (and Vancouver’s other art galleries) would invest a little more in audience education. A couple of tours during the week, by someone who may or may not know what they’re talking about, do not suffice. The extra material about Stephanie Dinkins and her work (“Conversations with Bina48,” 2014–present) came from the de Young Museum’s website. In my July 26, 2021 commentary on North Vancouver’s Polygon Gallery 2021 show “Interior Infinite,” I found background information for artist Zanele Muholi on the Tate Modern’s website. There is nothing on the VAG website that helps you gain some perspective on the artists’ works.
It seems to me that if the VAG wants to be considered world class, it should conduct itself accordingly and beefing up its website with background information about their current shows would be a good place to start.
Robots, automata, and artificial intelligence
Prior to 1921, robots were known exclusively as automatons. These days, the word ‘automaton’ (or ‘automata’ in the plural) seems to be used to describe purely mechanical representations of humans from over 100 years ago, whereas the word ‘robot’ can be either ‘humanlike’ or purely machine, e.g. a mechanical arm that performs the same function over and over. I found a good February 24, 2017 essay on automatons by Miguel Barral for OpenMind BBVA*, which provides some insight into the matter,
The concept of robot is relatively recent. The idea was introduced in 1921 by the Czech writer Karel Capek in his work R.U.R to designate a machine that performs tasks in place of man. But their predecessors, the automatons (from the Greek automata, or “mechanical device that works by itself”), have been the object of desire and fascination since antiquity. Some of the greatest inventors in history, such as Leonardo Da Vinci, have contributed to our fascination with these fabulous creations:
The Al-Jazari automatons
The earliest examples of known automatons appeared in the Islamic world in the 12th and 13th centuries. In 1206, the Arab polymath Al-Jazari, whose creations were known for their sophistication, described some of his most notable automatons: an automatic wine dispenser, a soap and towels dispenser and an orchestra-automaton that operated by the force of water. This latter invention was meant to liven up parties and banquets with music while floating on a pond, lake or fountain.
As the water flowed, it started a rotating drum with pegs that, in turn, moved levers whose movement produced different sounds and movements. As the pegs responsible for the musical notes could be exchanged for different ones in order to interpret another melody, it is considered one of the first programmable machines in history.
…
If you’re curious about automata, my friend, I found this Sept. 26, 2016 ABC News Radio item about singer Roger Daltrey and his wife Heather’s auction of their collection of 19th century French automata (there’s an embedded video showcasing these extraordinary works of art). For more about automata, robots, and androids, there’s an excellent May 4, 2022 article by James Vincent, ‘A visit to the human factory; How to build the world’s most realistic robot‘, for The Verge; Vincent’s article is about Engineered Arts, the UK-based company that built Ai-Da.
AI is often used interchangeably with ‘robot’ but they aren’t the same; not all robots have AI integrated into their processes. At its simplest, AI is an algorithm or set of algorithms, which may ‘live’ in a CPU and be effectively invisible, or ‘live’ in, or make use of, some kind of machine and/or humanlike body. As the experts have noted, ‘artificial intelligence’ is a slippery concept.
*OpenMind BBVA is a Spanish multinational financial services company, Banco Bilbao Vizcaya Argentaria (BBVA), which runs the non-profit project, OpenMind (About us page) to disseminate information on robotics and so much more.*
You can’t always get what you want
My friend,
I expect many of the show’s shortcomings (as perceived by me) are due to money and/or scheduling issues. For example, Ai-Da was at the Venice Biennale and if there was a choice between the VAG and Biennale, I know where I’d be.
Even with those caveats in mind, it is a bit surprising that there were no examples of wearable technology. For example, Toronto’s Tapestry Opera recently performed R.U.R. A Torrent of Light (based on the word ‘robot’ from Karel Čapek’s play, R.U.R., ‘Rossumovi Univerzální Roboti’), from my May 24, 2022 posting,
I have more about tickets prices, dates, and location later in this post but first, here’s more about the opera and the people who’ve created it from the Tapestry Opera’s ‘R.U.R. A Torrent of Light’ performance webpage,
“This stunning new opera combines dance, beautiful multimedia design, a chamber orchestra including 100 instruments creating a unique electronica-classical sound, and wearable technology [emphasis mine] created with OCAD University’s Social Body Lab, to create an immersive and unforgettable science-fiction experience.”
And, from later in my posting,
“Despite current stereotypes, opera was historically a launchpad for all kinds of applied design technologies. [emphasis mine] Having the opportunity to collaborate with OCAD U faculty is an invigorating way to reconnect to that tradition and foster connections between art, music and design, [emphasis mine]” comments the production’s Director Michael Hidetoshi Mori, who is also Tapestry Opera’s Artistic Director.
That last quote brings me back to my comment about theatre and the performing arts not being part of the show. Of course, the curators couldn’t do it all but a website with my hoped-for background and additional information could have helped to solve the problem.
The absence of the theatrical and performing arts in the VAG’s ‘Imitation Game’ is a bit surprising as the Council of Canadian Academies (CCA) in their third assessment, “Competing in a Global Innovation Economy: The Current State of R&D in Canada” released in 2018 noted this (from my April 12, 2018 posting),
Canada, relative to the world, specializes in subjects generally referred to as the humanities and social sciences (plus health and the environment), and does not specialize as much as others in areas traditionally referred to as the physical sciences and engineering. Specifically, Canada has comparatively high levels of research output in Psychology and Cognitive Sciences, Public Health and Health Services, Philosophy and Theology, Earth and Environmental Sciences, and Visual and Performing Arts. [emphasis mine] It accounts for more than 5% of world research in these fields. Conversely, Canada has lower research output than expected in Chemistry, Physics and Astronomy, Enabling and Strategic Technologies, Engineering, and Mathematics and Statistics. The comparatively low research output in core areas of the natural sciences and engineering is concerning, and could impair the flexibility of Canada’s research base, preventing research institutions and researchers from being able to pivot to tomorrow’s emerging research areas. [p. xix Print; p. 21 PDF]
US-centric
My friend,
I was a little surprised that the show was so centered on work from the US given that Grenville has curated at least one show with significant input from artists based in Asia. Both Japan and Korea are very active with regard to artificial intelligence and it’s hard to believe that their artists haven’t kept pace. (I’m not as familiar with China and its AI efforts, other than in the field of facial recognition, but it’s hard to believe their artists aren’t experimenting.)
The Americans, of course, are very important developers in the field of AI but they are not alone and it would have been nice to have seen something from Asia and/or Africa and/or something from one of the other Americas. In fact, anything which takes us out of the same old, same old. (Luba Elliott wrote this (2019/2020/2021?) essay, “Artificial Intelligence Art from Africa and Black Communities Worldwide” on Aya Data if you want to get a sense of some of the activity on the African continent. Elliott does seem to conflate Africa and Black Communities, for some clarity you may want to check out the Wikipedia entry on Africanfuturism, which contrasts with this August 12, 2020 essay by Donald Maloba, “What is Afrofuturism? A Beginner’s Guide.” Maloba also conflates the two.)
As it turns out, Luba Elliott presented at the 2019 Montréal Digital Spring event, which brings me to Canada’s artificial intelligence and arts scene.
I promise I haven’t turned into a flag-waving zealot, my friend. It’s just odd there isn’t a bit more given that machine learning was pioneered at the University of Toronto. Here’s more about that (from the Wikipedia entry for Geoffrey Hinton),
Geoffrey Everest Hinton CC FRS FRSC[11] (born 6 December 1947) is a British-Canadian cognitive psychologist and computer scientist, most noted for his work on artificial neural networks.
…
Hinton received the 2018 Turing Award, together with Yoshua Bengio [Canadian scientist] and Yann LeCun, for their work on deep learning.[24] They are sometimes referred to as the “Godfathers of AI” and “Godfathers of Deep Learning”,[25][26] and have continued to give public talks together.[27][28]
…
Some of Hinton’s work was started in the US but since 1987, he has pursued his interests at the University of Toronto. He wasn’t proven right until 2012. Katrina Onstad’s February 2018 article (Mr. Robot) for Toronto Life is a gripping read about Hinton and his work on neural networks. BTW, Yoshua Bengio (co-Godfather) is a Canadian scientist at the Université de Montréal and Yann LeCun (co-Godfather) is a French scientist at New York University.
Then, there’s another contribution: our government was the first in the world to develop a national artificial intelligence strategy. Adding those developments to the CCA ‘State of Science’ report findings about the visual and performing arts, is there another word besides ‘odd’ to describe the lack of Canadian voices?
You’re going to point out the installation by Ben Bogart (a member of Simon Fraser University’s Metacreation Lab for Creative AI and instructor at the Emily Carr University of Art + Design (ECU)) but it’s based on the iconic US scifi film, 2001: A Space Odyssey. As for the other Canadian, Sougwen Chung, she left Canada pretty quickly to get her undergraduate degree in the US and has since moved to the UK. (You could describe hers as the quintessential success story, i.e., moving from Canada only to get noticed here after success elsewhere.)
In 2019, Bruce Grenville, Senior Curator at Vancouver Art Gallery, approached [the] Centre for Digital Media to collaborate on several industry projects for the forthcoming exhibition. Four student teams tackled the project briefs over the course of the next two years and produced award-winning installations that are on display until October 23 [2022].
…
Basically, my friend, it would have been nice to see other voices or, at the least, an attempt at representing other voices and visual cultures informed by AI. As for Canadian contributions, maybe put something on the VAG website?
Playing well with others
It’s always a mystery to me why the Vancouver cultural scene seems composed of a set of silos or closely guarded kingdoms. Reaching out to the public library and other institutions such as Science World might have cost time but could have enhanced the show.
For example, one of the branches of the New York Public Library ran a programme called, “We are AI” in March 2022 (see my March 23, 2022 posting about the five-week course, which was run as a learning circle). The course materials are available for free (We are AI webpage) and I imagine that adding a ‘visual culture module’ wouldn’t be that difficult.
There is one (rare) example of some Vancouver cultural institutions getting together to offer an art/science programme: in 2017, the Morris and Helen Belkin Gallery (at the University of British Columbia; UBC) hosted an exhibition of Santiago Ramón y Cajal’s work (see my Sept. 11, 2017 posting about the gallery show). Alongside that show, there was an ancillary event held by the folks at Café Scientifique at Science World, featuring a panel of professionals from UBC’s Faculty of Medicine and Dept. of Psychology discussing Cajal’s work.
In fact, where were the science and technology communities for this show?
On a related note, the 2022 ACM SIGGRAPH conference (August 7 – 11, 2022) is being held in Vancouver. (ACM is the Association for Computing Machinery; SIGGRAPH is for Special Interest Group on Computer Graphics and Interactive Techniques.) SIGGRAPH has been holding conferences in Vancouver every few years since at least 2011.
This is both an international conference and an exhibition (of art) and the whole thing seems to have kicked off on July 25, 2022. If you’re interested, the programme can be found here and registration here.
The last time SIGGRAPH was here, the organizers seemed interested in outreach and offered some free events.
In the end
It was good to see the show. The curators brought together some exciting material. As is always the case, there were some missed opportunities and a few blind spots. But all is not lost.
On July 27, 2022, the VAG held a virtual event with an artist,
… Gwenyth Chao to learn more about what happened to the honeybees and hives in Oxman’s Synthetic Apiary project. As a transdisciplinary artist herself, Chao will also discuss the relationship between art, science, technology and design. She will then guide participants to create a space (of any scale, from insect to human) inspired by patterns found in nature.
Hopefully there will be more events inspired by specific ‘objects’. Meanwhile, on August 12, 2022, the VAG is hosting,
… in partnership with the Canadian Music Centre BC, New Music at the Gallery is a live concert series hosted by the Vancouver Art Gallery that features an array of musicians and composers who draw on contemporary art themes.
Highlighting a selection of twentieth- and twenty-first-century music compositions, this second concert, inspired by the exhibition The Imitation Game: Visual Culture in the Age of Artificial Intelligence, will spotlight The Illiac Suite (1957), the first piece ever written using only a computer, and Kaija Saariaho’s Terra Memoria (2006), which is in large part dependent on a computer-generated musical process.
…
It would be lovely if they could include an Ada Lovelace Day event. This is an international celebration held on October 11, 2022.