Category Archives: Music

New podcast—Mission: Interplanetary and Event Rap: a one-stop custom rap shop Kickstarter

I received two email notices recently, one from Dr. Andrew Maynard (Arizona State University; ASU) and one from Baba Brinkman (a Canadian rapper of science and other topics, now based in New York).

Mission: Interplanetary

I found a “Mission: Interplanetary— a podcast on the future of humans as a spacefaring species!” webpage (Link: https://collegeofglobalfutures.asu.edu/blog/2021/03/23/mission-interplanetary-redefining-how-we-talk-about-humans-in-space/) on the Arizona State University College of Global Futures website,

Back in January 2019 I got an email from my good friend and colleague Lance Gharavi with the title “Podcast brainstorming.” Two years on, we’ve just launched the Mission: Interplanetary podcast–and it’s amazing!

It’s been a long journey — especially with a global pandemic thrown in along the way — but on March 23 [2021], the Mission: Interplanetary podcast with Slate and ASU finally launched.

After two years of planning, many discussions, a bunch of dry runs, and lots (and by that I mean lots) of Zoom meetings, we are live!

As the team behind the podcast talked about and developed the ideas underpinning Mission: Interplanetary, we were interested in exploring new ways of thinking and talking about the future of humanity as a space-faring species as part of Arizona State University’s Interplanetary Initiative. We also wanted to go big with these conversations — really big!

And that is exactly what we’ve done in this partnership with Slate.

The guests we’re hosting, the conversations we have lined up, the issues we grapple with, are all literally out of this world. But don’t just take my word for it — listen to the first episode above with the incredible Lindy Elkins-Tanton talking about NASA’s mission to the asteroid 16 Psyche.

And this is just a taste of what’s to come over the next few weeks as we talk to an amazing lineup of guests.

So if you’re looking for a space podcast with a difference, and one that grapples with big questions around our space-based future, please do subscribe on your favorite podcast platform. And join me and the fabulous former NASA astronaut Cady Coleman as we explore the future of humanity in space.

See you there!

Slate’s webpage (Mission: Interplanetary; Link: https://slate.com/podcasts/mission-interplanetary) offers more details about the co-hosts and the programmes along with embedded podcasts,

Cady Coleman is a former NASA astronaut and Air Force colonel. She flew aboard the International Space Station on a six-month expedition as the lead science and robotics officer. A frequent speaker on space and STEM topics, Coleman is also a musician who’s played from space with the Chieftains and Ian Anderson of Jethro Tull.

Andrew Maynard is a scientist, author, and expert in risk innovation. His books include Films From the Future: The Technology and Morality of Sci-Fi Movies and Future Rising.

Latest Episodes

April 27, 2021

Murder in Space

What laws govern us when we leave Earth?

Happy listening. And, I apologize for the awkward links.

Event Rap Kickstarter

Baba Brinkman’s April 27, 2021 email notice has this to say about his latest venture,

Join the Movement, Get Rewards

My new Kickstarter campaign for Event Rap is live as of right now! Anyone who backs the project is helping to launch an exciting new company, actually a new kind of company, the first creator marketplace for rappers. Please take a few minutes to read the campaign description; I put a lot of love into it.

The campaign goal is to raise $26K in 30 days, an average of $2K per artist participating. If we succeed, this platform becomes a new income stream for independent artists during the pandemic and beyond. That’s the vision, and I’m asking for your help to share it and support it.

But instead of why it matters, let’s talk about what you get if you support the campaign!

$10-$50 gets you an advance copy of my new science rap album, Bright Future. I’m extremely proud of this record, which you can preview here, and Bright Future is also a prototype for Event Rap, since all ten of the songs were commissioned by people like you.

$250 – $500 gets you a Custom Rap Video written and produced by one of our artists, and you have twelve artists and infinite topics to choose from. This is an insanely low starting price for an original rap video from a seasoned professional, and it applies only during the Kickstarter. What can the video be about? Anything at all. You choose!

In case it’s helpful, here’s a guide I wrote entitled “How to Brief a Rapper.”

$750 – $1,500 gets you a live rap performance at your virtual event. This is also an amazingly low price, especially since you can choose to have the artist freestyle interactively with your audience, write and perform a custom rap live, or best of all compose a “Rap Up” summary of the event, written during the event, that the artist will perform as the grand finale.

That’s about as fresh and fun as rap gets.

The highest tiers, $3,000 – $5,000, bring the highest quality: a brand new custom-written, recorded, mixed and mastered studio track, or a studio track plus a full rap music video, with an exclusive beat and lyrics that amplify your message in the impactful, entertaining way that rap does best.

I know this higher price range isn’t for everyone, but check out some of the music videos our artists have made, and maybe you can think of a friend to send this to who has a budget and a worthy cause.

Okay, that’s it!

Those prices are in US dollars.

I gather at least one backer has given enough money to request a custom rap on cycling culture in the Netherlands.

The campaign runs for another 26 days. It has amassed over $8,400 CAD towards a goal of $32,008 CAD. (The site doesn’t show me the goal in USD although the pledges/rewards are listed in that currency.)

AI (Audeo) uses visual cues to play the right music

A February 4, 2021 news item on ScienceDaily highlights research from the University of Washington (state) about artificial intelligence, piano playing, and Audeo,

Anyone who’s been to a concert knows that something magical happens between the performers and their instruments. It transforms music from being just “notes on a page” to a satisfying experience.

A University of Washington team wondered if artificial intelligence could recreate that delight using only visual cues — a silent, top-down video of someone playing the piano. The researchers used machine learning to create a system, called Audeo, that creates audio from silent piano performances. When the group tested the music Audeo created with music-recognition apps, such as SoundHound, the apps correctly identified the piece Audeo played about 86% of the time. For comparison, these apps identified the piece in the audio tracks from the source videos 93% of the time.

The researchers presented Audeo Dec. 8 [2020] at the NeurIPS 2020 conference.

A February 4, 2021 University of Washington news release (also on EurekAlert), which originated the news item, offers more detail,

“To create music that sounds like it could be played in a musical performance was previously believed to be impossible,” said senior author Eli Shlizerman, an assistant professor in both the applied mathematics and the electrical and computer engineering departments. “An algorithm needs to figure out the cues, or ‘features,’ in the video frames that are related to generating music, and it needs to ‘imagine’ the sound that’s happening in between the video frames. It requires a system that is both precise and imaginative. The fact that we achieved music that sounded pretty good was a surprise.”

Audeo uses a series of steps to decode what’s happening in the video and then translate it into music. First, it has to detect which keys are pressed in each video frame to create a diagram over time. Then it needs to translate that diagram into something that a music synthesizer would actually recognize as a sound a piano would make. This second step cleans up the data and adds in more information, such as how strongly each key is pressed and for how long.
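
Neither step’s code is shown in the release, so here’s a toy Python sketch of my own of what that intermediate data could look like (every name and number is invented for illustration, not taken from Audeo): a per-frame record of pressed keys is converted into note events with onset, duration, and a rough velocity, which is approximately what a synthesizer needs:

```python
# Toy sketch (mine, not Audeo's code): turn per-frame key detections
# into note events (onset seconds, MIDI pitch, duration, velocity).
FPS = 25  # assumed video frame rate

def roll_to_events(roll, fps=FPS):
    """roll: list of frames, each a set of pressed piano-key indices (0-87)."""
    events, active = [], {}                       # active: key -> onset frame
    for t, pressed in enumerate(roll + [set()]):  # sentinel flushes held notes
        for key in pressed - set(active):         # newly pressed: note-on
            active[key] = t
        for key in set(active) - pressed:         # newly released: note-off
            onset = active.pop(key)
            duration = (t - onset) / fps
            velocity = min(127, 40 + int(20 * duration))  # crude loudness guess
            events.append((onset / fps, key + 21, duration, velocity))  # A0 = MIDI 21
    return events

# toy example: middle C (piano key 39 -> MIDI 60) held for ten frames
print(roll_to_events([{39}] * 10 + [set()] * 5))
```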

“If we attempt to synthesize music from the first step alone, we would find the quality of the music to be unsatisfactory,” Shlizerman said. “The second step is like how a teacher goes over a student composer’s music and helps enhance it.”

The researchers trained and tested the system using YouTube videos of the pianist Paul Barton. The training consisted of about 172,000 video frames of Barton playing music from well-known classical composers, such as Bach and Mozart. Then they tested Audeo with almost 19,000 frames of Barton playing different music from these composers and others, such as Scott Joplin.

Once Audeo has generated a transcript of the music, it’s time to give it to a synthesizer that can translate it into sound. Every synthesizer will make the music sound a little different — this is similar to changing the “instrument” setting on an electric keyboard. For this study, the researchers used two different synthesizers.

“Fluidsynth makes synthesizer piano sounds that we are familiar with. These are somewhat mechanical-sounding but pretty accurate,” Shlizerman said. “We also used PerfNet, a new AI synthesizer that generates richer and more expressive music. But it also generates more noise.”

Audeo was trained and tested only on Paul Barton’s piano videos. Future research is needed to see how well it could transcribe music for any musician or piano, Shlizerman said.

“The goal of this study was to see if artificial intelligence could generate music that was played by a pianist in a video recording — though we were not aiming to replicate Paul Barton because he is such a virtuoso,” Shlizerman said. “We hope that our study enables novel ways to interact with music. For example, one future application is that Audeo can be extended to a virtual piano with a camera recording just a person’s hands. Also, by placing a camera on top of a real piano, Audeo could potentially assist in new ways of teaching students how to play.”

The researchers have created videos featuring the live pianist and the AI pianist, which you will find embedded in the February 4, 2021 University of Washington news release.

Here’s a link to and a citation for the researchers’ paper,

Audeo: Generating music just from a video of pianist movements by Kun Su, Xiulong Liu, and E. Shlizerman. http://faculty.washington.edu/shlizee/audeo/?_ga=2.11972724.1912597934.1613414721-714686724.1612482256 (I had some difficulty creating a link and ended up with this unwieldy open access (?) version.)

The paper also appears in the proceedings for Advances in Neural Information Processing Systems 33 (NeurIPS 2020), edited by H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin. I had to scroll through many papers and all I found for ‘Audeo’ was an abstract.

It’s about sound: (1) Earable computing? (2) The end of the cacophony in hospitals?

I have two items, both concerning sound but in very different ways.

Phones in your ears

Researchers at the University of Illinois are working on smartphones you can put in your ears the way you would an earbud. The work is in its very earliest stages as they are trying to establish a new field of research. There is a proposed timeline,

Caption: Earable computing timeline, according to SyNRG. Credit: Romit Roy Choudhury, The Grainger College of Engineering

Here’s more from a December 2, 2020 University of Illinois Grainger College of Engineering news release (also on EurekAlert but published on December 15, 2020),

CSL’s [Coordinated Science Laboratory] Systems and Networking Research Group (SyNRG) is defining a new sub-area of mobile technology that they call “earable computing.” The team believes that earphones will be the next significant milestone in wearable devices, and that new hardware, software, and apps will all run on this platform.

“The leap from today’s earphones to ‘earables’ would mimic the transformation that we had seen from basic phones to smartphones,” said Romit Roy Choudhury, professor in electrical and computer engineering (ECE). “Today’s smartphones are hardly a calling device anymore, much like how tomorrow’s earables will hardly be a smartphone accessory.”

Instead, the group believes tomorrow’s earphones will continuously sense human behavior, run acoustic augmented reality, have Alexa and Siri whisper just-in-time information, track user motion and health, and offer seamless security, among many other capabilities.

The research questions that underlie earable computing draw from a wide range of fields, including sensing, signal processing, embedded systems, communications, and machine learning. The SyNRG team is on the forefront of developing new algorithms while also experimenting with them on real earphone platforms with live users.

Computer science PhD student Zhijian Yang and other members of the SyNRG group, including his fellow students Yu-Lin Wei and Liz Li, are leading the way. They have published a series of papers in this area, starting with one on the topic of hollow noise cancellation that was published at ACM SIGCOMM 2018. Recently, the group had three papers published at the 26th Annual International Conference on Mobile Computing and Networking (ACM MobiCom) on three different aspects of earables research: facial motion sensing, acoustic augmented reality, and voice localization for earphones.

In Ear-AR: Indoor Acoustic Augmented Reality on Earphones, the group looks at how smart earphone sensors can track human movement, and, depending on the user’s location, play 3D sounds in the ear.

“If you want to find a store in a mall,” says Zhijian, “the earphone could estimate the relative location of the store and play a 3D voice that simply says ‘follow me.’ In your ears, the sound would appear to come from the direction in which you should walk, as if it’s a voice escort.”
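
As an aside, the release doesn’t say how Ear-AR renders that 3D voice, but the simplest ingredient of the effect is direction-dependent stereo panning. Here is a minimal equal-power panning sketch of my own (real spatial audio would add head-related transfer functions and inter-ear time delays):

```python
import math

def pan_gains(azimuth_deg):
    """Equal-power stereo gains for a source at the given azimuth
    (-90 = hard left, 0 = centre, +90 = hard right). A crude stand-in
    for real 3D audio rendering."""
    theta = (azimuth_deg + 90) / 180 * (math.pi / 2)  # map to [0, pi/2]
    return math.cos(theta), math.sin(theta)           # (left, right) gains

# a "follow me" voice for a store 40 degrees to the user's right
left, right = pan_gains(40.0)
print(f"left gain {left:.2f}, right gain {right:.2f}")  # right channel louder
```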

The second paper, EarSense: Earphones as a Teeth Activity Sensor, looks at how earphones could sense facial and in-mouth activities such as teeth movements and taps, enabling a hands-free modality of communication to smartphones. Moreover, various medical conditions manifest in teeth chatter, and the proposed technology would make it possible to identify them by wearing earphones during the day. In the future, the team is planning to look into analyzing facial muscle movements and emotions with earphone sensors.

The third publication, Voice Localization Using Nearby Wall Reflections, investigates the use of algorithms to detect the direction of a sound. This means that if Alice and Bob are having a conversation, Bob’s earphones would be able to tune into the direction Alice’s voice is coming from.
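
The paper’s trick involves wall reflections, which the release doesn’t detail, but the textbook starting point for direction-finding is the time difference of arrival between two microphones: find the delay from the peak of the signals’ cross-correlation, then convert it to an angle. A toy numpy sketch (the mic spacing and sample rate are my assumptions):

```python
import numpy as np

SR = 16000        # sample rate (Hz), assumed
MIC_DIST = 0.15   # assumed spacing between the two earpiece mics (m)
C = 343.0         # speed of sound (m/s)

def direction_of_arrival(left, right, sr=SR):
    """Estimate the arrival angle (degrees; positive = source to the right)
    from the inter-mic delay at the peak of the cross-correlation."""
    corr = np.correlate(left, right, mode="full")
    lag = corr.argmax() - (len(right) - 1)  # samples; + means right mic heard it first
    sin_theta = np.clip((lag / sr) * C / MIC_DIST, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))

# toy test: a noise burst that reaches the right mic 3 samples early
rng = np.random.default_rng(0)
voice = rng.standard_normal(1024)
left, right = np.roll(voice, 3), voice  # left copy is delayed by 3 samples
print(f"estimated angle: {direction_of_arrival(left, right):.1f} degrees")
```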

“We’ve been working on mobile sensing and computing for 10 years,” said Wei. “We have a lot of experience to define this emerging landscape of earable computing.”

Haitham Hassanieh, assistant professor in ECE, is also involved in this research. The team has been funded by both NSF [US National Science Foundation] and NIH [National Institutes of Health], as well as companies like Nokia and Google. See more at the group’s Earable Computing website.

Noise hurts hospital caregivers and patients

A December 11, 2020 Canadian Broadcasting Corporation (CBC) article features one of the corporation’s Day 6 Radio programmes. This one was about proposed sound design in hospitals as imagined by musician Yoko Sen and, on a converging track, professor of applied psychology, Judy Edworthy,

As an ambient electronic musician, Yoko Sen spends much of her time building intricate, soothing soundscapes. 

But when she was hospitalized in 2012, she found herself immersed in a very different sound environment.

Already panicked by her health condition, she couldn’t tune out the harsh tones of the medical machinery in her hospital room.

Instead, she zeroed in on two machines — a patient monitor and a bed fall alarm. Their piercing tones had blended together to create a diminished fifth, a musical interval so offensive that it was banned in medieval churches.
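
An aside on the music theory: a diminished fifth, or tritone, spans six equal-tempered semitones, a frequency ratio of 2^(6/12) = √2 ≈ 1.414, which no small whole-number ratio approximates; hence the rough, unsettled sound. If you want to hear the clash for yourself, this little Python sketch of mine writes the two tones to a WAV file using only the standard library:

```python
import math, struct, wave

SR, DUR = 44100, 2.0
f1 = 440.0               # A4
f2 = f1 * 2 ** (6 / 12)  # six semitones up: ~622.25 Hz, the tritone

frames = b"".join(
    struct.pack("<h", int(12000 * (math.sin(2 * math.pi * f1 * n / SR)
                                   + math.sin(2 * math.pi * f2 * n / SR)) / 2))
    for n in range(int(SR * DUR))
)
with wave.open("tritone.wav", "wb") as w:
    w.setnchannels(1)   # mono
    w.setsampwidth(2)   # 16-bit samples
    w.setframerate(SR)
    w.writeframes(frames)
print(f"interval ratio: {f2 / f1:.4f}")  # ~1.4142, the square root of 2
```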

Sen went on to start Sen Sound, a Washington, D.C.-based social enterprise dedicated to improving the sound of hospitals. 

‘Alarms are ignored, missed’

The volume of noise in today’s hospitals isn’t just unpleasant. It can also put patients’ health at risk.

According to Judy Edworthy, a professor of applied psychology at the University of Plymouth, the sheer number of alarms going off each day can spark a sort of auditory burnout among doctors and nurses.

“Alarms are ignored, missed, or generally just not paid attention to,” says Edworthy.

In a hospital environment that’s also inundated with announcements from overhead speakers, ringing phones, trolleys, and all other manner of insidious background sound, it can be difficult for staff to accurately locate and differentiate between the alarms.

Worse yet, in many hospitals, many of the alarms ringing out across the ward are false. Studies have shown that as many as 99 per cent of clinical alarms are inaccurate.

The resulting problem has become so widespread that a term has been coined to describe it: alarm fatigue.

Raising the alarm

Sen’s company, launched in 2016, has partnered with hospitals and design incubators and even collaborates directly with medical device companies seeking to redesign their alarms.

Over the years, Sen has interviewed countless patients and hospital staff who share her frustration with noise. 

But when she first sat down with the engineers responsible for the devices’ design, she found that they tended to treat the sound of their devices as an “afterthought.” 

“When people first started to develop medical devices … people thought it was a good idea to have one or two sounds to demonstrate or to indicate when, let’s say for example, the patient’s temperature … exceeded some kind of range,” she [Edworthy] said.

“There wasn’t really any design put into this; it was just a sound that people thought would get your attention by being very loud and aversive and so on.”

Edworthy, who has spent decades studying medical alarm design, took things one step further this summer. In July, the International Standards Organization approved a new set of alarm designs, created by Edworthy, that mimic the natural hospital environment.

The standards, which are accepted by Health Canada, include an electronic heartbeat sound for alarms related to cardiac issues; and a rattling pillbox for drug administration. 

Her [Sen’s] team continues to work with companies to improve the sound of existing medical devices. But she has also begun to think more deeply about the long-term future of hospital sound — especially as it relates to the end-of-life experience.

“A study shows that hearing can be the last sense to go when we die,” says Sen. 

“It’s really beyond upsetting to think that many people end up dying in acute care hospitals and there are all these medical devices.”

As part of her interviews with patients, Sen has asked what sounds they would most like to hear at the end of their lives — and she discovered a common theme in their responses.

“I asked this question in many different countries, but they are all sounds that symbolize life,” said Sen. 

“Sounds of nature, sound of water, voices of loved ones. It’s all the sounds of life that people say they want to hear.”

As the pandemic continues to affect hospitals around the world, those efforts have taken on a new resonance — and Sen hopes the current crisis might serve as an opportunity to help usher in a more healing soundscape.

“My own health crisis almost gave me a new pathway in life,” she said.

There’s an embedded audio file from the Day 6 radio programme featuring this material on sound design and hospitals, and there’s an embedded video of Sen delivering a talk about her work, in the December 11, 2020 Canadian Broadcasting Corporation (CBC) article.

Sen and Edworthy give hope for a less noisy future and better care in hospitals for the living and the dying. Here’s a link to Sen Sound.

Toronto’s ArtSci Salon and its Kaleidoscopic Imaginations on Oct 27, 2020 – 7:30 pm (EDT)

The ArtSci Salon is getting quite active these days. Here’s the latest from an Oct. 22, 2020 ArtSci Salon announcement (received via email), which can also be viewed on their Kaleidoscope event page,

Kaleidoscopic Imaginations

Performing togetherness in empty spaces

An experimental  collaboration between the ArtSci Salon, the Digital Dramaturgy Lab_squared/ DDL2 and Sensorium: Centre for Digital Arts and Technology, York University (Toronto, Ontario, Canada)

Tuesday, October 27, 2020

7:30 pm [EDT]

Join our evening of live-streamed, multi-media  performances, following a kaleidoscopic dramaturgy of complexity discourses as inspired by computational complexity theory gatherings.

We are presenting installations, site-specific artistic interventions and media experiments, featuring networked audio and video, dance and performances as we repopulate spaces – The Fields Institute and surroundings – forced to lie empty due to the pandemic. Respecting physical distance and new sanitation and safety rules can be challenging, but it can also open up new ideas and opportunities.

NOTE: DDL2 contributions to this event are sourced or inspired by their recent kaleidoscopic performance “Rattling the Curve – Paradoxical ECODATA performances of A/I (artistic intelligence), and facial recognition of humans and trees”

Virtual space/live streaming concept and design: DDL2  Antje Budde, Karyn McCallum and Don Sinclair

Virtual space and streaming pilot: Don Sinclair

Here are specific programme details (from the announcement),

  1. Signing the Virus – Video (2 min.)
    Collaborators: DDL2 Antje Budde, Felipe Cervera, Grace Whiskin
  2. Niimi II – Performance and outdoor video projection (15 min.)
    (Niimi means in Anishinaabemowin: s/he dances) Collaborators: DDL2 Candy Blair, Antje Budde, Jill Carter, Lars Crosby, Nina Czegledy, Dave Kemp
  3. Oracle Jane (Scene 2) – A partial playreading on the politics of AI (30 min.)
    Playwright: DDL2 Oracle. Collaborators: DDL2 Antje Budde, Frans Robinow, George Bwanika Seremba, Amy Wong and AI ethics consultant Vicki Zhang
  4. Vriksha/Tree – Dance video and outdoor projection (8 min.)
    Collaborators: DDL2 Antje Budde, Lars Crosby, Astad Deboo, Dave Kemp, Amit Kumar
  5. Facial Recognition – Performing a Plate Camera from a Distance (3 min.)
    Collaborators: DDL2 Antje Budde, Jill Carter, Felipe Cervera, Nina Czegledy, Karyn McCallum, Lars Crosby, Martin Kulinna, Montgomery C. Martin, George Bwanika Seremba, Don Sinclair, Heike Sommer
  6. Cutting Edge – Growing Data (6 min.)
    A performance by DDL2 Antje Budde
  7. “void * ambience” – Architectural and instrumental acoustics, projection mapping. Concept: Sensorium: The Centre for Digital Art and Technology, York University. Collaborators: Michael Palumbo, Ilze Briede [Kavi], Debashis Sinha, Joel Ong

This performance is part of a series (from the announcement),

These three performances are part of Boundary-Crossings: Multiscalar Entanglements in Art, Science and Society, a public Outreach program supported by the Fiends [sic] Institute for Research in Mathematical Science. Boundary Crossings is a series exploring how the notion of boundaries can be transcended and dissolved in the arts and the humanities, the biological and the mathematical sciences, as well as human geography and political economy. Boundaries are used to establish delimitations among disciplines; to discriminate between the human and the non-human (body and technologies, body and bacteria); and to indicate physical and/or artificial boundaries, separating geographical areas and nation states. Our goal is to cross these boundaries by proposing new narratives to show how the distinctions, and the barriers that science, technology, society and the state have created can in fact be re-interpreted as porous and woven together.

This event is curated and produced by ArtSci Salon; Digital Dramaturgy Lab_squared/ DDL2; Sensorium: Centre for Digital Arts and Technology, York University; and Ryerson University; it is supported by The Fields Institute for Research in Mathematical Sciences

Streaming Link 

Finally, the announcement includes biographical information about all of the ‘boundary-crossers’,

Candy Blair (Tkaron:to/Toronto)
Candy Blair/Otsίkh:èta (they/them) is a mixed First Nations/European, 2-spirit interdisciplinary visual and performing artist from Tio’tía:ke – where the group split (“Montreal”) in Québec.

While continuing their work as an artist they also finished their Creative Arts, Literature, and Languages program at Marianopolis College (cégep), their 1st year in the Theatre program at York University, and their 3rd year Acting Conservatory Program at the Centre For Indigenous Theatre in Tsí Tkaròn:to – where the trees stand in water (“Toronto”).

Some of Candy’s notable performances are Jill Carter’s Encounters at the Edge of the Woods, exploring a range of issues with colonization; Ange Loft’s project Talking Treaties, discussing the treaties of the “Toronto” purchase; Cheri Maracle’s The Story of Six Nations, exploring Six Nations’ origin story through dance/combat choreography; and several other performances exploring various topics around Indigenous language, land, and cultural restoration through various mediums such as dance, modelling, painting, theatre, directing, song, etc. As an activist and soon-to-be entrepreneur, Candy also enjoys teaching workshops promoting Indigenous resurgence such as Indigenous hand drumming, food sovereignty, beading, medicine knowledge, etc.

Working with collectives like Weave and Mend, they were responsible for the design, land purification, and installation process of the four medicine plots and a community space with their 3 other members. Candy aspires to continue exploring ways of decolonization through healthy traditional practices from their mixed background and the arts in the hopes of eventually supporting Indigenous relations worldwide.

Antje Budde
Antje Budde is a conceptual, queer-feminist, interdisciplinary experimental scholar-artist and an Associate Professor of Theatre Studies, Cultural Communication and Modern Chinese Studies at the Centre for Drama, Theatre and Performance Studies, University of Toronto. Antje has created multi-disciplinary artistic works in Germany, China and Canada and works tri-lingually in German, English and Mandarin. She is the founder of a number of queerly feminist performing art projects including most recently the (DDL)2 or (Digital Dramaturgy Lab)Squared – a platform for experimental explorations of digital culture, creative labor, integration of arts and science, and technology in performance. She is interested in the intersections of natural sciences, the arts, engineering and computer science.

Roberta Buiani
Roberta Buiani (MA; PhD York University) is the Artistic Director of the ArtSci Salon at the Fields Institute for Research in Mathematical Sciences (Toronto). Her artistic work has travelled to art festivals (Transmediale; Hemispheric Institute Encuentro; Brazil), community centres and galleries (the Free Gallery Toronto; Immigrant Movement International, Queens; Myseum of Toronto), and science institutions (RPI; the Fields Institute). Her writing has appeared in Space and Culture, Cultural Studies, and The Canadian Journal of Communication, among others. With the ArtSci Salon she has launched a series of experiments in “squatting academia”, by re-populating abandoned spaces and cabinets across university campuses with SciArt installations.

Currently, she is a research associate at the Centre for Feminist Research and a Scholar in Residence at Sensorium: Centre for Digital Arts and Technology at York University [Toronto, Ontario, Canada].

Jill Carter (Tkaron:to/ Toronto)
Jill (Anishinaabe/Ashkenazi) is a theatre practitioner and researcher, currently cross appointed to the Centre for Drama, Theatre and Performance Studies; the Transitional Year Programme; and Indigenous Studies at the University of Toronto. She works with many members of Tkaron:to’s Indigenous theatre community to support the development of new works and to disseminate artistic objectives, process, and outcomes through community-driven research projects. Her scholarly research, creative projects, and activism are built upon ongoing relationships with Indigenous Elders, Artists and Activists, positioning her as witness to, participant in, and disseminator of oral histories that speak to the application of Indigenous aesthetic principles and traditional knowledge systems to contemporary performance. The research questions she pursues revolve around the mechanics of story creation, the processes of delivery and the manufacture of affect.

More recently, she has concentrated upon Indigenous pedagogical models for the rehearsal studio and the lecture hall; the application of Indigenous [insurgent] research methods within performance studies; the politics of land acknowledgements; and land-based dramaturgies/activations/interventions.

Jill also works as a researcher and tour guide with First Story Toronto; facilitates Land Acknowledgement, Devising, and Land-based Dramaturgy Workshops for theatre makers in this city; and performs with the Talking Treaties Collective (Jumblies Theatre, Toronto).

In September 2019, Jill directed Encounters at the Edge of the Woods. This was a devised show, featuring Indigenous and Settler voices, and it opened Hart House Theatre’s 100th season; it is the first instance of Indigenous presence on Hart House Theatre’s stage in its 100 years of existence as the cradle for Canadian theatre.

Nina Czegledy
(Toronto) artist, curator, educator, works internationally on collaborative art, science & technology projects. The changing perception of the human body and its environment as well as paradigm shifts in the arts inform her projects. She has exhibited and published widely, won awards for her artwork, and has initiated, led, and participated in workshops, forums, and festivals at international events worldwide.

Astad Deboo (Mumbai, India)
Astad Deboo is a contemporary dancer and choreographer who employs his training in the Indian classical dance forms of Kathak and Kathakali to create a dance form that is unique to him. He has become a pioneer of modern dance in India. Astad describes his style as “contemporary in vocabulary and traditional in restraints.” Throughout his long and illustrious career, he has worked with various prominent performers such as Pina Bausch, Alison Becker Chase and Pink Floyd and performed in many parts of the world. He has been awarded the Sangeet Natak Akademi Award (1996) and the Padma Shri (2007), awarded by the Government of India. In January 2005, along with 12 young women with hearing impairment supported by the Astad Deboo Dance Foundation, he performed at the 20th Annual Deaf Olympics in Melbourne, Australia. Astad has a long record of working with disadvantaged youth.

Ilze Briede [Kavi]
Ilze Briede [artist name: Kavi] is a Latvian/Canadian artist and researcher with broad and diverse interests. Her artistic practice, a hybrid of video, image and object making, investigates the phenomenon of perception and the constraints and boundaries between the senses and knowing. Kavi is currently pursuing a PhD degree in Digital Media at York University with a research focus on computational creativity and generative art. She sees computer-generated systems and algorithms as a potentiality for co-creation and collaboration between human and machine. Kavi has previously worked and exhibited with Fashion Art Toronto, Kensington Market Art Fair, Toronto Burlesque Festival, Nuit Blanche, Sidewalk Toronto and the Toronto Symphony Orchestra.

Dave Kemp
Dave Kemp is a visual artist whose practice looks at the intersections and interactions between art, science and technology: particularly at how these fields shape our perception and understanding of the world. His artworks have been exhibited widely at venues such as the McIntosh Gallery, the Agnes Etherington Art Centre, the Art Gallery of Mississauga, the Ontario Science Centre, York Quay Gallery, InterAccess, Modern Fuel Artist-Run Centre, and as part of the Switch video festival in Nenagh, Ireland. His works are also included in the permanent collections of the Agnes Etherington Art Centre and the Canada Council Art Bank.

Stephen Morris
Stephen Morris is Professor of experimental non-linear physics in the Department of Physics at the University of Toronto. He is the Scientific Director of the ArtSci Salon at the Fields Institute for Research in Mathematical Sciences. He often collaborates with artists and has himself performed and produced art involving his own scientific instruments and experiments in non-linear physics and pattern formation.

Michael Palumbo
Michael Palumbo (MA, BFA) is an electroacoustic music improviser, coder, and researcher. His PhD research spans distributed creativity and version control systems, and is expressed through “git show”, a distributed electroacoustic music composition and design experiment, and “Mischmasch”, a collaborative modular synthesizer in virtual reality. He studies with Dr. Doug Van Nort as a researcher in the Distributed Performance and Sensorial Immersion Lab, and Dr. Graham Wakefield at the Alice Lab for Computational Worldmaking. His works have been presented internationally, including at ISEA, AES, NIME, Expo ’74, TIES, and the Network Music Festival. He performs regularly with a modular synthesizer, runs the Exit Points electroacoustic improvisation series, and is an enthusiastic gardener and yoga practitioner.

Joel Ong (PhD, Digital Arts and Experimental Media (DXARTS), University of Washington)

Joel Ong is a media artist whose works connect scientific and artistic approaches to the environment, particularly with respect to sound and physical space.  Professor Ong’s work explores the way objects and spaces can function as repositories of ‘frozen sound’, and in elucidating these, he is interested in creating what systems theorist Jack Burnham (1968) refers to as “art (that) does not reside in material entities, but in relations between people and between people and the components of their environment”.

A serial collaborator, Professor Ong is invested in the broader scope of Art-Science collaborations and is engaged constantly in the discourses and processes that facilitate viewing these two polemical disciplines on similar ground.  His graduate interdisciplinary work in nanotechnology and sound was conducted at SymbioticA, the Center of Excellence for Biological Arts at the University of Western Australia and supervised by BioArt pioneers and TCA (The Tissue Culture and Art Project) artists Dr Ionat Zurr and Oron Catts.

George Bwanika Seremba
George Bwanika Seremba is an actor, playwright and scholar. He was born in Uganda and holds an M.Phil. and a Ph.D. in Theatre Studies from Trinity College Dublin. In 1980, having barely survived a botched execution by the Military Intelligence, he fled into exile, resettling in Canada (1983). He has performed in numerous plays, including his own, “Come Good Rain”, which was awarded a Dora award (1993). In addition, he published a number of edited play collections, including “Beyond the pale: dramatic writing from First Nations writers & writers of colour”, co-edited with Yvette Nolan and Betty Quan (1996).

George was nominated for the Irish Times’ Best Actor award for his role in Dublin’s Calypso Theatre production of Athol Fugard’s “Master Harold and the boys”. In addition to theatre, he has performed in several movies and on television. His doctoral thesis (2008), entitled “Robert Serumaga and the Golden Age of Uganda’s Theatre (1968-1978): (Solipsism, Activism, Innovation)”, will be published as a monograph by CSP (U.K.) in 2021.

Don Sinclair (Toronto)
Don is Associate Professor in the Department of Computational Arts at York University. His creative research areas include interactive performance, projections for dance, sound art, web and data art, cycling art, sustainability, and choral singing, most often using code and programming. Don is particularly interested in processes of artistic creation that integrate digital creative coding-based practices with performance in dance and theatre. As well, he is an enthusiastic cyclist.

Debashis Sinha
Driven by a deep commitment to the primacy of sound in creative expression, Debashis Sinha has realized projects in radiophonic art, music, sound art, audiovisual performance, theatre, dance, and music across Canada and internationally. Sound design and composition credits include numerous works for Peggy Baker Dance Projects and productions with Canada’s premier theatre companies including The Stratford Festival, Soulpepper, Volcano Theatre, Young People’s Theatre, Project Humanity, The Theatre Centre, Nightwood Theatre, Why Not Theatre, MTC Warehouse and Necessary Angel. His live sound practice on the concert stage has led to appearances at MUTEK Montreal, MUTEK Japan, the Guelph Jazz Festival, the Banff Centre, The Music Gallery, and other venues. Sinha teaches sound design at York University and the National Theatre School, and is currently working on a multi-part audio/performance work incorporating machine learning and AI funded by the Canada Council for the Arts.

Vicki (Jingjing) Zhang (Toronto)
Vicki Zhang is a faculty member at University of Toronto’s statistics department. She is the author of Uncalculated Risks (Canadian Scholars’ Press, 2014). She is also a playwright, whose plays have been produced or stage read in various festivals and venues in Canada including Toronto’s New Ideas Festival, Winnipeg’s FemFest, Hamilton Fringe Festival, Ergo Pink Fest, InspiraTO festival, Toronto’s Festival of Original Theatre (FOOT), Asper Center for Theatre and Film, Canadian Museum for Human Rights, Cultural Pluralism in the Arts Movement Ontario (CPAMO), and the Canadian Play Thing. She has also written essays and short fiction for Rookie Magazine and Thread.

If you can’t attend this Oct. 27, 2020 event, there’s still the Oct. 29, 2020 Boundary-Crossings event: Beauty Kit (see my Oct. 12, 2020 posting for more).

As for Kaleidoscopic Imaginations, you can access the Streaming Link on Oct. 27, 2020 at 7:30 pm EDT (4:30 pm PDT).

Music for Incandescent Events: Skyview, Here (version 4) 25 October – 31 October 2020

This October 20, 2020 notice from Toronto’s ArtSci Salon (received via email) features a DIY musical event for dawn and dusk from Oct. 25 – 31, 2020; it is a Canada-wide event series,

Dear media-arts and music organizations, arts educators & adventurous radio programmers, kindly distribute this invitation to your members, students, audiences and colleagues.

You’re invited to a free week-long dawn & dusk audio-viewing event at a location of your choice:

Sunday October 25 – Saturday October 31

Music for Incandescent Events: Skyview, Here (version 4)
Audio for skyscapes around sunset and sunrise. Livestream & downloadable for portable sky-viewing adventures (variable times).
By Sarah Peebles.
Presented by the Canadian Music Centre Ontario Chapter.

Day 1 event page, information & schedule overview – CMC

https://on.cmccanada.org/event/music-for-incandescent-events/

CMC Calendar with day by day links to each day’s event page

https://on.cmccanada.org/events/

Special thanks to CMC – Ontario & Matthew Fava for presenting and hosting this installation.

I hope you enjoy the experience!

I found more information on the event, which clarifies how people in Ontario and in the rest of Canada can participate in the Canadian Music Centre’s latest Incandescent Event,

South-Western Ontario, online | We invite audiences to tune-in during scheduled audio streams on Facebook during the week of October 25 occurring roughly at dawn and dusk for those based in South-Western Ontario. Streaming will coincide with the shifting light around sunrise/sunset within a broad zone ranging approximately from Peterborough in the East, to London in the West, and Barrie in the North. Tune in as the sky begins to change colour.

Across Canada, offline | Audiences are also invited to download the dawn-dusk audio files for this work in order to listen offline at a self-directed time based on their location. We encourage you to creatively locate yourself off-line via bicycle, ferry, boat, walking, or driving with your portable listening device.

Instructions for experiencing the pieces | Place yourself with a skyscape view of your choice, indoors or outdoors. Adjust your soundscape to your liking (e.g. open windows, sit under a tree, near waves or find a reflective surface, etc.). Listen online live or via audiofile (download) during sunset and/or sunrise, using good quality loudspeakers, headphones or earbuds.

About the Piece | Music for Incandescent Events meditates on our perception of time, memory and place, creating a space for contemplation, for awareness of one’s physical environment, and for exploration of consciousness in the moment.

Each iteration of Incandescent Events combines different improvised short melodies and tones performed on a slightly de-tuned shô (Japanese mouth-organ), re-recorded several octaves lower than original pitch. I recorded these melodies at very close range while sitting near a reflective wall, catching rich beat patterns and sum/difference tones. These and additional frequencies beyond the range of human hearing transform into unexpected, complex audio events at slower play-back speeds, several octaves down.
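
The arithmetic behind that transformation is simple: playing audio back at rate r scales every frequency by r, so each halving of speed drops the pitch an octave and pulls content from above the range of human hearing down into the audible band. A quick numpy illustration of the principle (my sketch, nothing to do with Peebles’ actual production chain):

```python
import numpy as np

def slow_down(samples, octaves_down):
    """Naive variable-speed playback: stretch the waveform by 2**octaves_down,
    lowering every frequency by that many octaves (f' = f / 2**k)."""
    factor = 2 ** octaves_down
    old_idx = np.arange(len(samples))
    new_idx = np.linspace(0, len(samples) - 1, int(len(samples) * factor))
    return np.interp(new_idx, old_idx, samples)

# toy check: a 1760 Hz tone slowed by 3 octaves should read as 220 Hz
sr = 44100
t = np.arange(sr) / sr                         # one second of audio
tone = np.sin(2 * np.pi * 1760 * t)
low = slow_down(tone, 3)                       # 8x longer, 3 octaves lower
crossings = np.sum(np.diff(np.signbit(low)))   # ~2 zero crossings per cycle
print(f"approx frequency: {crossings / 2 / (len(low) / sr):.0f} Hz")
```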

Music for Incandescent Events version 1 (2002) is published on Somethings #1 (Last Visible Dog). Version 2 (installation) was commissioned by wade Collective for wade project in June 2004, for Gibraltar Point, Toronto Islands; it showed at the McLuhan International Festival of the Future’s “Scanning Nature” exhibition in October 2004 (DeLeon White Gallery rooftop deck). Version no. 3, Colour Temperature Event (for Gabriola Island, Berry Point, 2017), was curated for The QR Anthology.

You can find the schedule for the streaming events (Oct. 25 -31, 2020) and a link to the music downloads at the Canadian Music Centre’s Music for Incandescent Events: Skyview, Here (version 4) event page.

Concerns about Zoom? Call for expressions of interest in “Zoom Obscura,” creative interventions for a data ethics of video conferencing

Have you wondered about Zoom video conferencing and all that data being made available? Perhaps questioned ethical issues in addition to those associated with data security? If so, and you’d like to come up with a creative intervention that delves beyond encryption issues, there’s Zoom Obscura (on the creativeinformatics.org website),

CI [Creative Informatics] researchers Pip Thornton, Chris Elsden and Chris Speed were recently awarded funding from the Human Data Interaction Network (HDI+) Ethics & Data competition. Collaborating with researchers from Durham [Durham University] and KCL [King’s College London], the Zoom Obscura project aims to investigate creative interventions for a data ethics of video conferencing beyond encryption.

The COVID-19 pandemic has gifted video conferencing companies, such as Zoom, with a vast amount of economically valuable and sensitive data such as our facial and voice biometrics, backgrounds and chat scripts. Before the pandemic, this ‘new normal’ would be subject to scrutiny, scepticism and critique. Yet, the urgent need for remote working and socialising left us with little choice but to engage with these potentially exploitative platforms.

While much of the narrative around data security revolves around technological ‘solutions’ such as encryption, we think there are other – more creative – ways to push back against the systems of digital capitalism that continue to encroach on our everyday lives.

As part of this HDI-funded project, we seek artists, hackers and creative technologists who are interested in experimenting with creative methods to join us in a series of online workshops that will explore how to restore some control and agency in how we can be seen and heard in these newly ubiquitous online spaces. Through three half-day workshops held remotely, we will bring artists and technicians together to ideate, prototype, and exhibit various interventions into the rapidly normalising culture of video-calling in ways that do not compromise our privacy and limit the sharing of our data. We invite interventions that begin at any stage of the video-calling process – from analogue obfuscation, to software manipulation or camera trickery.

Selected artists/collectives will receive a £1000 commission to take part in and contribute to three workshops, in order to design and produce one or more individual or collaborative creative interventions developed from the workshops. Participants will receive both technical support from a creative technologist and support from a curator for dissemination online and in Edinburgh and London.

If you are an artist / technologist interested in disrupting/subverting the pandemic-inspired digital status quo, please send expressions of interest of no more than 500 words to pip.thornton@ed.ac.uk , andrew.dwyer@bristol.ac.uk, celsden@ed.ac.uk and michael.duggan@kcl.ac.uk by 8th October 2020. We don’t expect fully formed projects (these will come in the workshop sessions), but please indicate any broad ideas and thoughts you have, and highlight how your past and present practice might be a good fit for the project and its aims.

The Zoom Obscura project is in collaboration with Tinderbox Lab in Edinburgh and Hannah Redler-Hawes (independent curator and codirector of the Data as Culture art programme at the Open Data Institute in London). Outputs from the project will be hosted and exhibited via the Data as Culture archive site and at a Creative Informatics event at the University of Edinburgh.

Are folks outside the UK eligible?

I asked Dr. Pip Thornton about eligibility and she kindly noted this in her Sept. 25, 2020 tweet (reply copied from my Twitter feed),

Open to all, but workshop timings may be more amenable to UK working hours. Having said that, we won’t know what the critical mass is until we review all the applications, so please do apply if you’re interested!

Who are the members of the Zoom Obscura project team?

From the Zoom Obscura webpage (on the creativeinformatics.org website),

Dr. Pip Thornton is a post-doctoral research associate in Creative Informatics at the University of Edinburgh, having recently gained her PhD in Geopolitics and Cybersecurity from Royal Holloway, University of London. Her thesis, Language in the Age of Algorithmic Reproduction: A Critique of Linguistic Capitalism, included theoretical, political and artistic critiques of Google’s search and advertising platforms. She has presented in a variety of venues including the Science Museum, the Alan Turing Institute and transmediale. Her work has featured in WIRED UK and New Scientist, and a collection from her {poem}.py intervention has been displayed at the Open Data Institute in London. Her Edinburgh Futures Institute (EFI) funded installation Newspeak 2019, shown at the Edinburgh Festival Fringe (2019), was recently awarded an honourable mention in the Surveillance Studies Network biennial art competition (2020) and is shortlisted for the 2020 Lumen Prize for art and technology in the AI category.

Dr. Andrew Dwyer is a research associate in the University of Bristol’s Cyber Security Group. Andrew gained a DPhil in Cyber Security at the University of Oxford, where he studied and questioned the role of malware – commonly known as computational viruses and worms – through its analysis, detection, and translation into international politics and its intersection with multiple ecologies. In his doctoral thesis – Malware Ecologies: A Politics of Cybersecurity – he argued for a re-evaluation of the role of computational actors in the production and negotiation of security, and what this means for human-centred notions of weapons and warfare. Previously, Andrew has been a visiting fellow at the German ‘Dynamics of Security’ collaborative research centre based between Philipps-Universität Marburg, Justus-Liebig-Universität Gießen and the Herder Institute, Marburg and is a Research Affiliate at the Centre for Technology and Global Affairs at the University of Oxford. He will soon be starting a 3-year Addison Wheeler research fellowship in the Department of Geography at Durham University.

Dr Chris Elsden is a research associate in Design Informatics at the University of Edinburgh. Chris is primarily working on the AHRC Creative Informatics project, with specific interests in FinTech and livestreaming within the Creative Industries. He is an HCI researcher, with a background in sociology, and expertise in the human experience of a data-driven life. Using and developing innovative design research methods, his work undertakes diverse, qualitative and often speculative engagements with participants to investigate emerging relationships with technology – particularly data-driven tools and financial technologies. Chris gained his PhD in Computer Science at Open Lab, Newcastle University in 2018, and in 2019 was a recipient of a SIGCHI Outstanding Dissertation Award.

Dr Mike Duggan is a Teaching Fellow in Digital Cultures in the Department of Digital Humanities at King’s College London. He was awarded a PhD in Cultural Geography from Royal Holloway, University of London in 2017, which examined everyday digital mapping practices. This project was co-funded by the Ordnance Survey and the EPSRC. He is a member of the Living Maps network, where he is an editor for the ‘navigations’ section and previously curated the seminar series. Mike’s research is broadly interested in the digital and cultural geographies that emerge from the intersections between everyday life and digital technology.

Professor Chris Speed is Chair of Design Informatics at the University of Edinburgh where his research focuses upon the Network Society, Digital Art and Technology, and The Internet of Things. Chris has sustained a critical enquiry into how network technology can engage with the fields of art, design and social experience through a variety of international digital art exhibitions, funded research projects, books, journals and conferences. At present Chris is working on funded projects that engage with the social opportunities of crypto-currencies, an internet of toilet roll holders, and a persistent argument that chickens are actually robots. Chris is co-editor of the journal Ubiquity and co-directs the Design Informatics Research Centre that is home to a combination of researchers working across the fields of interaction design, temporal design, anthropology, software engineering and digital architecture, as well as the PhD, MA/MFA and MSc and Advanced MSc programmes.

David Chatting is a designer and technologist who works in software and hardware to explore the impact of emerging technologies in everyday lives. He is currently a PhD student in the Department of Design at Goldsmiths – University of London, a Visiting Researcher at Newcastle University’s Open Lab and has his own design practice. Previously he was a Senior Researcher at BT’s Broadband Applications Research Centre. David has a Master’s degree in Design Interactions from the Royal College of Art (2012) and a Bachelor’s degree in Computer Science from the University of Birmingham (2000). He has published papers and filed patents in the fields of HCI, psychology, tangible interfaces, computer vision and computer graphics.

Hannah Redler-Hawes (Data as Culture) is an independent curator and codirector of the Data as Culture art programme at the Open Data Institute in London. Hannah specialises in emerging artistic practice within the fields of art, science and technology, with an interest in participatory process. She has previously developed projects for museums, galleries, corporate contexts, digital space and the public realm including the Institute of Physics, Tate Modern, The Lowry, the Natural History Museum, FACT Liverpool, the Digital Catapult and Science Gallery London, and has provided specialist consultancy services to the Wellcome Collection, Discover South Kensington and the Horniman Museum. Hannah enjoys projects that redraw boundaries between different disciplines. Current research is around addiction, open data, networked culture and new forms of programming beyond the gallery.

Tinderbox Collective: From grass-roots youth work to award-winning music productions, Tinderbox is building a vibrant and eclectic community of young musicians and artists in Scotland. We have a number of programmes that cross over with each other and come together wherever possible. They are open to children and young people aged 10 – 25, from complete beginners to young professionals and all levels in between. Tinderbox Lab is our digital arts programme and shared studio maker-space in Edinburgh that brings together artists across disciplines with an interest in digital media and interactive technologies. It is a new programme that started development in 2019, leading to projects and events such as Room to Play, a 10-week course for emerging artists led by Yann Seznec; various guest artist talks & workshops; digital arts exhibitions at the V&A Dundee & Edinburgh Festival of Sound; digital/electronics workshop design/development for children & young people; and research included as part of the Electronic Visualisation and the Arts (EVA) London 2019 conference.

Jack Nissan (Tinderbox) is the founder and director of the Tinderbox Collective. In 2012/13, Jack took part in a fellowship programme called International Creative Entrepreneurs and spent several months working with community activists and social enterprises in China, primarily with families and communities on the outskirts of Beijing with an organisation called Hua Dan. Following this, he set up a number of international exchanges and cross-cultural productions that formed the basis for Tinderbox’s Journey of a Thousand Wings programme, a project bringing together artists and community projects from different countries. He is also a co-director and founding member of Hidden Door, a volunteer-run multi-arts festival, and has won a number of awards for his work across creative and social enterprise sectors. He has been invited to take part in several steering committees and advisory roles, including for Creative Scotland’s new cross-cutting theme on Creative Learning and Artworks Scotland’s peer-networks for artists working in participatory settings. Previously, Jack worked as a researcher in psychology and ageing for the multidisciplinary MRC Centre for Cognitive Ageing and Cognitive Epidemiology, specialising in areas of neuropsychology and memory.

Luci Holland (Tinderbox) is a Scottish (Edinburgh-based) composer, sound artist and radio presenter who composes and produces music and audiovisual art for film, games and concert. As a games music composer Luci wrote the original dynamic/responsive music for Blazing Griffin‘s 2018 release Murderous Pursuits, and has composed and arranged for numerous video game music collaborations, such as orchestrating and producing an arrangement of Jessica Curry‘s Disappearing with label Materia Collective’s bespoke cover album Pattern: An Homage to Everybody’s Gone to the Rapture. Currently she has also been composing custom game music tracks for Skyrim mod Lordbound and a variety of other film and game music projects. Luci also builds and designs interactive sonic art installations for festivals and venues (Refraction (Cryptic), CITADEL (Hidden Door)); and in 2019 Luci joined new classical music station Scala Radio to present The Console, a weekly one-hour show dedicated to celebrating great music in games. Luci also works as a musical director and composer with the youth music charity Tinderbox Project on their Orchestra & Digital Arts programmes; classical music organisation Absolute Classics; and occasionally coordinates musical experiments and productions with her music-for-media band Mantra Sound.

Good luck to all who submit an expression of interest and good luck to Dr. Thornton (I see from her bio that she’s been shortlisted for the 2020 Lumen Prize).

Live music by teleportation? Catch up. It’s already happened.

Dr. Alexis Kirke first graced this blog about four years ago, in a July 8, 2016 posting titled, Cornwall (UK) connects with University of Southern California for performance by a quantum computer (D-Wave) and mezzo soprano Juliette Pochin.

Kirke now returns with a study showing how teleportation helped to create a live performance piece, from a July 2, 2020 news item on ScienceDaily,

Teleportation is most commonly the stuff of science fiction and, for many, would conjure up the immortal phrase “Beam me up, Scotty.”

However, a new study has described how its status in science fact could actually be employed as another, and perhaps unlikely, form of entertainment — live music.

Dr Alexis Kirke, Senior Research Fellow in the Interdisciplinary Centre for Computer Music Research at the University of Plymouth (UK), has for the first time shown that a human musician can communicate directly with a quantum computer via teleportation.

The result is a high-tech jamming session, through which a blend of live human and computer-generated sounds come together to create a unique performance piece.

A July 2, 2020 Plymouth University press release (also on EurekAlert), which originated the news item, offers more detail about this latest work along with some information about the 2016 performance and how it all provides insight into how quantum computing might function in the future,

Speaking about the study, published in the current issue of the Journal of New Music Research, Dr Kirke said: “The world is racing to build the first practical and powerful quantum computers, and whoever succeeds first will have a scientific and military advantage because of the extreme computing power of these machines. This research shows for the first time that this much-vaunted advantage can also be helpful in the world of making and performing music. No other work has shown this previously in the arts, and it demonstrates that quantum power is something everyone can appreciate and enjoy.”

Quantum teleportation is the ability to instantaneously transmit quantum information over vast distances, with scientists having previously used it to send information from Earth to an orbiting satellite over 870 miles away.

In the current study, Dr Kirke describes how he used a system called MIq (Multi-Agent Interactive qgMuse), in which an IBM quantum computer executes a methodology called Grover’s Algorithm.

Discovered by Lov Grover at Bell Labs in 1996, it was the second main quantum algorithm (after Shor’s algorithm) and gave a huge advantage over traditional computing.

In this instance, it allows the dynamic solving of musical logical rules which, for example, could prevent dissonance or keep to ¾ instead of common time.

It is significantly faster than any classical computer algorithm, and Dr Kirke said that speed was essential because there is actually no way to transmit quantum information other than through teleportation.

The result was that when Dr Kirke played the theme from Game of Thrones on the piano, the computer – a 14-qubit machine housed at IBM in Melbourne – rapidly generated accompanying music that was transmitted back in response.

Dr Kirke, who in 2016 staged the first ever duet between a live singer and a quantum supercomputer, said: “At the moment there are limits to how complex a real-time computer jamming system can be. The number of musical rules that a human improviser knows intuitively would simply take a computer too long to solve for real-time music. Shortcuts have been invented to speed up this process in rule-based AI music, but using the quantum computer speed-up has not been tried before. So while teleportation cannot move information faster than the speed of light, if remote collaborators want to connect up their quantum computers – which they are using to increase the speed of their musical AIs – it is 100% necessary. Quantum information simply cannot be transmitted using normal digital transmission systems.”

Caption: Dr Alexis Kirke (right) and soprano Juliette Pochin during the first duet between a live singer and a quantum supercomputer. Credit: University of Plymouth

Here’s a link to and a citation for the latest research,

Testing a hybrid hardware quantum multi-agent system architecture that utilizes the quantum speed advantage for interactive computer music by Alexis Kirke. Journal of New Music Research, Volume 49, Issue 3 (2020), pages 209–230. DOI: https://doi.org/10.1080/09298215.2020.1749672 Published online: 13 Apr 2020

This paper appears to be open access.
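If you’re wondering what “dynamically solving musical rules” with Grover’s algorithm might actually look like, here’s a deliberately simple, classical toy simulation. To be clear, this is my own illustrative sketch and not Dr Kirke’s MIq system: the “oracle” below encodes a hypothetical rule (notes consonant with C), and a single round of amplitude amplification boosts the probability of measuring a rule-satisfying note.

```python
# A toy, classical simulation of Grover-style search over musical notes.
# NOT Dr Kirke's MIq system -- just a sketch of the idea that an "oracle"
# can mark notes satisfying a harmony rule, after which amplitude
# amplification makes the rule-satisfying notes the likely measurement.
import numpy as np

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
# Hypothetical rule: allow notes forming a consonant interval with C
# (unison, minor/major third, fourth, fifth, minor/major sixth).
CONSONANT_SEMITONES = {0, 3, 4, 5, 7, 8, 9}

n = 16  # state space padded to a power of two (a 4-qubit register)
amps = np.full(n, 1 / np.sqrt(n))  # uniform superposition over all states
marked = np.array([i < 12 and i in CONSONANT_SEMITONES for i in range(n)])

# One Grover iteration: oracle phase-flip, then "inversion about the mean".
amps[marked] *= -1
amps = 2 * amps.mean() - amps

probs = amps ** 2
for i in range(12):
    flag = "allowed" if marked[i] else "forbidden"
    print(f"{NOTES[i]:2s} ({flag}): probability {probs[i]:.3f}")
```

On a real quantum machine the point is that the oracle checks all candidate notes in superposition, which is where the speed-up Dr Kirke describes comes from; a classical loop like this one enjoys no such advantage.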

2020 The Universe in Verse livestream on April 25, 2020 from New York City

The Universe in Verse event (poetry, music, science, and more) has been held annually by Pioneer Works in New York City since 2017. (It’s hard to believe I haven’t covered this event in previous years but it seems that’s so.)

Usually a ticketed event held in a physical venue, in 2020 The Universe in Verse is being livestreamed free of charge. Here’s more from the event page on the Pioneer Works website,

A LETTER FROM THE CURATOR AND HOST:

Dear Pioneer Works community,

Since 2017, The Universe in Verse has been celebrating science and the natural world — the splendor, the wonder, the mystery of it — through poetry, that lovely backdoor to consciousness, bypassing our habitual barricades of thought and feeling to reveal reality afresh. And now here we are — “survivors of immeasurable events,” in the words of the astronomer and poet Rebecca Elson, “small, wet miracles without instruction, only the imperative of change” — suddenly scattered six feet apart across a changed world, blinking with disorientation, disbelief, and no small measure of heartache. All around us, nature stands as a selective laboratory log of only the successes in the series of experiments we call evolution — every creature alive today, from the blooming magnolias to the pathogen-carrying bat, is alive because its progenitors have survived myriad cataclysms, adapted to myriad unforeseen challenges, learned to live in unimagined worlds.

The 2020 Universe in Verse is an adaptation, an experiment, a Promethean campfire for the collective imagination, taking a virtual leap to serve what it has always aspired to serve — a broadening of perspective: cosmic, creaturely, temporal, scientific, humanistic — all the more vital as we find the aperture of our attention and anxiety so contracted by the acute suffering of this shared present. Livestreaming from Pioneer Works at 4:30PM EST on Saturday, April 25, there will be readings of Walt Whitman, Emily Dickinson, Adrienne Rich, Pablo Neruda, June Jordan, Mary Oliver, Audre Lorde, Wendell Berry, Hafiz, Rachel Carson, James Baldwin, and other titans of poetic perspective, performed by a largehearted cast of scientists and artists, astronauts and poets, Nobel laureates and Grammy winners: physicists Janna Levin, Kip Thorne, and Brian Greene, musicians Rosanne Cash, Patti Smith, Amanda Palmer, Zoë Keating, Morley, and Cécile McLorin Salvant, poets Jane Hirshfield, Ross Gay, Marie Howe, and Natalie Diaz, astronomers Natalie Batalha and Jill Tarter, authors Rebecca Solnit, Elizabeth Gilbert, Masha Gessen, Roxane Gay, Robert Macfarlane, and Neil Gaiman, astronaut Leland Melvin, playwright and activist Eve Ensler, actor Natascha McElhone, entrepreneur Tim Ferriss, artists Debbie Millman, Dustin Yellin, and Lia Halloran, cartoonist Alison Bechdel, radio-enchanters Krista Tippett and Jad Abumrad, and composer Paola Prestini with the Young People’s Chorus. As always, there are some thrilling surprises in wait.

Every golden human thread weaving this global lifeline is donating their time and talent, diverting from their own work and livelihood, to offer this generous gift to the world. We’ve made this just because it feels important that it exist, that it serve some measure of consolation by calibration of perspective, perhaps even some joy. The Universe in Verse is ordinarily a ticketed charitable event, with all proceeds benefiting a chosen ecological or scientific-humanistic nonprofit each year. We offer this year’s livestream freely, but making the show exist and beaming it to you had significant costs. If you are so moved and able, please support this colossal labor with a donation to Pioneer Works — our doors are now physically closed to the public, but our hearts remain open to the world as we pirouette to find new ways of serving art, science, and perspective. Your donation is tax-deductible and appreciation-additive.

Yours,

Maria Popova

For anyone unfamiliar with Pioneer Works, here’s more from their About page,

History

Pioneer Works is an artist-run cultural center that opened its doors to the public, free of charge, in 2012. Imagined by its founder, artist Dustin Yellin, as a place in which artists, scientists, and thinkers from various backgrounds converge, this “museum of process” takes its primary inspiration from utopian visionaries such as Buckminster Fuller, and radical institutions such as Black Mountain College.

The three-story red brick building that houses Pioneer Works was built in 1866 for what was then Pioneer Iron Works. The factory, which manufactured railroad tracks and other large-scale machinery, was a local landmark after which Pioneer Street was named. Devastated by fire in 1881, the building was rebuilt, and remained in active use through World War II. Dustin Yellin acquired the building in 2011, and renovated it with Gabriel Florenz, Pioneer Works’ Founding Artistic Director, and a team of talented artists, supporters, and advisors. Together, they established Pioneer Works as a 501c3 nonprofit in 2012.

Since its inception, Pioneer Works has built science studios, a technology lab with 3-D printing, a virtual environment lab for VR and AR production, a recording studio, a media lab for content creation and dissemination, a darkroom, residency studios, galleries, gardens, a ceramics studio, a press, and a bookshop. Pioneer Works’ central hall is home to a rotating schedule of exhibitions, science talks, music performances, workshops, and innovative free public programming.

The Universe in Verse’s curator and host, Maria Popova, is best known for her blog. Here’s more from her Wikipedia entry (Note: Links have been removed),

Maria Popova (Bulgarian: Мария Попова; born 28 July 1984)[not verified in body] is a Bulgarian-born, American-based writer of literary and arts commentary and cultural criticism that has found wide appeal (as of 2012, 3 million page views and more than 1 million monthly readers),[needs update] both for its writing and for the visual stylistics that accompany it.[citation needed][needs update] She is most widely known for her blog, Brain Pickings [emphasis mine], an online publication that she has fought to maintain advertisement-free, which features her writing on books, and ideas from the arts, philosophy, culture, and other subjects. In addition to her writing and related speaking engagements, she has served as an MIT Futures of Entertainment Fellow,[when?] as the editorial director at the higher education social network Lore,[when?] and has written for The Atlantic, Wired UK, and other publications. As of 2012, she resided in Brooklyn, New York.[needs update]

There’s one more thing you might want to know about the event,

NOTE: For various artistic, legal, and technical reasons, the livestream will not be available in its entirety for later viewing, but individual readings will be released incrementally on Brain Pickings. As we are challenged to bend limitation into possibility as never before, may this meta-limitation too be an invitation — to be fully present, together across the space that divides us, for a beautiful and unrepeatable experience that animates a shared moment in time, all the more precious for being unrepeatable. “As if what exists, exists so that it can be lost and become precious,” in the words of the poet Lisel Mueller.

Enjoy! And, if you can, please donate.

viral symphOny: an electronic soundwork à propos during a pandemic

Artist Joseph Nechvatal has a longstanding interest in viruses, specifically computer viruses, and that work seems strangely apt as we cope with the COVID-19 pandemic. He very kindly sent me some à propos information (received via an April 5, 2020 email),

I wanted to let you know that _viral symphOny_ (2006-2008), my 1 hour 40 minute collaborative electronic noise music symphony, created using custom artificial life C++ software based on the viral phenomenon model, is available to the world for free here:

https://archive.org/details/ViralSymphony

Before you click the link and dive in, you might find these bits of information interesting. BTW, I do provide the link again at the end of this post.

Origin of and concept behind the term ‘computer virus’

As I’ve learned to expect, there are two and possibly more origin stories for the term ‘computer virus’. Refreshingly, there is near universal agreement in the material I’ve consulted about John von Neumann’s role as the originator of the concept. After that, it gets more complicated; Wikipedia credits a writer for christening the term (Note: Links have been removed),

The first academic work on the theory of self-replicating computer programs[17] was done in 1949 by John von Neumann who gave lectures at the University of Illinois about the “Theory and Organization of Complicated Automata”. The work of von Neumann was later published as the “Theory of self-reproducing automata”. In his essay von Neumann described how a computer program could be designed to reproduce itself.[18] Von Neumann’s design for a self-reproducing computer program is considered the world’s first computer virus, and he is considered to be the theoretical “father” of computer virology.[19] In 1972, Veith Risak directly building on von Neumann’s work on self-replication, published his article “Selbstreproduzierende Automaten mit minimaler Informationsübertragung” (Self-reproducing automata with minimal information exchange).[20] The article describes a fully functional virus written in assembler programming language for a SIEMENS 4004/35 computer system. In 1980 Jürgen Kraus wrote his diplom thesis “Selbstreproduktion bei Programmen” (Self-reproduction of programs) at the University of Dortmund.[21] In his work Kraus postulated that computer programs can behave in a way similar to biological viruses.

Science fiction

The first known description of a self-reproducing program in a short story occurs in 1970 in The Scarred Man by Gregory Benford [emphasis mine] which describes a computer program called VIRUS which, when installed on a computer with telephone modem dialing capability, randomly dials phone numbers until it hits a modem that is answered by another computer. It then attempts to program the answering computer with its own program, so that the second computer will also begin dialing random numbers, in search of yet another computer to program. The program rapidly spreads exponentially through susceptible computers and can only be countered by a second program called VACCINE.[22]

The idea was explored further in two 1972 novels, When HARLIE Was One by David Gerrold and The Terminal Man by Michael Crichton, and became a major theme of the 1975 novel The Shockwave Rider by John Brunner.[23]

The 1973 Michael Crichton sci-fi movie Westworld made an early mention of the concept of a computer virus, being a central plot theme that causes androids to run amok.[24] Alan Oppenheimer’s character summarizes the problem by stating that “…there’s a clear pattern here which suggests an analogy to an infectious disease process, spreading from one…area to the next.” To which the replies are stated: “Perhaps there are superficial similarities to disease” and, “I must confess I find it difficult to believe in a disease of machinery.”[25]
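It can be hard to picture what a “self-reproducing program” actually looks like, so here’s the smallest demonstration I know of: a so-called quine, a program whose only output is its own source code. The two-line Python example below (comments aside) prints an exact copy of itself; a virus is, crudely speaking, this trick plus a way of writing the copy somewhere else.

```python
# A classic Python quine: running the two lines below prints them back
# verbatim. Self-reproduction is the kernel of von Neumann's automata --
# and, with a propagation mechanism and payload added, of computer viruses.
s = 's = %r\nprint(s %% s)'
print(s % s)
```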

Scientific American has an October 19, 2001 article citing four different experts’ answers to the question “When did the term ‘computer virus’ arise?” Three of the experts cite academics as the source for the term (usually Fred Cohen). One of the experts does mention writers (for the most part, not the same writers cited in the Wikipedia entry quotation above).

One expert discusses the concept behind the term and confirms what most people will suspect. Interestingly, this expert’s origin story varies somewhat from the other three.

Computer virus concept

From “When did the term ‘computer virus’ arise?” (Joseph Motola response),

The concept behind the first malicious computer programs was described years ago in the Computer Recreations column of Scientific American. The metaphor of the “computer virus” was adopted because of the similarity in form, function and consequence with biological viruses that attack the human system. Computer viruses can insert themselves in another program, taking over control or adversely affecting the function of the program.

Like their biological counterparts, computer viruses can spread rapidly and self-replicate systematically. They also mimic living viruses in the way they must adapt through mutation [emphases mine] to the development of resistance within a system: the author of a computer virus must upgrade his creation in order to overcome the resistance (antiviral programs) or to take advantage of new weakness or loophole within the system.

Computer viruses also act like biologics [emphasis mine] in the way they can be set off: they can be virulent from the outset of the infection, or they can be activated by a specific event (logic bomb). But computer viruses can also be triggered at a specific time (time bomb). Most viruses act innocuous towards a system until their specific condition is met.

The computer industry has expanded the metaphor to now include terms like inoculation, disinfection, quarantine and sanitation [emphases mine]. Now if your system gets infected by a computer virus you can quarantine it until you can call the “virus doctor” who can direct you to the appropriate “virus clinic” where your system can be inoculated and disinfected and an anti-virus program can be prescribed.
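The spreading and trigger metaphors above (logic bombs, time bombs) are easy to sketch in code. What follows is a purely hypothetical simulation of mine, with made-up machine names and probabilities; no real machines or files are involved.

```python
# Toy simulation of viral spread with "time bomb" and "logic bomb" triggers.
import random

random.seed(1)  # make the run reproducible

machines = {f"pc{i}": False for i in range(20)}  # hostname -> infected?
machines["pc0"] = True                           # the initial infection

for day in range(10):
    # Self-replication: each infected machine copies itself to a random target.
    for host, infected in list(machines.items()):
        if infected:
            machines[random.choice(list(machines))] = True
    count = sum(machines.values())
    if count >= 15:
        # Logic bomb: the payload fires when a specific condition is met.
        print(f"day {day}: logic bomb triggers ({count} machines infected)")
        break
    if day == 7:
        # Time bomb: the payload fires at a predetermined time.
        print(f"day {day}: time bomb triggers ({count} machines infected)")

print(f"final tally: {sum(machines.values())}/{len(machines)} machines infected")
```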

More about Joseph Nechvatal and his work on viruses

The similarities between computer and biological viruses are striking and, with that in mind, here’s a clip featuring part of viral symphOny,

Before giving you a second link to Nechvatal’s entire viral symphOny, here’s some context about him and his work from the Joseph Nechvatal Wikipedia entry (Note: Links have been removed),

He began using computers to make “paintings” in 1986 [11] and later, in his signature work, began to employ computer viruses. These “collaborations” with viral systems positioned his work as an early contribution to what is increasingly referred to as a post-human aesthetic.[12][13]

From 1991–1993 he was artist-in-residence at the Louis Pasteur Atelier in Arbois, France and at the Saline Royale/Ledoux Foundation’s computer lab. There he worked on The Computer Virus Project, which was an artistic experiment with computer viruses and computer animation.[14] He exhibited at Documenta 8 in 1987.[15][16]

In 1999 Nechvatal obtained his Ph.D. in the philosophy of art and new technology concerning immersive virtual reality at Roy Ascott’s Centre for Advanced Inquiry in the Interactive Arts (CAiiA), University of Wales College, Newport, UK (now the Planetary Collegium at the University of Plymouth). There he developed his concept of viractualism, a conceptual art idea that strives “to create an interface between the biological and the technological.”[17] According to Nechvatal, this is a new topological space.[18]

In 2002 he extended his experimentation into viral artificial life through a collaboration with the programmer Stephane Sikora of music2eye in a work called the Computer Virus Project II,[19] inspired by the a-life work of John Horton Conway (particularly Conway’s Game of Life), by the general cellular automata work of John von Neumann, by the genetic programming algorithms of John Koza and the auto-destructive art of Gustav Metzger.[20]

In 2005 he exhibited Computer Virus Project II works (digital paintings, digital prints, a digital audio installation and two live electronic virus-attack art installations)[21] in a solo show called cOntaminatiOns at Château de Linardié in Senouillac, France. In 2006 Nechvatal received a retrospective exhibition entitled Contaminations at the Butler Institute of American Art’s Beecher Center for Arts and Technology.[4]

Dr. Nechvatal has also contributed to digital audio work with his noise music viral symphOny [emphasis mine], a collaborative sound symphony created by using his computer virus software at the Institute for Electronic Arts at Alfred University.[22][23] viral symphOny was presented as a part of nOise anusmOs in New York in 2012.[24]
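Nechvatal’s actual system is custom C++ artificial-life software, which I haven’t seen; but to give a rough feel for how a viral model can drive sound, here’s a hypothetical sketch of my own in Python. A one-dimensional ring of cells is infected step by step (a nod to the cellular-automata lineage mentioned above), and the infection count is mapped to MIDI-style pitch and velocity values.

```python
# Hypothetical sonification sketch -- NOT Nechvatal's software. An infection
# spreading through a ring of cells drives pitch and loudness over time.
import random

random.seed(6)  # reproducible "performance"

cells = [False] * 64  # a ring of 64 healthy cells
cells[32] = True      # patient zero

for step in range(16):
    # Each infected cell tries to infect one neighbour per step.
    for i, infected in enumerate(list(cells)):
        if infected and random.random() < 0.6:
            cells[(i + random.choice((-1, 1))) % 64] = True
    count = sum(cells)
    pitch = 36 + count                   # more infection -> higher pitch
    velocity = min(127, 40 + 4 * count)  # ... and louder
    print(f"t={step:2d} infected={count:2d} note={pitch} velocity={velocity}")
```

Feeding those note/velocity pairs to any MIDI synthesizer would give you a crude, ever-rising viral crescendo; Nechvatal’s symphony is, needless to say, far richer.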

As promised, here’s the link to the complete viral symphOny once more: https://archive.org/details/ViralSymphony. His website is here and his blog is here.

ETA April 7, 2020 at 1135 PT: Joseph Nechvatal’s book review of Gustav Metzger’s collected writings (1953–2016) has just (April 2020) dropped at The Brooklyn Rail here: https://brooklynrail.org/2020/04/art_books/Gustav-Metzgers-Writings.