Category Archives: Music

Toronto’s ArtSci Salon and its Kaleidoscopic Imaginations on Oct 27, 2020 – 7:30 pm (EDT)

The ArtSci Salon is getting quite active these days. Here’s the latest from an Oct. 22, 2020 ArtSci Salon announcement (received via email), which can also be viewed on their Kaleidoscope event page,

Kaleidoscopic Imaginations

Performing togetherness in empty spaces

An experimental  collaboration between the ArtSci Salon, the Digital Dramaturgy Lab_squared/ DDL2 and Sensorium: Centre for Digital Arts and Technology, York University (Toronto, Ontario, Canada)

Tuesday, October 27, 2020

7:30 pm [EDT]

Join our evening of live-streamed, multi-media  performances, following a kaleidoscopic dramaturgy of complexity discourses as inspired by computational complexity theory gatherings.

We are presenting installations, site-specific artistic interventions and media experiments, featuring networked audio and video, dance and performances as we repopulate spaces – The Fields Institute and surroundings – forced to lie empty due to the pandemic. Respecting physical distance and new sanitation and safety rules can be challenging, but it can also open up new ideas and opportunities.

NOTE: DDL2 contributions to this event are sourced or inspired by their recent kaleidoscopic performance “Rattling the Curve – Paradoxical ECODATA performances of A/I (artistic intelligence), and facial recognition of humans and trees”

Virtual space/live streaming concept and design: DDL2  Antje Budde, Karyn McCallum and Don Sinclair

Virtual space and streaming pilot: Don Sinclair

Here are specific programme details (from the announcement),

  1. Signing the Virus – Video (2 min.)
    Collaborators: DDL2 Antje Budde, Felipe Cervera, Grace Whiskin
  2. Niimi II – Performance and outdoor video projection (15 min.)
    (Niimi means “s/he dances” in Anishinaabemowin) Collaborators: DDL2 Candy Blair, Antje Budde, Jill Carter, Lars Crosby, Nina Czegledy, Dave Kemp
  3. Oracle Jane (Scene 2) – A partial playreading on the politics of AI (30 min.)
    Playwright: DDL2 Oracle. Collaborators: DDL2 Antje Budde, Frans Robinow, George Bwanika Seremba, Amy Wong and AI ethics consultant Vicki Zhang
  4. Vriksha/Tree – Dance video and outdoor projection (8 min.)
    Collaborators: DDL2 Antje Budde, Lars Crosby, Astad Deboo, Dave Kemp, Amit Kumar
  5. Facial Recognition – Performing a Plate Camera from a Distance (3 min.)
    Collaborators: DDL2 Antje Budde, Jill Carter, Felipe Cervera, Nina Czegledy, Karyn McCallum, Lars Crosby, Martin Kulinna, Montgomery C. Martin, George Bwanika Seremba, Don Sinclair, Heike Sommer
  6. Cutting Edge – Growing Data (6 min.)
    DDL2 A performance by Antje Budde
  7. “void * ambience” – Architectural and instrumental acoustics, projection mapping (6 min.). Concept: Sensorium: The Centre for Digital Art and Technology, York University. Collaborators: Michael Palumbo, Ilze Briede [Kavi], Debashis Sinha, Joel Ong

This performance is part of a series (from the announcement),

These three performances are part of Boundary-Crossings: Multiscalar Entanglements in Art, Science and Society, a public Outreach program supported by the Fiends [sic] Institute for Research in Mathematical Science. Boundary Crossings is a series exploring how the notion of boundaries can be transcended and dissolved in the arts and the humanities, the biological and the mathematical sciences, as well as human geography and political economy. Boundaries are used to establish delimitations among disciplines; to discriminate between the human and the non-human (body and technologies, body and bacteria); and to indicate physical and/or artificial boundaries, separating geographical areas and nation states. Our goal is to cross these boundaries by proposing new narratives to show how the distinctions, and the barriers that science, technology, society and the state have created can in fact be re-interpreted as porous and woven together.

This event is curated and produced by ArtSci Salon; Digital Dramaturgy Lab_squared/ DDL2; Sensorium: Centre for Digital Arts and Technology, York University; and Ryerson University; it is supported by The Fields Institute for Research in Mathematical Sciences

Streaming Link 

Finally, the announcement includes biographical information about all of the ‘boundary-crossers’,

Candy Blair (Tkaron:to/Toronto)
Candy Blair/Otsίkh:èta (they/them) is a mixed First Nations/European, 2-spirit interdisciplinary visual and performing artist from Tio’tía:ke – where the group split (“Montreal”) in Québec.

While continuing their work as an artist they also finished their Creative Arts, Literature, and Languages program at Marianopolis College (cégep), their 1st year in the Theatre program at York University, and their 3rd year Acting Conservatory Program at the Centre For Indigenous Theatre in Tsí Tkaròn:to – where the trees stand in water (“Toronto”).

Some of Candy’s notable performances are Jill Carter’s Encounters at the Edge of the Woods, exploring a range of issues with colonization; Ange Loft’s project Talking Treaties, discussing the treaties of the “Toronto” purchase; Cheri Maracle’s The Story of Six Nations, exploring Six Nations’ origin story through dance/combat choreography; and several other performances exploring various topics around Indigenous language, land, and cultural restoration through various mediums such as dance, modelling, painting, theatre, directing, song, etc. As an activist and soon-to-be entrepreneur, Candy also enjoys teaching workshops promoting Indigenous resurgence such as Indigenous hand drumming, food sovereignty, beading, medicine knowledge, etc.

Working with collectives like Weave and Mend, they were responsible for the design, land purification, and installation process of the four medicine plots and a community space with their three other members. Candy aspires to continue exploring ways of decolonization through healthy traditional practices from their mixed background and the arts, in the hopes of eventually supporting Indigenous relations worldwide.

Antje Budde
Antje Budde is a conceptual, queer-feminist, interdisciplinary experimental scholar-artist and an Associate Professor of Theatre Studies, Cultural Communication and Modern Chinese Studies at the Centre for Drama, Theatre and Performance Studies, University of Toronto. Antje has created multi-disciplinary artistic works in Germany, China and Canada and works tri-lingually in German, English and Mandarin. She is the founder of a number of queerly feminist performing art projects including most recently the (DDL)2 or (Digital Dramaturgy Lab)Squared – a platform for experimental explorations of digital culture, creative labor, integration of arts and science, and technology in performance. She is interested in the intersections of natural sciences, the arts, engineering and computer science.

Roberta Buiani
Roberta Buiani (MA; PhD York University) is the Artistic Director of the ArtSci Salon at the Fields Institute for Research in Mathematical Sciences (Toronto). Her artistic work has travelled to art festivals (Transmediale; Hemispheric Institute Encuentro; Brazil), community centres and galleries (the Free Gallery Toronto; Immigrant Movement International, Queens; Myseum of Toronto), and science institutions (RPI; the Fields Institute). Her writing has appeared in Space and Culture, Cultural Studies and The Canadian Journal of Communication, among others. With the ArtSci Salon she has launched a series of experiments in “squatting academia”, by re-populating abandoned spaces and cabinets across university campuses with SciArt installations.

Currently, she is a research associate at the Centre for Feminist Research and a Scholar in Residence at Sensorium: Centre for Digital Arts and Technology at York University [Toronto, Ontario, Canada].

Jill Carter (Tkaron:to/ Toronto)
Jill (Anishinaabe/Ashkenazi) is a theatre practitioner and researcher, currently cross-appointed to the Centre for Drama, Theatre and Performance Studies; the Transitional Year Programme; and Indigenous Studies at the University of Toronto. She works with many members of Tkaron:to’s Indigenous theatre community to support the development of new works and to disseminate artistic objectives, process, and outcomes through community-driven research projects. Her scholarly research, creative projects, and activism are built upon ongoing relationships with Indigenous Elders, Artists and Activists, positioning her as witness to, participant in, and disseminator of oral histories that speak to the application of Indigenous aesthetic principles and traditional knowledge systems to contemporary performance. The research questions she pursues revolve around the mechanics of story creation, the processes of delivery and the manufacture of affect.

More recently, she has concentrated upon Indigenous pedagogical models for the rehearsal studio and the lecture hall; the application of Indigenous [insurgent] research methods within performance studies; the politics of land acknowledgements; and land-based dramaturgies/activations/interventions.

Jill also works as a researcher and tour guide with First Story Toronto; facilitates Land Acknowledgement, Devising, and Land-based Dramaturgy Workshops for theatre makers in this city; and performs with the Talking Treaties Collective (Jumblies Theatre, Toronto).

In September 2019, Jill directed Encounters at the Edge of the Woods. This was a devised show, featuring Indigenous and Settler voices, and it opened Hart House Theatre’s 100th season; it is the first instance of Indigenous presence on Hart House Theatre’s stage in its 100 years of existence as the cradle for Canadian theatre.

Nina Czegledy
(Toronto) artist, curator, educator, works internationally on collaborative art, science & technology projects. The changing perception of the human body and its environment as well as paradigm shifts in the arts inform her projects. She has exhibited and published widely, won awards for her artwork and has initiated, led and participated in workshops, forums and festivals at international events worldwide.

Astad Deboo (Mumbai, India)
Astad Deboo is a contemporary dancer and choreographer who employs his training in the Indian classical dance forms of Kathak and Kathakali to create a dance form that is unique to him. He has become a pioneer of modern dance in India. Astad describes his style as “contemporary in vocabulary and traditional in restraints.” Throughout his long and illustrious career, he has worked with various prominent performers such as Pina Bausch, Alison Becker Chase and Pink Floyd and performed in many parts of the world. He has been awarded the Sangeet Natak Akademi Award (1996) and the Padma Shri (2007), awarded by the Government of India. In January 2005, along with 12 young women with hearing impairment supported by the Astad Deboo Dance Foundation, he performed at the 20th Annual Deaf Olympics in Melbourne, Australia. Astad has a long record of working with disadvantaged youth.

Ilze Briede [Kavi]
Ilze Briede [artist name: Kavi] is a Latvian/Canadian artist and researcher with broad and diverse interests. Her artistic practice, a hybrid of video, image and object making, investigates the phenomenon of perception and the constraints and boundaries between the senses and knowing. Kavi is currently pursuing a PhD degree in Digital Media at York University with a research focus on computational creativity and generative art. She sees computer-generated systems and algorithms as a potentiality for co-creation and collaboration between human and machine. Kavi has previously worked and exhibited with Fashion Art Toronto, Kensington Market Art Fair, Toronto Burlesque Festival, Nuit Blanche, Sidewalk Toronto and the Toronto Symphony Orchestra.

Dave Kemp
Dave Kemp is a visual artist whose practice looks at the intersections and interactions between art, science and technology: particularly at how these fields shape our perception and understanding of the world. His artworks have been exhibited widely at venues such as the McIntosh Gallery, The Agnes Etherington Art Centre, Art Gallery of Mississauga, The Ontario Science Centre, York Quay Gallery, Interaccess, Modern Fuel Artist-Run Centre, and as part of the Switch video festival in Nenagh, Ireland. His works are also included in the permanent collections of the Agnes Etherington Art Centre and the Canada Council Art Bank.

Stephen Morris
Stephen Morris is Professor of experimental non-linear physics in the Department of Physics at the University of Toronto. He is the Scientific Director of the ArtSci Salon at the Fields Institute for Research in Mathematical Sciences. He often collaborates with artists and has himself performed and produced art involving his own scientific instruments and experiments in non-linear physics and pattern formation.

Michael Palumbo
Michael Palumbo (MA, BFA) is an electroacoustic music improviser, coder, and researcher. His PhD research spans distributed creativity and version control systems, and is expressed through “git show”, a distributed electroacoustic music composition and design experiment, and “Mischmasch”, a collaborative modular synthesizer in virtual reality. He studies with Dr. Doug Van Nort as a researcher in the Distributed Performance and Sensorial Immersion Lab, and Dr. Graham Wakefield at the Alice Lab for Computational Worldmaking. His works have been presented internationally, including at ISEA, AES, NIME, Expo ’74, TIES, and the Network Music Festival. He performs regularly with a modular synthesizer, runs the Exit Points electroacoustic improvisation series, and is an enthusiastic gardener and yoga practitioner.

Joel Ong (PhD, Digital Arts and Experimental Media (DXARTS), University of Washington)

Joel Ong is a media artist whose works connect scientific and artistic approaches to the environment, particularly with respect to sound and physical space.  Professor Ong’s work explores the way objects and spaces can function as repositories of ‘frozen sound’, and in elucidating these, he is interested in creating what systems theorist Jack Burnham (1968) refers to as “art (that) does not reside in material entities, but in relations between people and between people and the components of their environment”.

A serial collaborator, Professor Ong is invested in the broader scope of Art-Science collaborations and is engaged constantly in the discourses and processes that facilitate viewing these two polemical disciplines on similar ground. His graduate interdisciplinary work in nanotechnology and sound was conducted at SymbioticA, the Centre of Excellence in Biological Arts at the University of Western Australia, and supervised by BioArt pioneers and TCA (The Tissue Culture and Art Project) artists Dr Ionat Zurr and Oron Catts.

George Bwanika Seremba
George Bwanika Seremba is an actor, playwright and scholar. He was born in Uganda. George holds an M.Phil. and a Ph.D. in Theatre Studies from Trinity College Dublin. In 1980, having barely survived a botched execution by the Military Intelligence, he fled into exile, resettling in Canada (1983). He has performed in numerous plays, including his own “Come Good Rain”, which was awarded a Dora Award (1993). In addition, he published a number of edited play collections, including “Beyond the pale: dramatic writing from First Nations writers & writers of colour” (1996), co-edited with Yvette Nolan and Betty Quan.

George was nominated for the Irish Times’ Best Actor award for his role in Dublin’s Calypso Theatre’s production of Athol Fugard’s “Master Harold and the boys”. In addition to theatre, he has performed in several movies and on television. His doctoral thesis (2008), entitled “Robert Serumaga and the Golden Age of Uganda’s Theatre (1968-1978): (Solipsism, Activism, Innovation)”, will be published as a monograph by CSP (UK) in 2021.

Don Sinclair (Toronto)
Don is an Associate Professor in the Department of Computational Arts at York University. His creative research areas include interactive performance, projections for dance, sound art, web and data art, cycling art, sustainability, and choral singing, most often using code and programming. Don is particularly interested in processes of artistic creation that integrate digital creative coding-based practices with performance in dance and theatre. As well, he is an enthusiastic cyclist.

Debashis Sinha
Driven by a deep commitment to the primacy of sound in creative expression, Debashis Sinha has realized projects in radiophonic art, music, sound art, audiovisual performance, theatre, dance, and music across Canada and internationally. Sound design and composition credits include numerous works for Peggy Baker Dance Projects and productions with Canada’s premier theatre companies including The Stratford Festival, Soulpepper, Volcano Theatre, Young People’s Theatre, Project Humanity, The Theatre Centre, Nightwood Theatre, Why Not Theatre, MTC Warehouse and Necessary Angel. His live sound practice on the concert stage has led to appearances at MUTEK Montreal, MUTEK Japan, the Guelph Jazz Festival, the Banff Centre, The Music Gallery, and other venues. Sinha teaches sound design at York University and the National Theatre School, and is currently working on a multi-part audio/performance work incorporating machine learning and AI, funded by the Canada Council for the Arts.

Vicki (Jingjing) Zhang (Toronto)
Vicki Zhang is a faculty member at University of Toronto’s statistics department. She is the author of Uncalculated Risks (Canadian Scholars’ Press, 2014). She is also a playwright, whose plays have been produced or stage read in various festivals and venues in Canada including Toronto’s New Ideas Festival, Winnipeg’s FemFest, Hamilton Fringe Festival, Ergo Pink Fest, InspiraTO festival, Toronto’s Festival of Original Theatre (FOOT), Asper Center for Theatre and Film, Canadian Museum for Human Rights, Cultural Pluralism in the Arts Movement Ontario (CPAMO), and the Canadian Play Thing. She has also written essays and short fiction for Rookie Magazine and Thread.

If you can’t attend this Oct. 27, 2020 event, there’s still the Oct. 29, 2020 Boundary-Crossings event: Beauty Kit (see my Oct. 12, 2020 posting for more).

As for Kaleidoscopic Imaginations, you can access the Streaming Link on Oct. 27, 2020 at 7:30 pm EDT (4 pm PDT).

Music for Incandescent Events: Skyview, Here (version 4) 25 October – 31 October 2020

This October 20, 2020 notice from Toronto’s ArtSci Salon (received via email) features a DIY musical event for dawn and dusk from Oct. 25 – 31, 2020; it is a Canada-wide event series,

Dear media-arts and music organizations, arts educators & adventurous
radio programmers, kindly distribute this invitation to your members,
students, audiences and colleagues.

You’re invited to a free week-long dawn & dusk audio-viewing event
at a location of your choice:

Sunday October 25 – Saturday October 31

Music for Incandescent Events: Skyview, Here (version 4)
Audio for skyscapes around sunset and sunrise. Livestream & downloadable for portable sky-viewing adventures (variable times).
By Sarah Peebles.
Presented by the Canadian Music Centre Ontario Chapter.

Day 1 event page, information & schedule overview – CMC

https://on.cmccanada.org/event/music-for-incandescent-events/

CMC Calendar with day by day links to each day’s event page

https://on.cmccanada.org/events/

Special thanks to CMC – Ontario & Matthew Fava for presenting and hosting this installation.

I hope you enjoy the experience!

I found more information on the event, which clarifies how people in Ontario and people in the rest of Canada can participate in the Canadian Music Centre’s latest Incandescent Event,

South-Western Ontario, online | We invite audiences to tune in during scheduled audio streams on Facebook during the week of October 25, occurring roughly at dawn and dusk for those based in South-Western Ontario. Streaming will coincide with the shifting light around sunrise/sunset within a broad zone ranging approximately from Peterborough in the East, to London in the West, and Barrie in the North. Tune in as the sky begins to change colour.

Across Canada, offline | Audiences are also invited to download the dawn-dusk audio files for this work in order to listen offline at a self-directed time based on their location. We encourage you to creatively locate yourself off-line via bicycle, ferry, boat, walking, or driving with your portable listening device.

Instructions for experiencing the pieces | Place yourself with a skyscape view of your choice, indoors or outdoors. Adjust your soundscape to your liking (e.g. open windows, sit under a tree, near waves or find a reflective surface, etc.). Listen online live or via audiofile (download) during sunset and/or sunrise, using good quality loudspeakers, headphones or earbuds.

About the Piece | Music for Incandescent Events meditates on our perception of time, memory and place, creating a space for contemplation, for awareness of one’s physical environment, and for exploration of consciousness in the moment.

Each iteration of Incandescent Events combines different improvised short melodies and tones performed on a slightly de-tuned shô (Japanese mouth-organ), re-recorded several octaves lower than original pitch. I recorded these melodies at very close range while sitting near a reflective wall, catching rich beat patterns and sum/difference tones. These and additional frequencies beyond the range of human hearing transform into unexpected, complex audio events at slower play-back speeds, several octaves down.

Music for Incandescent Events version 1 (2002) is published on Somethings #1 (Last Visible Dog). Version 2 (installation) was commissioned by wade Collective for wade project in June 2004, for Gibraltar Point, Toronto Islands; it showed at the McLuhan International Festival of the Future, “Scanning Nature” exhibition, October 2004 (DeLeon White Gallery rooftop deck). Version no. 3, Colour Temperature Event (for Gabriola Island, Berry Point, 2017), was curated for The QR Anthology.
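Two of the acoustic effects Peebles describes, beat patterns between nearly identical tones and pitch dropping by an octave each time playback is slowed to half speed, are easy to demonstrate. Here is a minimal, hypothetical Python sketch; the tone frequencies, sample rate and crude resampling method are my own illustrative choices, not details of the piece:

```python
import numpy as np

SR = 44100            # sample rate in Hz; arbitrary choice for this sketch
DUR = 2.0             # seconds of audio
t = np.arange(int(SR * DUR)) / SR

# Two tones a few hertz apart produce an audible "beat" at their
# difference frequency (here 443 - 440 = 3 Hz).
tone_a = np.sin(2 * np.pi * 440.0 * t)
tone_b = np.sin(2 * np.pi * 443.0 * t)
mix = tone_a + tone_b
print(f"expected beat frequency: {443.0 - 440.0} Hz")

def octaves_down(samples, n_octaves):
    """Crude slow-down: repeat each sample 2**n_octaves times.

    Played back at the original sample rate, every frequency in the
    material drops by n_octaves octaves (a zero-order-hold resample).
    """
    return np.repeat(samples, 2 ** n_octaves)

slowed = octaves_down(mix, 2)   # 440/443 Hz content becomes ~110/110.75 Hz
print(f"original: {len(mix)} samples, slowed: {len(slowed)} samples")
```

A real production would use proper resampling, but even this naive version shows why material re-recorded and played back several octaves down can expose beating, difference tones and formerly ultrasonic content that were inaudible at the original speed.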

You can find the schedule for the streaming events (Oct. 25 -31, 2020) and a link to the music downloads at the Canadian Music Centre’s Music for Incandescent Events: Skyview, Here (version 4) event page.

Concerns about Zoom? Call for expressions of interest in “Zoom Obscura,” creative interventions for a data ethics of video conferencing

Have you wondered about Zoom video conferencing and all that data being made available? Perhaps questioned ethical issues in addition to those associated with data security? If so, and you’d like to come up with a creative intervention that delves beyond encryption issues, there’s Zoom Obscura (on the creativeinformatics.org website),

CI [Creative Informatics] researchers Pip Thornton, Chris Elsden and Chris Speed were recently awarded funding from the Human Data Interaction Network (HDI+) Ethics & Data competition. Collaborating with researchers from Durham [Durham University] and KCL [King’s College London], the Zoom Obscura project aims to investigate creative interventions for a data ethics of video conferencing beyond encryption.

The COVID-19 pandemic has gifted video conferencing companies, such as Zoom, with a vast amount of economically valuable and sensitive data such as our facial and voice biometrics, backgrounds and chat scripts. Before the pandemic, this ‘new normal’ would be subject to scrutiny, scepticism and critique. Yet, the urgent need for remote working and socialising left us with little choice but to engage with these potentially exploitative platforms.

While much of the narrative around data security revolves around technological ‘solutions’ such as encryption, we think there are other – more creative – ways to push back against the systems of digital capitalism that continue to encroach on our everyday lives.

As part of this HDI-funded project, we seek artists, hackers and creative technologists who are interested in experimenting with creative methods to join us in a series of online workshops that will explore how to restore some control and agency in how we can be seen and heard in these newly ubiquitous online spaces. Through three half-day workshops held remotely, we will bring artists and technicians together to ideate, prototype, and exhibit various interventions into the rapidly normalising culture of video-calling in ways that do not compromise our privacy and limit the sharing of our data. We invite interventions that begin at any stage of the video-calling process – from analogue obfuscation, to software manipulation or camera trickery.

Selected artists/collectives will receive a £1000 commission to take part and contribute to three workshops, in order to design and produce one or more, individual or collaborative, creative interventions developed from the workshops. These will include technical support from a creative technologist as well as a curator for dissemination both online and in Edinburgh and London.

If you are an artist / technologist interested in disrupting/subverting the pandemic-inspired digital status quo, please send expressions of interest of no more than 500 words to pip.thornton@ed.ac.uk , andrew.dwyer@bristol.ac.uk, celsden@ed.ac.uk and michael.duggan@kcl.ac.uk by 8th October 2020. We don’t expect fully formed projects (these will come in the workshop sessions), but please indicate any broad ideas and thoughts you have, and highlight how your past and present practice might be a good fit for the project and its aims.

The Zoom Obscura project is in collaboration with Tinderbox Lab in Edinburgh and Hannah Redler-Hawes (independent curator and codirector of the Data as Culture art programme at the Open Data Institute in London). Outputs from the project will be hosted and exhibited via the Data as Culture archive site and at a Creative Informatics event at the University of Edinburgh.

Are folks outside the UK eligible?

I asked Dr. Pip Thornton about eligibility and she kindly noted this in her Sept. 25, 2020 tweet (reply copied from my Twitter feed),

Open to all, but workshop timings may be more amenable to UK working hours. Having said that, we won’t know what the critical mass is until we review all the applications, so please do apply if you’re interested!

Who are the members of the Zoom Obscura project team?

From the Zoom Obscura webpage (on the creativeinformatics.org website),

Dr. Pip Thornton is a post-doctoral research associate in Creative Informatics at the University of Edinburgh, having recently gained her PhD in Geopolitics and Cybersecurity from Royal Holloway, University of London. Her thesis, Language in the Age of Algorithmic Reproduction: A Critique of Linguistic Capitalism, included theoretical, political and artistic critiques of Google’s search and advertising platforms. She has presented at a variety of venues including the Science Museum, the Alan Turing Institute and transmediale. Her work has featured in WIRED UK and New Scientist, and a collection from her {poem}.py intervention has been displayed at the Open Data Institute in London. Her Edinburgh Futures Institute (EFI) funded installation Newspeak 2019, shown at the Edinburgh Festival Fringe (2019), was recently awarded an honourable mention in the Surveillance Studies Network biennial art competition (2020) and is shortlisted for the 2020 Lumen Prize for art and technology in the AI category.

Dr. Andrew Dwyer is a research associate in the University of Bristol’s Cyber Security Group. Andrew gained a DPhil in Cyber Security at the University of Oxford, where he studied and questioned the role of malware – commonly known as computational viruses and worms – through its analysis, detection, and translation into international politics and its intersection with multiple ecologies. In his doctoral thesis – Malware Ecologies: A Politics of Cybersecurity – he argued for a re-evaluation of the role of computational actors in the production and negotiation of security, and what this means for human-centred notions of weapons and warfare. Previously, Andrew has been a visiting fellow at the German ‘Dynamics of Security’ collaborative research centre based between Philipps-Universität Marburg, Justus-Liebig-Universität Gießen and the Herder Institute, Marburg, and is a Research Affiliate at the Centre for Technology and Global Affairs at the University of Oxford. He will soon be starting a 3-year Addison Wheeler research fellowship in the Department of Geography at Durham University.

Dr Chris Elsden is a research associate in Design Informatics at the University of Edinburgh. Chris is primarily working on the AHRC Creative Informatics project, with specific interests in FinTech and livestreaming within the Creative Industries. He is an HCI researcher, with a background in sociology, and expertise in the human experience of a data-driven life. Using and developing innovative design research methods, his work undertakes diverse, qualitative and often speculative engagements with participants to investigate emerging relationships with technology – particularly data-driven tools and financial technologies. Chris gained his PhD in Computer Science at Open Lab, Newcastle University in 2018, and in 2019 was a recipient of a SIGCHI Outstanding Dissertation Award.

Dr Mike Duggan is a Teaching Fellow in Digital Cultures in the Department of Digital Humanities at King’s College London. He was awarded a PhD in Cultural Geography from Royal Holloway, University of London in 2017, which examined everyday digital mapping practices. This project was co-funded by the Ordnance Survey and the EPSRC. He is a member of the Living Maps network, where he is an editor for the ‘navigations’ section and previously curated the seminar series. Mike’s research is broadly interested in the digital and cultural geographies that emerge from the intersections between everyday life and digital technology.

Professor Chris Speed is Chair of Design Informatics at the University of Edinburgh where his research focuses upon the Network Society, Digital Art and Technology, and The Internet of Things. Chris has sustained a critical enquiry into how network technology can engage with the fields of art, design and social experience through a variety of international digital art exhibitions, funded research projects, books, journals and conferences. At present Chris is working on funded projects that engage with the social opportunities of crypto-currencies, an internet of toilet roll holders, and a persistent argument that chickens are actually robots. Chris is co-editor of the journal Ubiquity and co-directs the Design Informatics Research Centre that is home to a combination of researchers working across the fields of interaction design, temporal design, anthropology, software engineering and digital architecture, as well as the PhD, MA/MFA and MSc and Advanced MSc programmes.

David Chatting is a designer and technologist who works in software and hardware to explore the impact of emerging technologies in everyday lives. He is currently a PhD student in the Department of Design at Goldsmiths – University of London, a Visiting Researcher at Newcastle University’s Open Lab, and has his own design practice. Previously he was a Senior Researcher at BT’s Broadband Applications Research Centre. David has a Masters degree in Design Interactions from the Royal College of Art (2012) and a Bachelors degree in Computer Science from the University of Birmingham (2000). He has published papers and filed patents in the fields of HCI, psychology, tangible interfaces, computer vision and computer graphics.

Hannah Redler Hawes (Data as Culture) is an independent curator and codirector of the Data as Culture art programme at the Open Data Institute in London. Hannah specialises in emerging artistic practice within the fields of art and science and technology, with an interest in participatory process. She has previously developed projects for museums, galleries, corporate contexts, digital space and the public realm including the  Institute of Physics, Tate Modern, The Lowry, Natural History Museum, FACT Liverpool, the Digital Catapult and Science Gallery London, and has provided specialist consultancy services to the Wellcome Collection, Discover South Kensington and the Horniman Museum. Hannah enjoys projects that redraw boundaries between different disciplines. Current research is around addiction, open data, networked culture and new forms of programming beyond the gallery.

Tinderbox Collective : From grass-roots youth work to award-winning music productions, Tinderbox is building a vibrant and eclectic community of young musicians and artists in Scotland. We have a number of programmes that cross over with each other and come together wherever possible.  They are open to children and young people aged 10 – 25, from complete beginners to young professionals and all levels in between. Tinderbox Lab is our digital arts programme and shared studio maker-space in Edinburgh that brings together artists across disciplines with an interest in digital media and interactive technologies. It is a new programme that started development in 2019, leading to projects and events such as Room to Play, a 10-week course for emerging artists led by Yann Seznec; various guest artist talks & workshops; digital arts exhibitions at the V&A Dundee & Edinburgh Festival of Sound; digital/electronics workshops design/development for children & young people; and research included as part of Electronic Visualisation and the Arts (EVA) London 2019 conference.

Jack Nissan (Tinderbox) is the founder and director of the Tinderbox Collective. In 2012/13, Jack took part in a fellowship programme called International Creative Entrepreneurs and spent several months working with community activists and social enterprises in China, primarily with families and communities on the outskirts of Beijing with an organisation called Hua Dan. Following this, he set up a number of international exchanges and cross-cultural productions that formed the basis for Tinderbox’s Journey of a Thousand Wings programme, a project bringing together artists and community projects from different countries. He is also a co-director and founding member of Hidden Door, a volunteer-run multi-arts festival, and has won a number of awards for his work across creative and social enterprise sectors. He has been invited to take part in several steering committees and advisory roles, including for Creative Scotland’s new cross-cutting theme on Creative Learning and Artworks Scotland’s peer-networks for artists working in participatory settings. Previously, Jack worked as a researcher in psychology and ageing for the multidisciplinary MRC Centre for Cognitive Ageing and Cognitive Epidemiology, specialising in areas of neuropsychology and memory.

Luci Holland (Tinderbox) is a Scottish (Edinburgh-based) composer, sound artist and radio presenter who composes and produces music and audiovisual art for film, games and concert. As a games music composer Luci wrote the original dynamic/responsive music for Blazing Griffin’s 2018 release Murderous Pursuits, and has composed and arranged for numerous video game music collaborations, such as orchestrating and producing an arrangement of Jessica Curry’s Disappearing for label Materia Collective’s bespoke cover album Pattern: An Homage to Everybody’s Gone to the Rapture. Currently she has also been composing custom game music tracks for Skyrim mod Lordbound and a variety of other film and game music projects. Luci also builds and designs interactive sonic art installations for festivals and venues (Refraction (Cryptic), CITADEL (Hidden Door)); and in 2019 Luci joined new classical music station Scala Radio to present The Console, a weekly one-hour show dedicated to celebrating great music in games. Luci also works as a musical director and composer with the youth music charity Tinderbox Project on their Orchestra & Digital Arts programmes; classical music organisation Absolute Classics; and occasionally coordinates musical experiments and productions with her music-for-media band Mantra Sound.

Good luck to all who submit an expression of interest and good luck to Dr. Thornton (I see from her bio that she’s been shortlisted for the 2020 Lumen Prize).

Live music by teleportation? Catch up. It’s already happened.

Dr. Alexis Kirke first graced this blog about four years ago, in a July 8, 2016 posting titled, Cornwall (UK) connects with University of Southern California for performance by a quantum computer (D-Wave) and mezzo soprano Juliette Pochin.

Kirke now returns with a study showing how teleportation helped to create a live performance piece, from a July 2, 2020 news item on ScienceDaily,

Teleportation is most commonly the stuff of science fiction and, for many, would conjure up the immortal phrase “Beam me up, Scotty.”

However, a new study has described how its status in science fact could actually be employed as another, and perhaps unlikely, form of entertainment — live music.

Dr Alexis Kirke, Senior Research Fellow in the Interdisciplinary Centre for Computer Music Research at the University of Plymouth (UK), has for the first time shown that a human musician can communicate directly with a quantum computer via teleportation.

The result is a high-tech jamming session, through which a blend of live human and computer-generated sounds come together to create a unique performance piece.

A July 2, 2020 Plymouth University press release (also on EurekAlert), which originated the news item, offers more detail about this latest work along with some information about the 2016 performance and how it all provides insight into how quantum computing might function in the future,

Speaking about the study, published in the current issue of the Journal of New Music Research, Dr Kirke said: “The world is racing to build the first practical and powerful quantum computers, and whoever succeeds first will have a scientific and military advantage because of the extreme computing power of these machines. This research shows for the first time that this much-vaunted advantage can also be helpful in the world of making and performing music. No other work has shown this previously in the arts, and it demonstrates that quantum power is something everyone can appreciate and enjoy.”

Quantum teleportation is the ability to instantaneously transmit quantum information over vast distances, with scientists having previously used it to send information from Earth to an orbiting satellite over 870 miles away.

In the current study, Dr Kirke describes how he used a system called MIq (Multi-Agent Interactive qgMuse), in which an IBM quantum computer executes a methodology called Grover’s Algorithm.

Discovered by Lov Grover at Bell Labs in 1996, it was the second main quantum algorithm (after Shor’s algorithm) and gave a huge advantage over traditional computing.

In this instance, it allows the dynamic solving of musical logical rules which, for example, could prevent dissonance or keep to ¾ instead of common time.

It is significantly faster than any classical computer algorithm, and Dr Kirke said that speed was essential because there is actually no way to transmit quantum information other than through teleportation.

The result was that when he played the theme from Game of Thrones on the piano, the computer – a 14-qubit machine housed at IBM in Melbourne – rapidly generated accompanying music that was transmitted back in response.

Dr Kirke, who in 2016 staged the first ever duet between a live singer and a quantum supercomputer, said: “At the moment there are limits to how complex a real-time computer jamming system can be. The number of musical rules that a human improviser knows intuitively would simply take a computer too long to solve for real-time music. Shortcuts have been invented to speed up this process in rule-based AI music, but using the quantum computer speed-up has not been tried before. So while teleportation cannot move information faster than the speed of light, if remote collaborators want to connect up their quantum computers – which they are using to increase the speed of their musical AIs – it is 100% necessary. Quantum information simply cannot be transmitted using normal digital transmission systems.”
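For readers wondering what Grover's algorithm actually does, here is a minimal classical simulation of it in Python. This is purely illustrative and is not Kirke's MIq system: the hypothetical "oracle" below simply marks one chord index out of N as the one satisfying some musical rule, and amplitude amplification finds it in roughly π/4·√N iterations instead of the ~N/2 checks a brute-force scan would need on average.

```python
import numpy as np

def grover_search(n_items, marked_index):
    """Classical statevector simulation of Grover search over n_items states.

    The 'oracle' flips the sign of the marked state's amplitude (think: the
    one chord out of n_items that satisfies some musical rule); the diffusion
    step then reflects all amplitudes about their mean.
    """
    amps = np.full(n_items, 1.0 / np.sqrt(n_items))     # uniform superposition
    n_iters = int(np.floor(np.pi / 4 * np.sqrt(n_items)))
    for _ in range(n_iters):
        amps[marked_index] *= -1.0        # oracle: phase-flip the solution
        amps = 2 * amps.mean() - amps     # diffusion: inversion about the mean
    probs = amps ** 2
    return int(probs.argmax()), probs[marked_index], n_iters

best, p_marked, iters = grover_search(n_items=256, marked_index=42)
print(f"found index {best} with probability {p_marked:.3f} "
      f"after {iters} Grover iterations (vs ~128 classical checks on average)")
```

Note that simulating the statevector classically still costs linear work per iteration; the square-root advantage only materializes on real quantum hardware, which is why the press release stresses quantum speed-up and, for remote collaboration, teleportation of the quantum states themselves.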

Caption: Dr Alexis Kirke (right) and soprano Juliette Pochin during the first duet between a live singer and a quantum supercomputer. Credit: University of Plymouth

Here’s a link to and a citation for the latest research,

Testing a hybrid hardware quantum multi-agent system architecture that utilizes the quantum speed advantage for interactive computer music by Alexis Kirke. Journal of New Music Research, Volume 49, Issue 3 (2020), pages 209-230. DOI: https://doi.org/10.1080/09298215.2020.1749672 Published online: 13 Apr 2020

This paper appears to be open access.

2020 The Universe in Verse livestream on April 25, 2020 from New York City

The Universe in Verse event (poetry, music, science, and more) has been held annually by Pioneer Works in New York City since 2017. (It’s hard to believe I haven’t covered this event in previous years but it seems that’s so.)

Usually a ticketed event held in a venue, in 2020 The Universe in Verse is being held free as a livestreamed event. Here’s more from the event page on the Pioneer Works website,

A LETTER FROM THE CURATOR AND HOST:

Dear Pioneer Works community,

Since 2017, The Universe in Verse has been celebrating science and the natural world — the splendor, the wonder, the mystery of it — through poetry, that lovely backdoor to consciousness, bypassing our habitual barricades of thought and feeling to reveal reality afresh. And now here we are — “survivors of immeasurable events,” in the words of the astronomer and poet Rebecca Elson, “small, wet miracles without instruction, only the imperative of change” — suddenly scattered six feet apart across a changed world, blinking with disorientation, disbelief, and no small measure of heartache. All around us, nature stands as a selective laboratory log of only the successes in the series of experiments we call evolution — every creature alive today, from the blooming magnolias to the pathogen-carrying bat, is alive because its progenitors have survived myriad cataclysms, adapted to myriad unforeseen challenges, learned to live in unimagined worlds.

The 2020 Universe in Verse is an adaptation, an experiment, a Promethean campfire for the collective imagination, taking a virtual leap to serve what it has always aspired to serve — a broadening of perspective: cosmic, creaturely, temporal, scientific, humanistic — all the more vital as we find the aperture of our attention and anxiety so contracted by the acute suffering of this shared present. Livestreaming from Pioneer Works at 4:30PM EST on Saturday, April 25, there will be readings of Walt Whitman, Emily Dickinson, Adrienne Rich, Pablo Neruda, June Jordan, Mary Oliver, Audre Lorde, Wendell Berry, Hafiz, Rachel Carson, James Baldwin, and other titans of poetic perspective, performed by a largehearted cast of scientists and artists, astronauts and poets, Nobel laureates and Grammy winners: Physicists Janna Levin, Kip Thorne, and Brian Greene, musicians Rosanne Cash, Patti Smith, Amanda Palmer, Zoë Keating, Morley, and Cécile McLorin Salvant, poets Jane Hirshfield, Ross Gay, Marie Howe, and Natalie Diaz, astronomers Natalie Batalha and Jill Tarter, authors Rebecca Solnit, Elizabeth Gilbert, Masha Gessen, Roxane Gay, Robert Macfarlane, and Neil Gaiman, astronaut Leland Melvin, playwright and activist Eve Ensler, actor Natascha McElhone, entrepreneur Tim Ferriss, artists Debbie Millman, Dustin Yellin, and Lia Halloran, cartoonist Alison Bechdel, radio-enchanters Krista Tippett and Jad Abumrad, and composer Paola Prestini with the Young People’s Chorus. As always, there are some thrilling surprises in wait.

Every golden human thread weaving this global lifeline is donating their time and talent, diverting from their own work and livelihood, to offer this generous gift to the world. We’ve made this just because it feels important that it exist, that it serve some measure of consolation by calibration of perspective, perhaps even some joy. The Universe in Verse is ordinarily a ticketed charitable event, with all proceeds benefiting a chosen ecological or scientific-humanistic nonprofit each year. We offer this  year’s  livestream freely,  but making the show exist and beaming it to you had significant costs. If you are so moved and able, please support this colossal labor with a donation to Pioneer Works — our doors are now physically closed to the public, but our hearts remain open to the world as we pirouette to find new ways of serving art, science, and perspective. Your donation is tax-deductible and appreciation-additive.

Yours,

Maria Popova

For anyone unfamiliar with Pioneer Works, here’s more from their About page,

History

Pioneer Works is an artist-run cultural center that opened its doors to the public, free of charge, in 2012. Imagined by its founder, artist Dustin Yellin, as a place in which artists, scientists, and thinkers from various backgrounds converge, this “museum of process” takes its primary inspiration from utopian visionaries such as Buckminster Fuller, and radical institutions such as Black Mountain College.

The three-story red brick building that houses Pioneer Works was built in 1866 for what was then Pioneer Iron Works. The factory, which manufactured railroad tracks and other large-scale machinery, was a local landmark after which Pioneer Street was named. Devastated by fire in 1881, the building was rebuilt, and remained in active use through World War II. Dustin Yellin acquired the building in 2011, and renovated it with Gabriel Florenz, Pioneer Works’ Founding Artistic Director, and a team of talented artists, supporters, and advisors. Together, they established Pioneer Works as a 501c3 nonprofit in 2012.

Since its inception, Pioneer Works has built science studios, a technology lab with 3-D printing, a virtual environment lab for VR and AR production, a recording studio, a media lab for content creation and dissemination, a darkroom, residency studios, galleries, gardens, a ceramics studio, a press, and a bookshop. Pioneer Works’ central hall is home to a rotating schedule of exhibitions, science talks, music performances, workshops, and innovative free public programming.

The Universe in Verse’s curator and host, Maria Popova is best known for her blog. Here’s more from her Wikipedia entry (Note: Links have been removed),

Maria Popova (Bulgarian: Мария Попова; born 28 July 1984)[not verified in body] is a Bulgarian-born, American-based writer of literary and arts commentary and cultural criticism that has found wide appeal (as of 2012, 3 million page views and more than 1 million monthly readers),[needs update] both for its writing and for the visual stylistics that accompany it.[citation needed][needs update] She is most widely known for her blog, Brain Pickings [emphasis mine], an online publication that she has fought to maintain advertisement-free, which features her writing on books, and ideas from the arts, philosophy, culture, and other subjects. In addition to her writing and related speaking engagements, she has served as an MIT Futures of Entertainment Fellow,[when?] as the editorial director at the higher education social network Lore,[when?] and has written for The Atlantic, Wired UK, and other publications. As of 2012, she resided in Brooklyn, New York.[needs update]

There’s one more thing you might want to know about the event,

NOTE: For various artistic, legal, and technical reasons, the livestream will not be available in its entirety for later viewing, but individual readings will be released incrementally on Brain Pickings. As we are challenged to bend limitation into possibility as never before, may this meta-limitation too be an invitation— to be fully present, together across the space that divides us, for a beautiful and unrepeatable experience that animates a shared moment in time, all the more precious for being unrepeatable. “As if what exists, exists so that it can be lost and become precious,” in the words of the poet Lisel Mueller. 

Enjoy! And, if you can, please donate.

viral symphOny: an electronic soundwork à propos during a pandemic

Artist Joseph Nechvatal has a longstanding interest in viruses, i.e., computer viruses, and that work seems strangely apt as we cope with the COVID-19 pandemic. He very kindly sent me some à propos information (received via an April 5, 2020 email),

I wanted to let you know that _viral symphOny_ (2006-2008), my 1 hour 40 minute collaborative electronic noise music symphony, created using custom artificial life C++ software based on the viral phenomenon model, is available to the world for free here:

https://archive.org/details/ViralSymphony

Before you click the link and dive in you might find these bits of information interesting. BTW, I do provide the link again at the end of this post.

Origin of and concept behind the term ‘computer virus’

As I’ve learned to expect, there are two and possibly more origin stories for the term ‘computer virus’. Refreshingly, there is near universal agreement in the material I’ve consulted about John von Neumann’s role as the originator of the concept. After that, it gets more complicated; Wikipedia credits a writer with christening the term (Note: Links have been removed),

The first academic work on the theory of self-replicating computer programs[17] was done in 1949 by John von Neumann, who gave lectures at the University of Illinois about the “Theory and Organization of Complicated Automata”. The work of von Neumann was later published as the “Theory of self-reproducing automata”. In his essay von Neumann described how a computer program could be designed to reproduce itself.[18] Von Neumann’s design for a self-reproducing computer program is considered the world’s first computer virus, and he is considered to be the theoretical “father” of computer virology.[19] In 1972, Veith Risak, directly building on von Neumann’s work on self-replication, published his article “Selbstreproduzierende Automaten mit minimaler Informationsübertragung” (Self-reproducing automata with minimal information exchange).[20] The article describes a fully functional virus written in assembler programming language for a SIEMENS 4004/35 computer system. In 1980 Jürgen Kraus wrote his diplom thesis “Selbstreproduktion bei Programmen” (Self-reproduction of programs) at the University of Dortmund.[21] In his work Kraus postulated that computer programs can behave in a way similar to biological viruses.

Science fiction

The first known description of a self-reproducing program in a short story occurs in 1970 in The Scarred Man by Gregory Benford [emphasis mine], which describes a computer program called VIRUS which, when installed on a computer with telephone modem dialing capability, randomly dials phone numbers until it hits a modem that is answered by another computer. It then attempts to program the answering computer with its own program, so that the second computer will also begin dialing random numbers, in search of yet another computer to program. The program rapidly spreads exponentially through susceptible computers and can only be countered by a second program called VACCINE.[22]

The idea was explored further in two 1972 novels, When HARLIE Was One by David Gerrold and The Terminal Man by Michael Crichton, and became a major theme of the 1975 novel The Shockwave Rider by John Brunner.[23]

The 1973 Michael Crichton sci-fi movie Westworld made an early mention of the concept of a computer virus, being a central plot theme that causes androids to run amok.[24] Alan Oppenheimer’s character summarizes the problem by stating that “…there’s a clear pattern here which suggests an analogy to an infectious disease process, spreading from one…area to the next.” To which the replies are stated: “Perhaps there are superficial similarities to disease” and, “I must confess I find it difficult to believe in a disease of machinery.”[25]
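The idea running through all of these accounts, a program whose output (or payload) is a copy of its own code, can be shown harmlessly with a "quine": a program that prints its own source without installing or modifying anything. This generic Python example is mine and is unrelated to any of the historical programs named above:

```python
# The two lines below form a "quine": executed on their own, they print an
# exact copy of themselves. Self-reproduction is the property von Neumann
# formalized; actual viruses add the step of writing that copy into another
# program or system, which this example deliberately does not do.
s = 's = %r\nprint(s %% s)'
print(s % s)
```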

Scientific American has an October 19, 2001 article citing four different experts’ answers to the question “When did the term ‘computer virus’ arise?” Three of the experts cite academics as the source for the term (usually Fred Cohen). One of the experts does mention writers (for the most part, not the same writers cited in the Wikipedia entry quotation above).

One expert discusses the concept behind the term and confirms what most people will suspect. Interestingly, this expert’s origin story varies somewhat from the other three.

Computer virus concept

From “When did the term ‘computer virus’ arise?” (Joseph Motola response),

The concept behind the first malicious computer programs was described years ago in the Computer Recreations column of Scientific American. The metaphor of the “computer virus” was adopted because of the similarity in form, function and consequence with biological viruses that attack the human system. Computer viruses can insert themselves in another program, taking over control or adversely affecting the function of the program.

Like their biological counterparts, computer viruses can spread rapidly and self-replicate systematically. They also mimic living viruses in the way they must adapt through mutation [emphases mine] to the development of resistance within a system: the author of a computer virus must upgrade his creation in order to overcome the resistance (antiviral programs) or to take advantage of new weakness or loophole within the system.

Computer viruses also act like biologics [emphasis mine] in the way they can be set off: they can be virulent from the outset of the infection, or they can be activated by a specific event (logic bomb). But computer viruses can also be triggered at a specific time (time bomb). Most viruses act innocuous towards a system until their specific condition is met.

The computer industry has expanded the metaphor to now include terms like inoculation, disinfection, quarantine and sanitation [emphases mine]. Now if your system gets infected by a computer virus you can quarantine it until you can call the “virus doctor” who can direct you to the appropriate “virus clinic” where your system can be inoculated and disinfected and an anti-virus program can be prescribed.

More about Joseph Nechvatal and his work on viruses

The similarities between computer and biological viruses are striking and with that in mind, here’s a clip featuring part of viral symphOny,

Before giving you a second link to Nechvatal’s entire viral symphOny, here’s some context about him and his work, from the Joseph Nechvatal Wikipedia entry (Note: Links have been removed),

He began using computers to make “paintings” in 1986 [11] and later, in his signature work, began to employ computer viruses. These “collaborations” with viral systems positioned his work as an early contribution to what is increasingly referred to as a post-human aesthetic.[12][13]

From 1991–1993 he was artist-in-residence at the Louis Pasteur Atelier in Arbois, France and at the Saline Royale/Ledoux Foundation’s computer lab. There he worked on The Computer Virus Project, which was an artistic experiment with computer viruses and computer animation.[14] He exhibited at Documenta 8 in 1987.[15][16]

In 1999 Nechvatal obtained his Ph.D. in the philosophy of art and new technology concerning immersive virtual reality at Roy Ascott’s Centre for Advanced Inquiry in the Interactive Arts (CAiiA), University of Wales College, Newport, UK (now the Planetary Collegium at the University of Plymouth). There he developed his concept of viractualism, a conceptual art idea that strives “to create an interface between the biological and the technological.”[17] According to Nechvatal, this is a new topological space.[18]

In 2002 he extended his experimentation into viral artificial life through a collaboration with the programmer Stephane Sikora of music2eye in a work called the Computer Virus Project II,[19] inspired by the a-life work of John Horton Conway (particularly Conway’s Game of Life), by the general cellular automata work of John von Neumann, by the genetic programming algorithms of John Koza and the auto-destructive art of Gustav Metzger.[20]

In 2005 he exhibited Computer Virus Project II works (digital paintings, digital prints, a digital audio installation and two live electronic virus-attack art installations)[21] in a solo show called cOntaminatiOns at Château de Linardié in Senouillac, France. In 2006 Nechvatal received a retrospective exhibition entitled Contaminations at the Butler Institute of American Art’s Beecher Center for Arts and Technology.[4]

Dr. Nechvatal has also contributed to digital audio work with his noise music viral symphOny [emphasis mine], a collaborative sound symphony created by using his computer virus software at the Institute for Electronic Arts at Alfred University.[22][23] viral symphOny was presented as a part of nOise anusmOs in New York in 2012.[24]

Here’s a link to the complete viral symphOny with his website here and his blog here.

ETA April 7, 2020 at 1135 PT: Joseph Nechvatal’s book review of Gustav Metzger’s collected writings (1953–2016) has just (April 2020) dropped at The Brooklyn Rail here:  https://brooklynrail.org/2020/04/art_books/Gustav-Metzgers-Writings.

Large Interactive Virtual Environment Laboratory (LIVELab) located in McMaster University’s Institute for Music & the Mind (MIMM) and the MetaCreation Lab at Simon Fraser University

Both of these bits have a music focus, but they represent two entirely different science-based approaches to that art form: one is solely about the music, while in the other, music is one of several art-making processes being investigated.

Large Interactive Virtual Environment Laboratory (LIVELab) at McMaster University

Laurel Trainor and Dan J. Bosnyak, both of McMaster University (Ontario, Canada), have written an October 27, 2019 essay about the LIVELab and their work for The Conversation website (Note: Links have been removed),

The Large Interactive Virtual Environment Laboratory (LIVELab) at McMaster University is a research concert hall. It functions as both a high-tech laboratory and theatre, opening up tremendous opportunities for research and investigation.

As the only facility of its kind in the world, the LIVELab is a 106-seat concert hall equipped with dozens of microphones, speakers and sensors to measure brain responses, physiological responses such as heart rate, breathing rates, perspiration and movements in multiple musicians and audience members at the same time.

Engineers, psychologists and clinician-researchers from many disciplines work alongside musicians, media artists and industry to study performance, perception, neural processing and human interaction.

In the LIVELab, acoustics are digitally controlled so the experience can change instantly from extremely silent with almost no reverberation to a noisy restaurant to a subway platform or to the acoustics of Carnegie Hall.

Real-time physiological data such as heart rate can be synchronized with data from other systems such as motion capture, and monitored and recorded from both performers and audience members. The result is that the reams of data that can now be collected in a few hours in the LIVELab used to take weeks or months to collect in a traditional lab. And having measurements of multiple people simultaneously is pushing forward our understanding of real-time human interactions.

Consider the implications of how music might help people with Parkinson’s disease to walk more smoothly or children with dyslexia to read better.

[…] area of ongoing research is the effectiveness of hearing aids. By the age of 60, nearly 49 per cent of people will suffer from some hearing loss. People who wear hearing aids are often frustrated when listening to music because the hearing aids distort the sound and cannot deal with the dynamic range of the music.

The LIVELab is working with the Hamilton Philharmonic Orchestra to solve this problem. During a recent concert, researchers evaluated new ways of delivering sound directly to participants’ hearing aids to enhance sounds.

Researchers hope new technologies can not only increase live musical enjoyment but alleviate the social isolation caused by hearing loss.

Imagine the possibilities for understanding music and sound: How it might help to improve cognitive decline, manage social performance anxiety, help children with developmental disorders, aid in treatment of depression or keep the mind focused. Every time we conceive and design a study, we think of new possibilities.
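One detail from the essay, the synchronization of heart rate data with motion-capture data, can be pictured with a small sketch. This is not the LIVELab’s actual pipeline; it is a generic, hypothetical example of resampling two streams recorded at different rates onto a shared clock so they can be compared sample by sample.

```python
import numpy as np

# Hypothetical streams: heart rate sampled at 4 Hz, motion capture at 120 Hz.
hr_t = np.arange(0, 10, 1 / 4)                # timestamps in seconds
hr = 70 + 5 * np.sin(0.5 * hr_t)              # made-up heart-rate values (bpm)
mo_t = np.arange(0, 10, 1 / 120)
mo = np.sin(2 * np.pi * 1.5 * mo_t)           # made-up body-sway signal

# Resample both onto a common 30 Hz time base by linear interpolation.
common_t = np.arange(0, 10, 1 / 30)
hr_sync = np.interp(common_t, hr_t, hr)
mo_sync = np.interp(common_t, mo_t, mo)

# The signals now share one clock and can be compared directly,
# for example with a simple correlation.
print(np.corrcoef(hr_sync, mo_sync)[0, 1])
```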

The essay also includes an embedded 12 min. video about LIVELab and details about studies conducted on musicians and live audiences. Apparently, audiences experience live performance differently than recorded performances and musicians use body sway to create cohesive performances. You can find the McMaster Institute for Music & the Mind here and McMaster’s LIVELab here.

Capturing the motions of a string quartet performance. Laurel Trainor, Author provided [McMaster University]

Metacreation Lab at Simon Fraser University (SFU)

I just recently discovered that there’s a Metacreation Lab at Simon Fraser University (Vancouver, Canada), whose homepage states: “Metacreation is the idea of endowing machines with creative behavior.” Here’s more from the homepage,

As the contemporary approach to generative art, Metacreation involves using tools and techniques from artificial intelligence, artificial life, and machine learning to develop software that partially or completely automates creative tasks. Through the collaboration between scientists, experts in artificial intelligence, cognitive sciences, designers and artists, the Metacreation Lab for Creative AI is at the forefront of the development of generative systems, be they embedded in interactive experiences or integrated into current creative software. Scientific research in the Metacreation Lab explores how various creative tasks can be automated and enriched. These tasks include music composition [emphasis mine], sound design, video editing, audio/visual effect generation, 3D animation, choreography, and video game design.

Besides scientific research, the team designs interactive and generative artworks that build upon the algorithms and research developed in the Lab. This work often challenges the social and cultural discourse on AI.

Much to my surprise, I received the Metacreation Lab’s inaugural newsletter by email on Friday, November 15, 2019,

Greetings,

We decided to start a mailing list for disseminating news, updates, and announcements regarding generative art, creative AI and New Media. In this newsletter: 

  1. ISEA 2020: The International Symposium on Electronic Art. ISEA returns to Montreal; check the CFP below and contribute!
  2. ISEA 2015: A transcription of Sara Diamond’s keynote address “Action Agenda: Vancouver’s Prescient Media Arts” is now available for download. 
  3. Brain Art, the book: we are happy to announce the release of the first comprehensive volume on Brain Art. Edited by Anton Nijholt, and published by Springer.

Here are more details from the newsletter,

ISEA2020 – 26th International Symposium on Electronic Arts

Montreal, September 24, 2019
Montreal Digital Spring (Printemps numérique) is launching a call for participation as part of ISEA2020 / MTL connect to be held from May 19 to 24, 2020 in Montreal, Canada. Founded in 1990, ISEA is one of the world’s most prominent international arts and technology events, bringing together scholarly, artistic, and scientific domains in an interdisciplinary discussion and showcase of creative productions applying new technologies in art, interactivity, and electronic and digital media. For 2020, ISEA Montreal turns towards the theme of sentience.

ISEA2020 will be fully dedicated to examining the resurgence of sentience—feeling-sensing-making sense—in recent art and design, media studies, science and technology studies, philosophy, anthropology, history of science and the natural scientific realm—notably biology, neuroscience and computing. We ask: why sentience? Why and how does sentience matter? Why have artists and scholars become interested in sensing and feeling beyond, with and around our strictly human bodies and selves? Why has this notion been brought to the fore in an array of disciplines in the 21st century?
CALL FOR PARTICIPATION: WHY SENTIENCE? ISEA2020 invites artists, designers, scholars, researchers, innovators and creators to participate in the various activities deployed from May 19 to 24, 2020. To complete an application, please fill in the forms and follow the instructions.

The final submissions deadline is NOVEMBER 25, 2019. You can submit your application in the following categories:
- Workshop and Tutorial
- Artistic Work
- Full / Short Paper
- Panel
- Poster
- Artist Talk
- Institutional Presentation
Find Out More
You can apply for several categories. All profiles are welcome. Notifications of acceptance will be sent around January 13, 2020.

Important: please note that the Call for Participation for MTL connect has not yet launched, but you can also apply to participate in the programming of the other Pavilions (4 other themes) when registration opens (coming soon): mtlconnecte.ca/en

TICKETS

Registration is now open to attend ISEA2020 / MTL connect, from May 19 to 24, 2020. Book your Full Pass today and get the early-bird rate!
Buy Now

More from the newsletter,

ISEA 2015 was in Vancouver, Canada, and the proceedings and art catalog are still online. The news is that Sara Diamond released her 2015 keynote address as a paper: Action Agenda: Vancouver’s Prescient Media Arts. It is never too late, so we thought we would let you know about this great read. See The 2015 Proceedings Here

The last item from the inaugural newsletter,

The first book that surveys how brain activity can be monitored and manipulated for artistic purposes, with contributions by interactive media artists, brain-computer interface researchers, and neuroscientists. View the Book Here

Per the Leonardo review by Cristina Albu:

“Another seminal contribution of the volume is the presentation of multiple taxonomies of “brain art,” which can help art critics develop better criteria for assessing this genre. Mirjana Prpa and Philippe Pasquier’s meticulous classification shows how diverse such works have become as artists consider a whole range of variables of neurofeedback.” Read the Review

For anyone not familiar with the ‘Leonardo’ cited in the above, it’s Leonardo, the International Society for the Arts, Sciences and Technology.

Should this kind of information excite and motivate you to start metacreating, you can get in touch with the lab,

Our mailing address is:
Metacreation Lab for Creative AI
School of Interactive Arts & Technology
Simon Fraser University
250-13450 102 Ave.
Surrey, BC V3T 0A3
Web: http://metacreation.net/
Email: metacreation_admin (at) sfu (dot) ca

The medical community and art/science: two events in Canada in November 2019

This time it’s the performing arts. I have one theatre and psychiatry production in Toronto and a music and medical science event in Vancouver.

Toronto’s Here are the Fragments opening on November 19, 2019

From a November 2, 2019 ArtSci Salon announcement (received via email),

An immersive theatre experience inspired by the psychiatric writing of Frantz Fanon

Here are the Fragments.
Co-produced by The ECT Collective and The Theatre Centre
November 19-December 1, 2019
Tickets: Preview $17 | Student/senior/arts worker $22 | Adult $30
Service charges may apply
Book 416-538-0988 | PURCHASE ONLINE

An immigrant psychiatrist develops psychosis and then schizophrenia. He walks a long path towards reconnection with himself, his son, and humanity.

Walk with him.

Within our immersive design (a fabric of sound, video, and live actors) lean in close to the possibilities of perceptual experience.

Schizophrenics ‘hear voices’. Schizophrenics fear loss of control over their own thoughts and bodies. But how does any one of us actually separate internal and external voices? How do we trust what we see or feel? How do we know which voices are truly our own?

Within the installation find places of retreat from chaos. Find poetry. Find critical analysis.

Explore archival material, Fanon’s writings and contemporary interviews with psychiatrists, neuroscientists, artists, and people living with schizophrenia, to reflect on the relationships between identity, history, racism and mental health.

I was able to find out more in a November 6, 2019 article at broadwayworld.com (Note: Some of this is repetitive),

How do we trust what we see or feel? How do we know which voices are truly our own? THE THEATRE CENTRE and THE ECT COLLECTIVE are proud to Co-produce HERE ARE THE FRAGMENTS., an immersive work of theatre written by Suvendrini Lena, Theatre Centre Residency artist and CAMH [ Centre for Addiction and Mental Health] Neurologist. Based on the psychiatric writing of famed political theorist Frantz Fanon and combining narratives, sensory exploration, and scientific and historical analysis, HERE ARE THE FRAGMENTS. reflects on the relationships between identity, history, racism, and mental health. FRAGMENTS. will run November 19 to December 1 at The Theatre Centre (Opening Night November 21).

HERE ARE THE FRAGMENTS. consists of live performances within an interactive installation. The plot, told in fragments, follows a psychiatrist early in his training as he develops psychosis and ultimately, treatment resistant schizophrenia. Eduard, his son, struggles to connect with his father, while the young man must also make difficult treatment decisions.

The Theatre Centre’s Franco Boni Theatre and Gallery will be transformed into an immersive interactive installation. The design will offer many spaces for exploration, investigation, and discovery, bringing audiences into the perceptual experience of Schizophrenia. The scenes unfold around you, incorporating a fabric of sound, video, and live actors. Amidst the seeming chaos there will also be areas of retreat; whispering voices, Fanon’s own books, archival materials, interviews with psychiatrists, neuroscientists, and people living with schizophrenia all merge to provoke analysis and reflection on the intersection of racism and mental health.

Suvendrini Lena (Writer) is a playwright and neurologist. She works as the staff neurologist at the Centre for Addiction and Mental Health and at the Centre for Headache at Women’s College Hospital [Toronto]. She is an Assistant Professor of Psychiatry and Neurology at the University of Toronto where she teaches medical students, residents, and fellows. She also teaches a course called Staging Medicine, a collaboration between The Theatre Centre and University of Toronto Postgraduate Medical Education.

Frantz Fanon (1925-1961), was a French West Indian psychiatrist, political philosopher, revolutionary, and writer, whose works are influential in the fields of post-colonial studies, critical theory, and Marxism. Fanon published numerous books, including Black Skin, White Masks (1952) and The Wretched of the Earth (1961).

In addition to performances, The Theatre Centre will host a number of panels and events. Highlights include a post-show talkback with Ngozi Paul (Development Producer, Artist/Activist) and Psychiatrist Collaborator Araba Chintoh on November 22. Also of note is Our Patients and Our Selves: Experiences of Racism Among Health Care Workers with facilitator Dr. Fatimah Jackson-Best of Black Health Alliance on November 23rd and Fanon Today: A Creative Symposium on November 24th, a panel, reading, and creative discussion featuring David Austin, Frank Francis, Doris Rajan and George Elliot Clarke [formerly Toronto’s Poet Laureate and Canadian Parliamentary Poet Laureate; emphasis and link mine].

You can get more details and a link for ticket purchase here.

Sounds and Science: Vienna meets Vancouver on November 30, 2019

‘Sounds and Science’ originated at the Medical University of Vienna (Austria), as this November 6, 2019 event posting on the University of British Columbia’s (UBC) Faculty of Medicine website explains,

The University of British Columbia will host the first Canadian concert bringing leading musical talents of Vienna together with dramatic narratives from science and medicine.

“Sounds and Science: Vienna Meets Vancouver” is part of the President’s Concert Series, to be held Nov. 30, 2019 on UBC campus. The event is modeled on a successful concert series launched in Austria in 2014, in cooperation with the Medical University of Vienna.

“Basic research tends to always stay within its own box, yet research is telling the most beautiful stories,” says Dr. Josef Penninger, director of UBC’s Life Sciences Institute, a professor of medical genetics and a Canada 150 Chair. “With this concert, we are bringing science out of the ivory tower, using the music of great composers such as Mozart, Schubert or Strauss to transport stories of discovery and insight into the major diseases that affected the composers themselves, and continue to have a significant impact on our society.”

Famous composers of the past are often seen as icons of classical music, but in fact, they were human beings, living under enormous physical constraints – perhaps more than people today, according to Dr. Manfred Hecking, an associate professor of internal medicine at the Medical University of Vienna.

“But ‘Sounds and Science’ is not primarily about suffering and disease,” says Dr. Hecking, a former member of the Vienna Philharmonic Orchestra who will be playing double bass during the concert. “It is a fun way of bringing music and science together. Combining music and thought, we hope that we will reach the attendees of the ‘Sounds and Science’ concert in Vancouver on an emotional, perhaps even personal level.”

A showcase for Viennese music, played in the tradition of the Vienna Philharmonic by several of its members, as well as the world-class science being done here at UBC, “Sounds and Science” will feature talks by UBC clinical and research faculty, including Dr. Penninger. Their topics will range from healthy aging and cancer research to the historical impact of bacterial infections.

Faculty speaking at “Sounds and Science” will be:
Dr. Allison Eddy, professor and head, department of pediatrics, and chief, pediatric medicine, BC Children’s Hospital and BC Women’s Hospital;
Dr. Troy Grennan, clinical assistant professor, division of infectious diseases, UBC faculty of medicine;
Dr. Poul Sorensen, professor, department of pathology and laboratory medicine, UBC faculty of medicine; and
Dr. Roger Wong, executive associate dean, education and clinical professor of geriatric medicine, UBC faculty of medicine
UBC President and Vice-Chancellor Santa J. Ono and Dr. Dermot Kelleher, dean of the faculty of medicine and vice-president, health at UBC, will also speak during the evening.

The musicians include two outstanding members of the Vienna Philharmonic – violinist Prof. Günter Seifert and violist-conductor Hans Peter Ochsenhofer, who will be joined by violinist-conductor Rémy Ballot and double bassist Dr. Manfred Hecking, who serves as a regular substitute in the orchestra.

For those whose lives intertwine music and science, the experience of cross-connection will be familiar. For Dr. Penninger, the concert represents an opportunity to bring the famous sound of the Vienna Philharmonic to UBC and British Columbia, to a new audience. “That these musicians are coming here is a fantastic recognition and acknowledgement of the amazing work being done at UBC,” he says.

“Like poetry, music is a universal language that all of us immediately understand and can relate to. Science tells the most amazing stories. Both of them bring meaning and beauty to our world.”

“Sounds and Science” – Vienna Meets Vancouver is part of the President’s Concert Series | November 30, 2019 on campus at the Old Auditorium from 6:30 to 9:30 p.m.

To learn more about the Sounds and Science concert series hosted in cooperation with the Medical University of Vienna, visit www.soundsandscience.com.

I found more information regarding logistics,

Saturday, November 30, 2019
6:30 pm
The Old Auditorium, 6344 Memorial Road, UBC

Box office and Lobby: Opens at 5:30 pm (one hour prior to start of performance)
Old Auditorium Concert Hall: Opens at 6:00 pm

Sounds
Günter Seifert  VIOLIN
Rémy Ballot VIOLIN
Hans Peter Ochsenhofer VIOLA
Manfred Hecking DOUBLE BASS

Science
Josef Penninger GENETICS
Manfred Hecking INTERNAL MEDICINE
Troy Grennan INFECTIOUS DISEASE
Poul Sorensen PATHOLOGY & LABORATORY MEDICINE
Allison Eddy PEDIATRICS
Roger Wong GERIATRICS

Tickets are also available in person at UBC concert box-office locations:
– Old Auditorium
– Freddie Wood Theatre
- The Chan Centre for the Performing Arts

General admission: $10.00
Free seating for UBC students
Purchase tickets for both President’s Concert Series events to make it a package, and save 10% on both performances

Transportation
Public and Bike Transportation
Please visit Translink for bike and transit information.
Parking
Suggested parking in the Rose Garden Parkade.

Buy Tickets

The Sounds and Science website has a feature about the upcoming Vancouver concert and it offers a history dating from 2008,

MUSIC AND MEDICINE

The idea of combining music and medicine into the “Sounds & Science” scientific concert series started in 2008, when the Austrian violinist Rainer Honeck played Bach’s Chaconne in D minor directly before a keynote lecture, held by Nobel laureate Peter Doherty, at the Austrian Society of Allergology and Immunology’s yearly meeting in Vienna. The experience at that lecture was remarkable, truly a special moment. “Sounds & Science” was then taken a step further by bringing several concepts together: Anton Neumayr’s medical histories of composers, John Brockman’s idea of a “Third Culture” (very broadly speaking: combining humanities and science), and finally, our perception that science deserves a “Red Carpet” to walk on, in front of an audience.

Attendees of the “Sounds & Science” series have also described how music opens the mind and enables a better understanding of concepts in life, and thereby science in general. At a typical concert/lecture, we start with a chamber music piece, continue with the pathobiography of the composer, go back to the music, and then introduce our main speaker, whose talk should be genuinely understandable to a broad, not necessarily scientifically trained audience. In the second half, we usually try to present a musical climax.

One prerequisite that “Sounds & Science” stands for is the outstanding quality of the principal musicians and of the main speakers. Our previous concerts/lectures have so far covered several aspects of medicine like “Music & Cancer” (Debussy, Brahms, Schumann), “Music and Heart” (Bruckner, Mahler, Wagner), and “Music and Diabetes” (Bach, Ysaÿe, Puccini). For many individuals who have combined music and medicine or music and science in their own lives and biographies, the experience of a cross-connection between sounds and science is quite familiar. But there is also the “fun” aspect of sharing and participating, and at “Sounds & Science” events, we usually try to ensure that the event location can easily be turned into a meeting place.

At a guess, Sounds and Science started informally in 2008 and became a formal series in 2014.

There is a video but it’s in German. It’s enjoyable viewing with beautiful music, but unless you have German language skills you won’t get the humour. Also, it runs for over 9 minutes (a little longer than most of the videos you’ll find here on FrogHeart),

Enjoy!

Sonifying proteins to make music and brand new proteins

Markus Buehler at the Massachusetts Institute of Technology (MIT) has been working with music and science for a number of years. My December 9, 2011 posting, Music, math, and spiderwebs, was the first one here featuring his work. My November 28, 2012 posting, Producing stronger silk musically, was a followup to Buehler’s previous work.

A June 28, 2019 news item on Azonano provides a recent update,

Composers string notes of different pitch and duration together to create music. Similarly, cells join amino acids with different characteristics together to make proteins.

Now, researchers have bridged these two seemingly disparate processes by translating protein sequences into musical compositions and then using artificial intelligence to convert the sounds into brand-new proteins. …

Caption: Researchers at MIT have developed a system for converting the molecular structures of proteins, the basic building blocks of all living beings, into audible sound that resembles musical passages. Then, reversing the process, they can introduce some variations into the music and convert it back into new proteins never before seen in nature. Credit: Zhao Qin and Francisco Martin-Martinez

A June 26, 2019 American Chemical Society (ACS) news release, which originated the news item, provides more detail and a video,

To make proteins, cellular structures called ribosomes add one of 20 different amino acids to a growing chain in combinations specified by the genetic blueprint. The properties of the amino acids and the complex shapes into which the resulting proteins fold determine how the molecule will work in the body. To better understand a protein’s architecture, and possibly design new ones with desired features, Markus Buehler and colleagues wanted to find a way to translate a protein’s amino acid sequence into music.

The researchers transposed the unique natural vibrational frequencies of each amino acid into sound frequencies that humans can hear. In this way, they generated a scale consisting of 20 unique tones. Unlike musical notes, however, each amino acid tone consisted of the overlay of many different frequencies –– similar to a chord. Buehler and colleagues then translated several proteins into audio compositions, with the duration of each tone specified by the different 3D structures that make up the molecule. Finally, the researchers used artificial intelligence to recognize specific musical patterns that corresponded to certain protein architectures. The computer then generated scores and translated them into new-to-nature proteins. In addition to being a tool for protein design and for investigating disease mutations, the method could be helpful for explaining protein structure to broad audiences, the researchers say. They even developed an Android app [Amino Acid Synthesizer] to allow people to create their own bio-based musical compositions.

Here’s the ACS video,

A June 26, 2019 MIT news release (also on EurekAlert) provides some specifics and the MIT news release includes two embedded audio files,

Want to create a brand new type of protein that might have useful properties? No problem. Just hum a few bars.

In a surprising marriage of science and art, researchers at MIT have developed a system for converting the molecular structures of proteins, the basic building blocks of all living beings, into audible sound that resembles musical passages. Then, reversing the process, they can introduce some variations into the music and convert it back into new proteins never before seen in nature.

Although it’s not quite as simple as humming a new protein into existence, the new system comes close. It provides a systematic way of translating a protein’s sequence of amino acids into a musical sequence, using the physical properties of the molecules to determine the sounds. Although the sounds are transposed in order to bring them within the audible range for humans, the tones and their relationships are based on the actual vibrational frequencies of each amino acid molecule itself, computed using theories from quantum chemistry.

The system was developed by Markus Buehler, the McAfee Professor of Engineering and head of the Department of Civil and Environmental Engineering at MIT, along with postdoc Chi Hua Yu and two others. As described in the journal ACS Nano, the system translates the 20 types of amino acids, the building blocks that join together in chains to form all proteins, into a 20-tone scale. Any protein’s long sequence of amino acids then becomes a sequence of notes.

While such a scale sounds unfamiliar to people accustomed to Western musical traditions, listeners can readily recognize the relationships and differences after familiarizing themselves with the sounds. Buehler says that after listening to the resulting melodies, he is now able to distinguish certain amino acid sequences that correspond to proteins with specific structural functions. “That’s a beta sheet,” he might say, or “that’s an alpha helix.”
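To picture the sequence-to-notes mapping in the plainest possible terms, here is a toy Python sketch. It is not the authors’ method: where their system derives each amino acid’s chord-like tone from its quantum-chemistry-computed vibrational spectrum, and takes note durations from the folded 3D structure, this version simply assigns each of the 20 residues one frequency on an arbitrary 20-step scale.

```python
# Toy illustration of "amino acid sequence -> 20-tone musical sequence".
# The frequencies below are an arbitrary 20-step division of one octave,
# NOT the vibration-derived tones used in the actual study.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard one-letter codes

def toy_scale(base_hz=220.0):
    # 20 equal steps spanning one octave above base_hz.
    step = 2 ** (1 / 20)
    return {aa: base_hz * step ** i for i, aa in enumerate(AMINO_ACIDS)}

def sonify(sequence, scale=None):
    # Translate a protein sequence into (residue, frequency-in-Hz) "notes".
    scale = scale or toy_scale()
    return [(aa, round(scale[aa], 1)) for aa in sequence if aa in scale]

# Example with a short, made-up peptide sequence:
print(sonify("MKTAYIAKQR"))
```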

Learning the language of proteins

The whole concept, Buehler explains, is to get a better handle on understanding proteins and their vast array of variations. Proteins make up the structural material of skin, bone, and muscle, but are also enzymes, signaling chemicals, molecular switches, and a host of other functional materials that make up the machinery of all living things. But their structures, including the way they fold themselves into the shapes that often determine their functions, are exceedingly complicated. “They have their own language, and we don’t know how it works,” he says. “We don’t know what makes a silk protein a silk protein or what patterns reflect the functions found in an enzyme. We don’t know the code.”

By translating that language into a different form that humans are particularly well-attuned to, and that allows different aspects of the information to be encoded in different dimensions — pitch, volume, and duration — Buehler and his team hope to glean new insights into the relationships and differences between different families of proteins and their variations, and use this as a way of exploring the many possible tweaks and modifications of their structure and function. As with music, the structure of proteins is hierarchical, with different levels of structure at different scales of length or time.

The team then used an artificial intelligence system to study the catalog of melodies produced by a wide variety of different proteins. They had the AI system introduce slight changes in the musical sequence or create completely new sequences, and then translated the sounds back into proteins that correspond to the modified or newly designed versions. With this process they were able to create variations of existing proteins — for example of one found in spider silk, one of nature’s strongest materials — thus making new proteins unlike any produced by evolution.

Although the researchers themselves may not know the underlying rules, “the AI has learned the language of how proteins are designed,” and it can encode it to create variations of existing versions, or completely new protein designs, Buehler says. Given that there are “trillions and trillions” of potential combinations, he says, when it comes to creating new proteins “you wouldn’t be able to do it from scratch, but that’s what the AI can do.”
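The reverse step can be sketched just as schematically. In the actual work a trained AI model generates the new “melodies”; in the hypothetical snippet below a trivial random substitution stands in for that model, simply to show the round trip from an existing sequence to a varied one that would then be read back as a never-before-seen protein. The example sequence is an invented spider-silk-like repeat.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def vary(sequence, n_changes=2, seed=0):
    # Stand-in for the AI step: introduce a few changes into the "melody"
    # (represented here directly as the residue sequence) and return the
    # varied sequence, which is then read as a newly designed protein.
    rng = random.Random(seed)
    seq = list(sequence)
    for _ in range(n_changes):
        pos = rng.randrange(len(seq))
        seq[pos] = rng.choice(AMINO_ACIDS)
    return "".join(seq)

original = "GPGGYGPGQQGPGGYGPGQQ"   # invented spider-silk-like repeat
print("original:", original)
print("variant: ", vary(original))
```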

“Composing” new proteins

By using such a system, he says, training the AI system with a set of data for a particular class of proteins might take a few days, but it can then produce a design for a new variant within microseconds. “No other method comes close,” he says. “The shortcoming is the model doesn’t tell us what’s really going on inside. We just know it works.”

This way of encoding structure into music does reflect a deeper reality. “When you look at a molecule in a textbook, it’s static,” Buehler says. “But it’s not static at all. It’s moving and vibrating. Every bit of matter is a set of vibrations. And we can use this concept as a way of describing matter.”

The method does not yet allow for any kind of directed modifications — any changes in properties such as mechanical strength, elasticity, or chemical reactivity will be essentially random. “You still need to do the experiment,” he says. When a new protein variant is produced, “there’s no way to predict what it will do.”

The team also created musical compositions developed from the sounds of amino acids, which define this new 20-tone musical scale. The art pieces they constructed consist entirely of the sounds generated from amino acids. “There are no synthetic or natural instruments used, showing how this new source of sounds can be utilized as a creative platform,” Buehler says. Musical motifs derived from both naturally existing proteins and AI-generated proteins are used throughout the examples, and all the sounds, including some that resemble bass or snare drums, are also generated from the sounds of amino acids.

The researchers have created a free Android smartphone app, called Amino Acid Synthesizer, to play the sounds of amino acids and record protein sequences as musical compositions.

Here’s a link to and a citation for the paper,

A Self-Consistent Sonification Method to Translate Amino Acid Sequences into Musical Compositions and Application in Protein Design Using Artificial Intelligence by Chi-Hua Yu, Zhao Qin, Francisco J. Martin-Martinez, Markus J. Buehler. ACS Nano 2019 XXXXXXXXXX-XXX. DOI: https://doi.org/10.1021/acsnano.9b02180 Publication Date: June 26, 2019. Copyright © 2019 American Chemical Society

This paper is behind a paywall.

ETA October 23, 2019 1000 hours: Ooops! I almost forgot the link to the Amino Acid Synthesizer.