The Canadian Institute for Advanced Research (CIFAR) is investigating the ‘Future of Being Human’ and has issued a global call for proposals, but there is one catch: your team has to include at least one person (with or without citizenship) who’s living and working in Canada. (Note: I am available.)
New program proposals should explore the long term intersection of humans, science and technology, social and cultural systems, and our environment. Our understanding of the world around us, and new insights into individual and societal behaviour, have the potential to provide enormous benefits to humanity and the planet.
We invite bold proposals from researchers at universities or research institutions that ask new questions about our complex emerging world. We are confronting challenging problems that require a diverse team incorporating multiple disciplines (potentially spanning the humanities, social sciences, arts, physical sciences, and life sciences [emphasis mine]) to engage in a sustained dialogue to develop new insights, and change the conversation on important questions facing science and humanity.
CIFAR is committed to creating a more diverse, equitable, and inclusive environment. We welcome proposals that include individuals from countries and institutions that are not yet represented in our research community.
Here’s a description, albeit a little repetitive, of what CIFAR is asking researchers to do (from the Program Guide [PDF]),
For CIFAR’s next Global Call for Ideas, we are soliciting proposals related to The Future of Being Human, exploring in the long term the intersection of humans, science and technology, social and cultural systems, and our environment. Our understanding of the natural world around us, and new insights into individual and societal behaviour, have the potential to provide enormous benefits to humanity and the planet. We invite bold proposals that ask new questions about our complex emerging world, where the issues under study are entangled and dynamic. We are confronting challenging problems that necessitate a diverse team incorporating multiple disciplines (potentially spanning the humanities, social sciences, arts, physical sciences, and life sciences) to engage in a sustained dialogue to develop new insights, and change the conversation on important questions facing science and humanity. [p. 2 print; p. 4 PDF]
I stumbled across this event on my Twitter feed (h/t @katepullinger; Note: Kate Pullinger is a novelist and Professor of Creative Writing and Digital Media, Director of the Centre for Cultural and Creative Industries [CCCI] at Bath Spa University in the UK).
Anyone who visits here with any frequency will have noticed I have a number of articles on technology and the body (you can find them in the ‘human enhancement’ category and/or search for the machine/flesh tag). Boddington’s view is more expansive than the one I’ve taken and I welcome it. First, here’s the event information and, then, a link to her open access paper from February 2021.
This year’s CCCI Public Lecture will be given by Ghislaine Boddington. Ghislaine is Creative Director of body>data>space and Reader in Digital Immersion at University of Greenwich. Ghislaine has worked at the intersection of the body, the digital, and spatial research for many years. This will be her first in-person appearance since the start of the pandemic, and she will share with us the many insights she has gathered during this extraordinary pivot to online interfaces much of the world has been forced to undertake.
With a background in performing arts and body technologies, Ghislaine is recognised as a pioneer in the exploration of digital intimacy, telepresence and virtual physical blending since the early 90s. As a curator, keynote speaker and radio presenter she has shared her outlook on the future human into the cultural, academic, creative industries and corporate sectors worldwide, examining topical issues with regards to personal data usage, connected bodies and collective embodiment. Her research led practice, examining the evolution of the body as the interface, is presented under the heading ‘The Internet of Bodies’. Recent direction and curation outputs include “me and my shadow” (Royal National Theatre 2012), FutureFest 2015-18 and Collective Reality (Nesta’s FutureFest / SAT Montreal 2016/17). In 2017 Ghislaine was awarded the international IX Immersion Experience Visionary Pioneer Award. She recently co-founded University of Greenwich Strategic Research Group ‘CLEI – Co-creating Liveness in Embodied Immersion’ and is an Associate Editor for AI & Society (Springer). Ghislaine is a long term advocate for diversity and inclusion, working as a Trustee for Stemette Futures and Spokesperson for Deutsche Bank ‘We in Social Tech’ initiative. She is a team member and presenter with BBC World Service flagship radio show/podcast Digital Planet.
Date and time
Thu, 24 June 2021 08:00 – 09:00 [am] PDT
Boddington’s paper is what ignited my interest; here’s a link to and a citation for it,
The Weave—virtual physical presence design—blending processes for the future
Coming from a performing arts background, dance led, in 1989, I became obsessed with the idea that there must be a way for us to be able to create and collaborate in our groups, across time and space, whenever we were not able to be together physically. The focus of my work, as a director, curator and presenter across the last 30 years, has been on our physical bodies and our data selves and how they have, through the extended use of our bodies into digitally created environments, started to merge and converge, shifting our relationship and understanding of our identity and our selfhood.
One of the key methodologies that I have been using since the mid-1990s is inter-authored group creation, a process we called The Weave (Boddington 2013a, b). It uses the simple and universal metaphor of braiding, plaiting or weaving three strands of action and intent, these three strands being:
1. The live body—whether that of the performer, the participant, or the public;
2. The technologies of today—our tools of virtually physical reflection;
3. The content—the theme in exploration.
As with a braid or a plait, the three strands must be weaved simultaneously. What is key to this weave is that in any co-creation between the body and technology, the technology cannot work without the body; hence, there will always be virtual/physical blending. [emphasis mine]
Cyborg culture is also moving forward at a pace, with most countries having four or five cyborgs who have reached media status. Manel Munoz is the weather man as such, fascinated and affected by cyclones and anticyclones; his back-of-the-head implant sends vibrations to different sides of his head linked to weather changes around him.
Neil Harbisson from Northern Ireland calls himself a trans-species rather than a cyborg, because his implant is permanently fused into the crown of his head. He is the first trans-species/cyborg to have his passport photo accepted as he exists with his fixed antenna. Neil has, from birth, an eye condition called greyscale, which means he only sees the world in grey and white. He uses his antennae camera to detect colour, and it sends a vibration with a different frequency for each colour viewed. He is learning what colours are within his viewpoint at any given time through the vibrations in his head, a synaesthetic method of transference of one sense for another. Moon Ribas, a Spanish choreographer and a dancer, had two implants placed into the top of her feet, set to sense seismic activity as it occurs worldwide. When a small earthquake occurs somewhere, she receives small vibrations; a bigger eruption gives her body a more intense vibration. She dances as she receives and reacts to these transferred data. She feels a need to be closer to our earth, a part of nature (Harbisson et al. 2018).
Medical, non medical and sub-dermal implants
Medical implants, embedded into the body or subdermally (nearer the surface), have rapidly advanced in the last 30 years with extensive use of cardiac pacemakers, hip implants, implantable drug pumps and cochlear implants helping partially deaf people to hear.
Deep body and subdermal implants can be personalised to your own needs. They can be set to transmit chosen aspects of your body data outwards, but they also can receive and control data in return. There are about 200 medical implants in use today. Some are complex, like deep brain stimulation for motor neurone disease, and others we are more familiar with, for example, pacemakers. Most medical implants are not digitally linked to the outside world at present, but this is in rapid evolution.
Kevin Warwick, a pioneer in this area, has interconnected himself and his partner with implants for joint use of their personal and home computer systems through their BrainGate (Warwick 2008) implant, an interface between the nervous system and the technology. They are connected bodies. He works onwards with his experiments to feel the shape of distant objects and heat through fingertip implants.
‘Smart’ implants into the brain for deep brain stimulation are in use and in rapid advancement. The ethics of these developments is under constant debate in 2020 and will be onwards, as is proved by the mass coverage of the Neuralink, Elon Musk’s innovation which connects to the brain via wires, with the initial aim to cure human diseases such as dementia, depression and insomnia and onwards plans for potential treatment of paraplegia (Musk 2016).
Given how many times I’ve featured art/sci (also known as art/science and/or sciart) and cyborgs and medical implants here, my excitement was a given.
For anyone who wants to pursue Boddington’s work further, her eponymous website is here, the body>data>space site is here, and her University of Greenwich profile page is here.
For anyone interested in the Centre for Cultural and Creative Industries (CCCI), their site is here.
The report was launched by 221 A, a Vancouver (Canada)-based arts and culture organization and funded by the Canada Council for the Arts through their Digital Strategy Fund. Here’s more from the BACP report in the voice of its research leader, Jesse McKee,
… The blockchain is the openly readable and unalterable ledger technology, which is most broadly known for supporting such applications as bitcoin and other cryptocurrencies. This report documents the first research phase in a three-phased approach to establishing our digital strategy [emphasis mine], as we [emphasis mine] learn from the blockchain development communities. This initiative’s approach is an institutional one, not one that is interpreting the technology for individuals, artists and designers alone. The central concept of the blockchain is that exchanges of value need not rely on centralized authentication from institutions such as banks, credit cards or the state, and that this exchange of value is better programmed and tracked with metadata to support the virtues, goals and values of a particular network. This concept relies on a shared, decentralized and trustless ledger. “Trustless” in the blockchain community is an evolution of the term trust, shifting its signification as a contract usually held between individuals, managed and upheld by a centralized social institution, and redistributing it amongst the actors in a blockchain network who uphold the platform’s technical operational codes and can access ledgers of exchange. All parties involved in the system are then able to reach a consensus on what the canonical truth is regarding the holding and exchange of value within the system.
… [from page 6 of the report]
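For readers who would like to see the ‘unalterable ledger’ idea in miniature, here’s a toy Python sketch (my own illustration, not anything from the report or a real blockchain implementation): each block commits to the hash of the block before it, so rewriting any earlier entry breaks every later link and the tampering is detectable.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's canonical JSON form.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, data: str) -> None:
    # Each new block records the hash of the previous one.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "data": data})

def verify(chain: list) -> bool:
    # Any edit to an earlier block breaks every later link.
    return all(chain[i]["prev"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

ledger: list = []
append_block(ledger, "Alice pays Bob 5")
append_block(ledger, "Bob pays Carol 2")
assert verify(ledger)

ledger[0]["data"] = "Alice pays Bob 500"  # tamper with history
assert not verify(ledger)
```

Real blockchains add consensus among many parties on top of this structure; the hash chain is only the ‘unalterable’ half of the story.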
McKee manages to keep the report from floating away in a sea of utopian bliss with some cautionary notes. Still, as a writer I’m surprised he didn’t notice that ‘blockchain’, which (in English) is supposed to ‘unlock padlocks’, poses a linguistic conundrum, if nothing else.
This looks like an interesting report, but it helps to be familiar with some ‘critical theory’ jargon. That said, the bulk of the report is relatively accessible reading, although some of the essays (at the end) from the artist-researchers are tough going.
One more thought: the report does present many exciting and transformative possibilities, and I would dearly love to see much of this come to pass. I am more hesitant than McKee and his colleagues, and that hesitation is beautifully described in an essay (The Vampire Problem: Illustrating the Paradox of Transformative Experience) published September 3, 2017 by Maria Popova on Brain Pickings,
To be human is to suffer from a peculiar congenital blindness: On the precipice of any great change, we can see with terrifying clarity the familiar firm footing we stand to lose, but we fill the abyss of the unfamiliar before us with dread at the potential loss rather than jubilation over the potential gain of gladnesses and gratifications we fail to envision because we haven’t yet experienced them. …
Arts and blockchain events in Vancouver
The 221 A launch event for the report kicked off a series of related events; here’s more from a 221 A May 17, 2021 news release (Note: the first and second events have already taken place),
Please join us for a live stream events series bringing together key contributors of the Blockchains & Cultural Padlocks Research Report alongside a host of leading figures across academic, urbanism, media and blockchain development communities.
The Vancouver Biennale folks first sent me information about Voxel Bridge in 2018 but this new material is the most substantive description yet, even without an opening date. From a June 6, 2021 article by Kevin Griffin for the Vancouver Sun (Note: Links have been removed),
The underside of the Cambie Bridge is about to be transformed into the unique digital world of Voxel Bridge. Part of the Vancouver Biennale, Voxel Bridge will exist both as a physical analogue art work and an online digital one.
The public art installation is by Jessica Angel. When it’s fully operational, Voxel Bridge will have several non-fungible tokens (NFTs) that exist in an interactive 3-D world that uses blockchain technology. The intention is to create a fully immersive installation. Voxel Bridge is being described as the largest digital public art installation of its kind.
“To my knowledge, nothing has been done at this scale outdoors that’s fully interactive,” said Sammi Wei, the Vancouver Biennale‘s operations director. “Once the digital world is built in your phone, you’ll be able to walk around objects. When you touch one, it kind of vibrates.”
Just as a pixel refers to a point in a two-dimensional world, voxel refers to a similar unit in a 3-D world.
Voxel Bridge will be about itself: it will tell the story of what it means to use new decentralized technology called blockchain to create Voxel Bridge.
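For the technically inclined, the pixel/voxel analogy can be made concrete with a toy grid in Python (the sizes and values here are hypothetical, chosen only to show the extra index a 3-D world adds):

```python
# A pixel grid is indexed in two dimensions; a voxel grid adds a third axis.
width, height, depth = 4, 3, 2

pixels = [[0 for _ in range(width)] for _ in range(height)]   # 2-D image plane
voxels = [[[0 for _ in range(width)] for _ in range(height)]
          for _ in range(depth)]                              # 3-D volume

voxels[1][2][3] = 255   # light up one voxel at (x=3, y=2, z=1)
assert voxels[1][2][3] == 255
assert sum(v for plane in voxels for row in plane for v in row) == 255
```

The same indexing idea, scaled up, is what lets an augmented-reality app place objects you can walk around rather than flat images you merely look at.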
There are a few more Voxel Bridge details in a June 7, 2021 article by Vincent Plana for the Daily Hive,
… Voxel Bridge draws parallels between blockchain technology and the structural integrity of the underpass itself. The installation will be created by using adhesive vinyl and augmented reality technology.
Griffin’s description in his June 6, 2021 article gives you a sense of what it will be like to become immersed in Voxel Bridge,
Starting Monday [June 14, 2021], a crew will begin installing a vinyl overlay directly on the architecture on the underside of the bridge deck, around the columns, and underfoot on the sidewalk from West 2nd to the parking-lot road. Enclosing a space of about 18,000 square feet, the vinyl layer will be visible without any digital enhancement. It will look like an off-kilter circuit board.
“It’ll be like you’re standing in the middle of a circuit board,” [emphasis mine] she said. “At the same time, the visual perception will be slightly off. It’s like an optical illusion. You feel the ground is not quite where it’s supposed to be.”
Since posting about Science Odyssey, I have received a number of emails announcing events, and not all of them are part of the Odyssey experience.
From the looks of things, May 2021 is going to be a very busy month. Given how early it is in the month I expect to receive another batch of notices and most likely will post another May 2021 events roundup.
At this point, there’s a heavy emphasis on architecture (human and other) and design.
Proximal Spaces on May 3, 2021
This is one of those event-within-an-event notices. There’s a festival, FACTT 20/21 – Improbable Times: Trans-disciplinary & Trans-national Festival of Art & Science, in Portugal, and within the festival there is Proximal Spaces in Toronto, Canada. Here’s more from the ArtScience Salon (ArtSci Salon) May 1, 2021 announcement (received via email),
May 3, 2021 – 3.00 PM (EST) [12 pm PST]
Join us at this poetry reading by six Canadian artists responding to the work of eight bioartists. The event will be streamed on Facebook Live.
Please note that you don’t need to sign up in order to access the streaming as it is public.
‘Proximal Spaces’ is a multi-modal exhibition that explores the environment at multiple scales in concentric circles of proximity to the body. Inspired by Edward Hall’s [Edward Twitchell Hall or E. T. Hall] 1961 notation of intimate (1.5ft), personal (4ft), social (12ft) and public (25ft) spaces in his “Proxemics” diagrams, the installation portion presents similar diagrams of his concentric circles affixed to the wall of the gallery space, as well as developed in Augmented Reality around the venue. Each of these diagrams is a montage of microscopic and sub-microscopic images of the everyday environment as experienced by a collaborative team of international bioartists, and arrayed in a fractal form. In addition, an AR-enabled application explores the invisible environments of computer-generated bioaerosols suspended in the air of virtual space.
This work visualizes the variegated response of the biological environment to unprecedented levels of physical distancing and self-isolation and recent developments in vaccine design that impact our understanding of interpersonal and interspecies ‘messaging’. What continues to thrive in the 6ft ‘dead spaces’ between us? What invisible particles linger on and create a biological archive through our movements through space? The artwork presents an interesting mode of interspecies engagement through hybrid virtual and physical interaction.
In the spring of 2021, six Canadian poets – Kelley Aitken, nancy viva davis halifax, Maureen Hynes, Anita Lahey, Dilys Leman, & Sheila Stewart – came together to pursue a lyric response to Proximal Spaces. They were challenged and inspired by the virtual exhibition with its combination of art, science, and proxemics. The focus of the artworks – what inhabits and thrives in the spaces and environments where we live, work, and breathe—generated six distinctive poems.
Poets: Kelley Aitken, nancy viva davis halifax, Maureen Hynes, Anita Lahey, Dilys Leman, & Sheila Stewart
Bioartists: Roberta Buiani, Nathalie Dubois Calero, Sarah Choukah, Nicole Clouston, Jess Holtz, Mick Lorusso, Maro Pebo, Felipe Shibuya
This project is part of FACTT-Improbable Times (http://factt.arteinstitute.org/), a project spearheaded and promoted by the Arte Institute, with production and conception partners Cultivamos Cultura and Ectopia (Portugal), InArts Lab@Ionian University (Greece), ArtSci Salon@The Fields Institute and Sensorium@York University (Canada), School of Visual Arts (USA), UNAM [National Autonomous University of Mexico], Arte+Ciência and Bioscénica (Mexico), and Central Academy of Fine Arts (China). Together we will work to bring our ideas and actions into being during 2021!
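For fun, Hall’s proxemic distances, as quoted in the exhibition description, lend themselves to a tiny Python sketch (my own toy classifier, not part of the exhibition or Hall’s work):

```python
def hall_zone(distance_ft: float) -> str:
    # Classify a distance using Hall's proxemic boundaries, in feet:
    # intimate (to 1.5 ft), personal (to 4 ft), social (to 12 ft), public (to 25 ft).
    if distance_ft <= 1.5:
        return "intimate"
    if distance_ft <= 4:
        return "personal"
    if distance_ft <= 12:
        return "social"
    if distance_ft <= 25:
        return "public"
    return "beyond public"

assert hall_zone(1.0) == "intimate"
assert hall_zone(6.0) == "social"   # pandemic-era 6 ft sits in Hall's social zone
```

It’s a neat detail that the 6 ft of physical distancing the exhibition responds to falls squarely inside Hall’s ‘social’ band.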
Morphogenesis: Geometry, Physics, and Biology on May 5, 2021
I love this image; he seems so delighted to show off the bug (?),
Here’s more from the Perimeter Institute for Theoretical Physics (PI) April 30, 2021 announcement (received via email),
Earth is home to millions of different species – from simple plants and unicellular organisms to trees and whales and humans. The incredible diversity of life on Earth led Charles Darwin to lament that it is “enough to drive the sanest man mad.”
How can we make sense of this diversity of form, which arises from the process of morphogenesis that links molecular- and cellular-level processes to conspire and lead to the emergence of “endless forms most beautiful,” as Darwin said?
In his May 5 lecture webcast, Harvard professor L. Mahadevan [Lakshminarayanan Mahadevan] will take viewers on a journey into the mathematical, physical, and biological workings of morphogenesis to demonstrate how scientists are beginning to unlock many of the secrets that have vexed researchers since Darwin.
Possible Worlds: “How Will We Live Together?” on May 6, 2021
For those who are interested in human architecture, there’s this from a May 3, 2021 Berggruen Institute announcement (received via email) about a talk by Chilean architect and 2016 Pritzker Prize winner, Alejandro Gastón Aravena Mori (Alejandro Aravena),
Possible Worlds: How Will We Live Together
May 6, 2021
11am — Virtual
Possible Worlds: The UCLA [University of California at Los Angeles] – Berggruen Institute Speaker Series is a new partnership between the UCLA Division of Humanities and the Berggruen Institute.
Please click here to submit a question to Alejandro Aravena
About Alejandro Aravena
Alejandro Aravena is an architect, founder and executive director of the firm Elemental. His works include the “Siamese Towers” at the Catholic University of Chile and the Novartis office campus in Shanghai. In 2016, the New York Times named Aravena one of the world’s “creative geniuses” who had helped define culture. He and Elemental have received numerous honors, including the 2016 Pritzker Architecture Prize, the 2015 London Design Museum’s Design of the Year award and the 2011 Index Award. Aravena currently serves as the president of the Pritzker Prize jury. Aravena’s lecture title, “How Will We Live Together?” echoes the theme of the upcoming international architecture exhibition, Biennale Architettura, in which Elemental will be participating.
Featuring a discussion with moderator Dana Cuff
Dana Cuff is Professor of Architecture and Urban Design at UCLA, where she is also Director of cityLAB, an award-winning think tank that advances goals of spatial justice through experimental urbanism and architecture (www.cityLAB.aud.ucla.edu). Since receiving her Ph.D. in Architecture from Berkeley, Cuff has published and lectured widely about affordable housing, the architectural profession, and Los Angeles’ urban history. She is author of several books, including The Provisional City about postwar housing in L.A., and a co-authored book called Urban Humanities: New Practices for Reimagining the City, documenting her collaborative, crossdisciplinary research and teaching at UCLA funded by the Mellon Foundation. Based on cityLAB’s design research, Cuff co-authored landmark legislation that permits “backyard homes” on some 8.1 million single-family properties, doubling the density of suburbs across California (AB 2299, Bloom-2016). In 2019, cityLAB opened a satellite center in the MacArthur Park/Westlake neighborhood where a deep, multi-year exchange with community organizations is already demonstrating ways that humanistic design of the public realm can create more compassionate cities. Cuff recently received three awards that describe her career: Women in Architecture Activist of the Year (2019, Architectural Record); Distinguished Leadership in Architectural Research (2020, ARCC); and Educator of the Year (2021, American Institute of Architects Los Angeles).
About the Series
Possible Worlds: The UCLA – Berggruen Institute Speaker Series is a new partnership between the UCLA Division of Humanities and the Berggruen Institute. This semiannual series will bring some of today’s most imaginative intellectual leaders and creators to deliver public talks on the future of humanity. Through the lens of their singular achievements and experiences, these trailblazers in creativity, innovation, philosophy and politics will lecture on provocative topics that explore current challenges and transformations in human progress.
UCLA faculty and students have long been at the forefront of interpreting the world’s legacy of language, literature, art and science. UCLA Humanities serves a vital role in readying future leaders to articulate their thoughts with clarity and imagination, to interpret the world of ideas, and to live as informed citizens in an increasingly complex world. We are proud to be partnering in this lecture series with the Berggruen Institute, whose work addresses the “Great Transformations” taking place in technology and culture, politics and economics, global power arrangements, and even how we perceive ourselves as humans. The Institute seeks to connect deep thought in the human sciences — philosophy and culture — to the pursuit of practical improvements in governance.
A selection committee comprising representatives of UCLA and the Berggruen Institute has been formed to make recommendations for lecturers. The committee includes:
• Ursula Heise, Professor and Chair, Department of English; Professor, UCLA Institute of the Environment and Sustainability; Marcia H. Howard Term Chair in Literary Studies
• Pamela Hieronymi, Professor of Philosophy
• Anastasia Loukaitou-Sideris, Professor of Urban Planning; Associate Provost for Academic Planning
• Todd Presner, Associate Dean, Digital Initiatives; Chair of the Digital Humanities Program; Michael and Irene Ross Endowed Chair of Yiddish Studies; Professor of Germanic Languages and Comparative Literature
• Lynn Vavreck, Professor, Department of Political Science; Marvin Hoffenberg Professor of American Politics and Public Policy
• David Schaberg, Senior Dean of the UCLA College; Dean of Humanities; Professor, Asian Languages & Cultures
• Nils Gilman, Vice President of Programs, the Berggruen Institute
Generative Art and Computational Creativity starts May 7, 2021
A Spring 2021 MetaCreation Lab (Simon Fraser University; SFU) newsletter (received via email on April 23, 2021) highlights a number of festival submissions and papers along with some news about a free introductory course. First, the video introduction to the course,
This first course in the two-part program, Generative Art and Computational Creativity [there’s a fee for part two], proposes an introduction to and overview of the history and practice of generative arts and computational creativity, with an emphasis on the formal paradigms and algorithms used for generation. The full program will be taught by Philippe Pasquier, multi-disciplinary researcher and Associate Professor in the School of Interactive Arts and Technology at Simon Fraser University.
On the technical side, we will study core techniques from mathematics, artificial intelligence, and artificial life that are used by artists, designers and musicians across the creative industry. We will start with processes involving chance operations, chaos theory and fractals and move on to see how stochastic processes, and rule-based approaches can be used to explore creative spaces. We will study agents and multi-agent systems and delve into cellular automata, and virtual ecosystems to explore their potential to create novel and valuable artifacts and aesthetic experiences.
The presentation is illustrated by numerous examples from past and current productions across creative practices such as visual art, new media, music, poetry, literature, performing arts, design, architecture, games, robot-art, bio-art and net-art. Students get to practice these algorithms first hand and develop new generative pieces through assignments and projects in MAX. Finally, the course addresses relevant philosophical, and societal debates associated with the automation of creative tasks.
Music for this course was composed with the StyleMachineLite Max for Live engine of Metacreative Inc.
Artistic direction: Philippe Pasquier; Programming: Arne Eigenfeldt; Sound Production: Philippe Bertrand
This course is in adaptive mode and is open for enrollment. Learn more about adaptive courses here.
Session 1: Introduction and Typology of Generative Art (May 7, 2021) To start off this course, we define generative art and computational creativity and discuss how these relate through the study of prominent examples. We establish a typology of generative systems based on levels of autonomy and agency.
Session 2: History Of Generative Art, Chance Operations, and Chaos Theory (May 14, 2021) Generative art is nothing new, and this session goes through the history of the field from pre-history to the popularization of computers. We study chance, noise, fractals, chaos theory, and their applications in visual art and music.
Session 3: Rule-Based Systems, Grammars and Markov Chains (May 21, 2021) This session introduces and illustrates the generative potential of rule-based and expert systems. We study generative grammars through the Chomsky hierarchy, and introduce L-systems, shape grammars, and Markov chains. We discuss how these have been applied in visual art, music, design, architecture, and electronic literature.
Session 4: Cognitive Agents And Multiagent Systems (May 28, 2021) This session introduces the concepts underlying the notion of artificial agents. We study the belief, desire, and intention (BDI) cognitive architecture, and message based agent communication resting on the speech act theory. We discuss musical agents, conversational agents, chat bots and twitter bots and their artistic potential.
Session 5: Reactive Agents And Multiagent Systems (June 4, 2021) In this session, we introduce reactive agents and the subsumption architecture. We study boids, and detail how complex behaviors can emerge from a distributed population of simple artificial agents. We look at a myriad of applications from ant painting to swarm music and we discuss artistic approaches to virtual ecosystems.
Session 6: A-Life And Cellular Automaton (June 11, 2021) In this concluding session, we introduce artificial life (A-life). We study cellular automaton, multi-agent ecosystems for music, visual art, non-photorealistic rendering, and gaming. The session also concludes the class by reflecting on the state of the art in the field and its consequences on creative practices.
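For anyone curious about what the Markov chains of Session 3 look like in practice, here’s a minimal Python sketch (my own toy example, not course material): a first-order chain learns which token tends to follow which, then generates a new sequence by repeatedly sampling a successor.

```python
import random
from collections import defaultdict

def train(tokens):
    # First-order Markov chain: map each token to the tokens that followed it.
    model = defaultdict(list)
    for a, b in zip(tokens, tokens[1:]):
        model[a].append(b)
    return model

def generate(model, start, length, seed=0):
    # Walk the chain, sampling a successor at each step.
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        successors = model.get(out[-1])
        if not successors:
            break
        out.append(random.choice(successors))
    return out

notes = list("CDECDEGFEDC")        # a toy 'melody' as a token sequence
model = train(notes)
melody = generate(model, "C", 8)
assert melody[0] == "C"
assert all(n in "CDEFG" for n in melody)
```

The same idea, with words or notes as tokens and longer contexts, underlies a great deal of early generative music and electronic literature.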
The human being – so fragile, so ethereal, speaking a sweet language. A piece of architecture – so physically imminent, so solid, speaking a language of hardness.
Photo by Oliviero Godi – Frantoio Ipogeo nel Salento
Join photographer & architect Oliviero Godi as he explores the relationship between the body & the material, the transient & the permanent, in search of the correct balance where neither element prevails.
To make your donation, please send an e-transfer to email@example.com. Thank you!
Learn More [about this other upcoming Cultural Events]
Respiration and the Brain on May 25, 2021
Before getting to the April 29, 2021 BrainTalks announcement, here’s a little bit about BrainTalks from their webspace on the University of British Columbia (UBC) website,
BrainTalks is a series of talks inviting you to contemplate emerging research about the brain. Researchers studying the brain, from various disciplines including psychiatry, neuroscience, neuroimaging, and neurology, gather to discuss current leading edge topics on the mind.
As an audience member, you join the discussion at the end of the talk, both in the presence of the entire audience, and with an opportunity afterwards to talk with the speaker more informally in a catered networking session. The talks also serve as a connecting place for those interested in similar topics, potentially launching new endeavours or simply connecting people in discussions on how to approach their research, their knowledge, or their clinical practice.
For the general public, these talks serve as a channel whereby knowledge usually sequestered in inaccessible journals or university classrooms is made available, potentially allowing people to better understand their brains and minds, how they work, and how to optimize brain health.
[UBC School of Medicine Department of Psychiatry]
Onto the April 29, 2021 BrainTalks announcement (received via email),
BrainTalks: Respiration and the Brain
Tuesday, May 25th, 2021 from 6:00 PM – 7:30 PM [PT]
Join us for a series of online talks exploring questions of respiration and the brain. Emerging empirical research will be presented on ventilation-associated brain injury and breathing-based interventions for the treatment of stress and anxiety disorders. The presenters will include Dr. Thiago Bassi, Dr. Lloyd Lalande and Taylor Willi, MSc.
Dr. Thiago Bassi will address the biological connection between the brain and lungs, exploring the potential adverse effects of mechanical ventilation on the brain. Dr. Bassi is a neurosurgeon and neuroscientist, who worked clinically for more than ten years in Brazil. He joined the Lungpacer Medical team and C2B2 lab in 2017, and is currently completing his doctorate in Biomedicine Physiology at Simon Fraser University.
Dr. Lloyd Lalande will describe Guided Respiration Mindfulness Therapy (GRMT), an emerging clinical breathwork intervention noted for its effectiveness in reducing depression, anxiety and stress, and in increasing mindfulness and sense of wellbeing. Dr. Lalande is an Assistant Professor teaching psychology at the Buddhist TzuChi University of Science and Technology, and the developer of GRMT. His current research, based out of the TzuChi Buddhist General Hospital, investigates GRMT as an evidence-based treatment for a variety of outcomes.
Mr. Taylor Willi will present the findings of his dissertation research comparing the effect of performing daily brief relaxation techniques on measures of stress and anxiety. Mr. Willi completed a Master’s degree in Neuroscience at the University of British Columbia, and is currently completing his doctorate in Clinical Psychology at Simon Fraser University.
Each of the speakers will present an overview of their research findings investigating respiration in three unique ways. Following their presentations, the speakers will be available for an audience-driven panel discussion.
Back in January 2019 I got an email from my good friend and colleague Lance Gharavi with the title “Podcast brainstorming.” Two years on, we’ve just launched the Mission: Interplanetary podcast–and it’s amazing!
It’s been a long journey — especially with a global pandemic thrown in along the way — but on March 23, the Mission: Interplanetary podcast with Slate and ASU finally launched.
After two years of planning, many discussions, a bunch of dry runs, and lots (and by that I mean lots) of Zoom meetings, we are live!
As the team behind the podcast talked about and developed the ideas underpinning Mission: Interplanetary, we were interested in exploring new ways of thinking and talking about the future of humanity as a space-faring species as part of Arizona State University’s Interplanetary Initiative. We also wanted to go big with these conversations — really big!
And that is exactly what we’ve done in this partnership with Slate.
The guests we’re hosting, the conversations we have lined up, the issues we grapple with, are all literally out of this world. But don’t just take my word for it — listen to the first episode above with the incredible Lindy Elkins-Tanton talking about NASA’s mission to the asteroid 16 Psyche.
So if you’re looking for a space podcast with a difference, and one that grapples with big questions around our space-based future, please do subscribe on your favorite podcast platform. And join me and the fabulous former NASA astronaut Cady Coleman as we explore the future of humanity in space.
Cady Coleman is a former NASA astronaut and Air Force colonel. She flew aboard the International Space Station on a six-month expedition as the lead science and robotics officer. A frequent speaker on space and STEM topics, Coleman is also a musician who’s played from space with the Chieftains and Ian Anderson of Jethro Tull.
Happy listening. And, I apologize for the awkward links.
Event Rap Kickstarter
Baba Brinkman’s April 27, 2021 email notice has this to say about his latest venture,
Join the Movement, Get Rewards
My new Kickstarter campaign for Event Rap is live as of right now! Anyone who backs the project is helping to launch an exciting new company, actually a new kind of company: the first creator marketplace for rappers. Please take a few minutes to read the campaign description; I put a lot of love into it.
The campaign goal is to raise $26K in 30 days, an average of $2K per artist participating. If we succeed, this platform becomes a new income stream for independent artists during the pandemic and beyond. That’s the vision, and I’m asking for your help to share it and support it.
But instead of why it matters, let’s talk about what you get if you support the campaign!
$10-$50 gets you an advance copy of my new science rap album, Bright Future. I’m extremely proud of this record, which you can preview here, and Bright Future is also a prototype for Event Rap, since all ten of the songs were commissioned by people like you.
$250 – $500 gets you a Custom Rap Video written and produced by one of our artists, and you have twelve artists and infinite topics to choose from. This is an insanely low starting price for an original rap video from a seasoned professional, and it applies only during the Kickstarter. What can the video be about? Anything at all. You choose!
$750 – $1,500 gets you a live rap performance at your virtual event. This is also an amazingly low price, especially since you can choose to have the artist freestyle interactively with your audience, write and perform a custom rap live, or best of all compose a “Rap Up” summary of the event, written during the event, that the artist will perform as the grand finale.
That’s about as fresh and fun as rap gets.
$3,000 – $5,000: the highest tiers bring the highest quality: a brand new custom-written, recorded, mixed and mastered studio track, or a studio track plus full rap music video, with an exclusive beat and lyrics that amplify your message in the impactful, entertaining way that rap does best.
I know this higher price range isn’t for everyone, but check out some of the music videos our artists have made, and maybe you can think of a friend to send this to who has a budget and a worthy cause.
Markus Buehler and his musical spider webs are making news again.
The image (so pretty) you see above comes from a Markus Buehler presentation made at the American Chemical Society (ACS) meeting, ACS Spring 2021, held online April 5-30, 2021. The image was also shown during a press conference, which the ACS has made available for public viewing. More about that later in this posting.
Spiders are master builders, expertly weaving strands of silk into intricate 3D webs that serve as the spider’s home and hunting ground. If humans could enter the spider’s world, they could learn about web construction, arachnid behavior and more. Today, scientists report that they have translated the structure of a web into music, which could have applications ranging from better 3D printers to cross-species communication and otherworldly musical compositions.
The researchers will present their results today at the spring meeting of the American Chemical Society (ACS). ACS Spring 2021 is being held online April 5-30. Live sessions will be hosted April 5-16, and on-demand and networking content will continue through April 30. The meeting features nearly 9,000 presentations on a wide range of science topics.
“The spider lives in an environment of vibrating strings,” says Markus Buehler, Ph.D., the project’s principal investigator, who is presenting the work. “They don’t see very well, so they sense their world through vibrations, which have different frequencies.” Such vibrations occur, for example, when the spider stretches a silk strand during construction, or when the wind or a trapped fly moves the web.
Buehler, who has long been interested in music, wondered if he could extract rhythms and melodies of non-human origin from natural materials, such as spider webs. “Webs could be a new source for musical inspiration that is very different from the usual human experience,” he says. In addition, by experiencing a web through hearing as well as vision, Buehler and colleagues at the Massachusetts Institute of Technology (MIT), together with collaborator Tomás Saraceno at Studio Tomás Saraceno, hoped to gain new insights into the 3D architecture and construction of webs.
With these goals in mind, the researchers scanned a natural spider web with a laser to capture 2D cross-sections and then used computer algorithms to reconstruct the web’s 3D network. The team assigned different frequencies of sound to strands of the web, creating “notes” that they combined in patterns based on the web’s 3D structure to generate melodies. The researchers then created a harp-like instrument and played the spider web music in several live performances around the world.
The team also made a virtual reality setup that allowed people to visually and audibly “enter” the web. “The virtual reality environment is really intriguing because your ears are going to pick up structural features that you might see but not immediately recognize,” Buehler says. “By hearing it and seeing it at the same time, you can really start to understand the environment the spider lives in.”
To gain insights into how spiders build webs, the researchers scanned a web during the construction process, transforming each stage into music with different sounds. “The sounds our harp-like instrument makes change during the process, reflecting the way the spider builds the web,” Buehler says. “So, we can explore the temporal sequence of how the web is being constructed in audible form.” This step-by-step knowledge of how a spider builds a web could help in devising “spider-mimicking” 3D printers that build complex microelectronics. “The spider’s way of ‘printing’ the web is remarkable because no support material is used, as is often needed in current 3D printing methods,” he says.
In other experiments, the researchers explored how the sound of a web changes as it’s exposed to different mechanical forces, such as stretching. “In the virtual reality environment, we can begin to pull the web apart, and when we do that, the tension of the strings and the sound they produce change. At some point, the strands break, and they make a snapping sound,” Buehler says.
The team is also interested in learning how to communicate with spiders in their own language. They recorded web vibrations produced when spiders performed different activities, such as building a web, communicating with other spiders or sending courtship signals. Although the frequencies sounded similar to the human ear, a machine learning algorithm correctly classified the sounds into the different activities. “Now we’re trying to generate synthetic signals to basically speak the language of the spider,” Buehler says. “If we expose them to certain patterns of rhythms or vibrations, can we affect what they do, and can we begin to communicate with them? Those are really exciting ideas.”
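For readers who'd like a concrete feel for the sonification step described in the release, here's a minimal sketch of the basic idea: treat each silk strand as an ideal vibrating string whose pitch rises as the strand gets shorter. This is my own illustration, not the MIT team's code, and the constants (the scaling factor `k` and the audible band) are purely hypothetical.

```python
# Hypothetical sketch: map spider-web strand lengths to audible frequencies.
# An ideal string's fundamental frequency is inversely proportional to its
# length, so shorter strands get higher pitches. Constants are illustrative.

def strand_frequency(length_mm, k=2200.0, f_min=80.0, f_max=2000.0):
    """Return a frequency (Hz) for a strand, clamped to an audible band."""
    f = k / length_mm
    return max(f_min, min(f_max, f))

# A few strand lengths (mm) as they might come from a 3D web scan
strand_lengths = [5.5, 11.0, 22.0, 44.0]
notes = [round(strand_frequency(L), 1) for L in strand_lengths]
print(notes)
```

Combining such "notes" in patterns that follow the web's 3D structure is then a compositional choice, which is where the team's harp-like instrument comes in.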
You can go here for the April 12, 2021 ‘Making music from spider webs’ ACS press conference; it runs about 30 mins., and you will hear some ‘spider music’ played.
Getting back to the image and spider webs in general, we are most familiar with orb webs (in the part of Canada where I’m from, if nowhere else), which look like spirals and are 2D. There are several other types of webs, some of which are 3D: tangle webs (also known as cobwebs), funnel webs, and more. See the March 18, 2020 article “9 Types of Spider Webs: Identification + Pictures & Spiders” by Zach David on Beyond the Treat for more about spiders and their webs. If you have the time, I recommend reading it.
I’ve been following Buehler’s spider web/music work for close to ten years now; the latest previous posting is an October 23, 2019 posting where you’ll find a link to an application that makes music from proteins (spider webs are made up of proteins; scroll down about 30% of the way; it’s in the 2nd to last line of the quoted text about the embedded video).
Here is a video (2 mins. 17 secs.) of a spider web music performance that Buehler placed on YouTube,
Feb 3, 2021
Markus J. Buehler
Spider’s Canvas/Arachnodrone show excerpt at Palais de Tokyo, Paris, in November 2018. Video by MIT CAST. More videos can be found on www.arachnodrone.com. The performance was commissioned by Studio Tomás Saraceno (STS), in the context of Saraceno’s carte blanche exhibition, ON AIR. Spider’s Canvas/Arachnodrone was performed by Isabelle Su and Ian Hattwick on the spider web instrument, Evan Ziporyn on the EWI (Electronic Wind Instrument), and Christine Southworth on the guitar and EBow (Electronic Bow)
Spider’s Canvas / Arachnodrone is inspired by the multifaceted work of artist Tomas Saraceno, specifically his work using multiple species of spiders to make sculptural webs. Different species make very different types of webs, ranging not just in size but in design and functionality. Tomas’ own web sculptures are in essence collaborations with the spiders themselves, placing them sequentially over time in the same space, so that the complex, 3-dimensional sculptural web that results is in fact built by several spiders, working together.
Meanwhile, back among the humans at MIT, Isabelle Su, a Course 1 doctoral student in civil engineering, has been focusing on analyzing the structure of single-species spider webs, specifically the ‘tent webs’ of Cyrtophora citricola, a tropical spider of particular interest to her, Tomas, and Professor Markus Buehler. Tomas gave the department a Cyrtophora spider, the department gave the spider a space (a small terrarium without glass), and she in turn built a beautiful and complex web. Isabelle then scanned it in 3D and made a virtual model. At the suggestion of Evan Ziporyn and Eran Egozy, she then ported the model into Unity, a VR/game making program, where a ‘player’ can move through it in numerous ways. Evan & Christine Southworth then worked with her on ‘sonifying’ the web and turning it into an interactive virtual instrument, effectively turning the web into a 1700-string resonating instrument, based on the proportional length of each individual piece of silk and their proximity to one another. As we move through the web (currently just with a computer trackpad, but eventually in a VR environment), we create a ‘sonic biome’: complex ‘just intonation’ chords that come in and out of earshot according to which of her strings we are closest to. That part was all done in MAX/MSP, a very flexible high level audio programming environment, which was connected with the virtual environment in Unity. Our new colleague Ian Hattwick joined the team focusing on sound design and spatialization, building an interface that allowed him to sonically ‘sculpt’ the sculpture in real time, changing amplitude, resonance, and other factors.
During this performance at Palais de Tokyo, Isabelle toured the web – that’s what the viewer sees – while Ian adjusted sounds, so in essence they were together “playing the web.” Isabelle provides a space (the virtual web) and a specific location within it (by driving through), which is what the viewer sees, from multiple angles, on the 3 scrims. The location has certain acoustic potentialities, and Ian occupies them sonically, just as a real human performer does in a real acoustic space. A rough analogy might be something like wandering through a gothic cathedral or a resonant cave, using your voice or an instrument at different volumes and on different pitches to find sonorous resonances, echoes, etc. Meanwhile, Evan and Christine are improvising with the web instrument, building on Ian’s sound, with Evan on EWI (Electronic Wind Instrument) and Christine on electric guitar with EBow.
For the visuals, Southworth wanted to create the illusion that the performers were actually inside the web. We built a structure covered in sharkstooth scrim, with 3 projectors projecting in and through from 3 sides. Southworth created images using her photographs of local Lexington, MA spider webs mixed with slides of the scan of the web at MIT, and then mixed those images with the projection of the game, creating an interactive replica of Saraceno’s multi-species webs.
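The ‘sonic biome’ idea described above, where chords come in and out of earshot depending on which strings the player is closest to, can be sketched in a few lines. This is an illustrative stand-in for the actual Unity/MAX/MSP implementation; the linear amplitude falloff and the earshot radius are my own assumptions.

```python
# Illustrative sketch (not the actual Unity/MAX/MSP code): as a listener
# moves through a virtual web, only nearby strings sound, with amplitude
# falling off linearly with distance -- the "sonic biome" idea.
import math

def audible_strings(listener, strings, radius=2.0):
    """Return (index, amplitude) pairs for strings within earshot.

    listener: (x, y, z) position; strings: list of string midpoints.
    Amplitude fades from 1.0 at the listener to 0.0 at the radius.
    """
    heard = []
    for i, p in enumerate(strings):
        d = math.dist(listener, p)
        if d < radius:
            heard.append((i, round(1.0 - d / radius, 2)))
    return heard

# Three string midpoints: two near the listener, one far away
web = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (5.0, 5.0, 5.0)]
print(audible_strings((0.0, 0.0, 0.0), web))
```

In the real instrument each audible string would also carry its own pitch, tuned to the silk strand's length, so moving through the web changes both which strings sound and how loudly.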
If you listen to the press conference, you will hear Buehler talk about practical applications for this work in materials science.
Set against the backdrop of an ambiguous dystopia and eternal rave, LINK SICK is a tale about the threads that bind us together.
LINK SICK is DEBBY FRIDAY’S graduate thesis project – an audio-play written, directed and scored by the artist herself. The project is a science-fiction exploration of the connective tissue of human experience as well as an experiment in sound art; blurring the lines between theatre, radio, music, fiction, essay, and internet art. Over 42 minutes, listeners are invited to gather round, close their eyes, and open their ears; submerging straight into a strange future peppered with blink-streams, automated protests, disembodied DJs, dancefloor orgies, and only the trendiest S/S 221 G-E two-piece club skins.
DEBBY FRIDAY as Izzi/Narrator, Chino Amobi as Philo, Sam Rolfes as Dj GODLESS, Hanna Sam as ABC Inc. Announcer, Storm Greenwood as Diana Deviance, Alex Zhang Hungtai as Weaver, Allie Stephen as Numee, Soukayna as Katz, and AI Voice Generated Protesters via Replica Studios
Presented in partial fulfillment of the requirements of the Degree of Master of Fine Arts in the School for the Contemporary Arts at Simon Fraser University.
No time is listed but I’m assuming FRIDAY is operating on PDT, so, you might want to take that into account when checking.
FRIDAY seems to favour full caps for her name and everywhere on her eponymous website (from her ABOUT page),
DEBBY FRIDAY is an experimentalist.
Born in Nigeria, raised in Montreal, and now based in Vancouver, DEBBY FRIDAY’s work spans the spectrum of the audio-visual, resisting categorizations of genre and artistic discipline. She is at once sound theorist and musician, performer and poet, filmmaker and PUNK GOD. …
Should you wish to support the artist financially, she offers merchandise.
Getting back to the play, I look forward to the auditory experience. Given how much we are expected to watch and the dominance of images, creating a piece that requires listening is an interesting choice.
What a great bit of work, publicity-wise, from either or both the Aga Khan Museum in Toronto (Canada) and artist/scientist Radha Chaddah. IAM (ee-yam): Dance of the Molecules, a virtual performance installation featuring COVID-19 and molecular dance, has been profiled in the Toronto Star, on the Canadian Broadcasting Corporation (CBC) website, and in the Globe and Mail within the last couple of weeks. From a Canadian perspective, that’s major coverage and much of it national.
Bruce DeMara’s March 11, 2021 article for the Toronto Star introduces artist/scientist Radha Chaddah, her COVID-19 dance of molecules, and her team (Note: A link has been removed),
Visual artist Radha Chaddah has always had an abiding interest in science. She has a degree in biology and has done graduate studies in stem cell research.
[…] four-act dance performance; the first part “IAM: Dance of the Molecules” premiered as a digital exhibition on the Aga Khan Museum’s website March 5 and runs for eight weeks. Subsequent acts — human, planetary and universal, all using the COVID virus as an entry point — will be unveiled over the coming months until the final instalment in December 2022.
Among Chaddah’s team were Allie Blumas and the Open Fortress dance collective — who perform as microscopic components of the virus’s proliferation, including “spike” proteins, A2 receptors and ribosomes — costumiers Call and Response (who designed for the late Prince), director of photography Henry Sansom and composer Dan Bédard (who wrote the film’s music after observing the dance rehearsals remotely).
A March 5, 2021 article by Leah Collins for CBC online offers more details (Note: Links have been removed),
This month, the Aga Khan Museum in Toronto is debuting new work from local artist Radha Chaddah. Called IAM, this digital exhibition is actually the first act in a series of four short films that she aims to produce between now and the end of 2022. It’s a “COVID story,” says Chaddah, but one that offers a perspective beyond the anniversary of its impact on life and culture and toilet-paper consumption. “I wanted to present a piece that makes people think about the coronavirus in a different way,” she explains, “one that pulls them out of the realm of fear and puts our imaginations into the realm of curiosity.”
It’s scientific curiosity that Chaddah’s talking about, and her own extra-curricular inquiries first sparked the series. For several years, Chaddah has produced work that splices art and science, a practice she began while doing grad studies in molecular neurobiology. “If I had to describe it simply, I would say that I make art about invisible realities, often using the tools of research science,” she says, and in January of last year, she was gripped by news of the novel coronavirus’ discovery.
“I started researching: reading research papers, looking into how it was that [the virus] actually affected the human body,” she says. “How does it get into the cells? What’s its replicative life cycle?” Chaddah wanted a closer look at the structure of the various molecules associated with the progression of COVID-19 in the body, and there is, it turns out, a trove of free material online. Using animated 3-D renderings (sourced from this digital database), Chaddah began reviewing the files: blowing them up with a video projector, and using the trees in her own backyard as “a kind of green, living stage.”
Part one of IAM (the film appearing on the Aga Khan’s website) is called “Dance of the Molecules.” Recorded on Chaddah’s property in September, it features two dancers: Allie Blumas (who choreographed the piece) and Lee Gelbloom. Their bodies, along with the leafy setting, serve as a screen for Chaddah’s projections: a swirl of firecracker colour and pattern, built from found digital models. Quite literally, the viewer is looking at an illustration of how the coronavirus infects the human body and then replicates. (The very first images, for example, are close-ups of the virus’ spiky surface, she explains.) And in tandem with this molecular drama, the dancers interpret the process.
Radha Chaddah is a Toronto based visual artist and scientist. Born in Owen Sound, Ontario she studied Film and Art History at Queen’s University (BAH), and Human Biology at the University of Toronto, where she received a Master of Science in Cell and Molecular Neurobiology.
Chaddah makes art about invisible realities like the cellular world, electromagnetism and wave form energy, using light as her primary medium. Her work examines the interconnected themes of knowledge, illusion, desire and the unseen world. In her studio she designs projected light installations for public exhibition. In the laboratory, she uses the tools of research science to grow and photograph cells using embedded fluorescent light-emitting molecules. Her cell photographs and light installations have been exhibited across Canada and her photographs have appeared in numerous publications. She has lectured on basic cell and stem cell biology for artists, art students and the public at OCADU [Ontario College of Art & Design University], the Ontario Science Centre, the University of Toronto and the Textile Museum of Canada.
Topic: An evening salon and reading of specially commissioned pieces of fiction on AI futures
Description: Artificial intelligence and data-driven technologies permeate all aspects of our lives. Their proliferation increasingly leads to encounters with ‘mutant algorithms’, ‘biased machine learning’, and ‘racist AIs’ that sometimes make familiar forms of near-future fiction pale in comparison.
In these examples, AI and machine learning tools inscribe a certain future based on predictions from past observations and they foreclose a multitude of other possible futures.
Faced with this potential to limit and constrain what might be, can fiction and narrative offer alternatives for how AI could and should be?
This evening salon will present near-future fiction pieces commissioned by the Ada Lovelace Institute’s JUST AI project to inspire and expand our thinking about our possible relationship to AI and data.
Join the event to listen to the first reading of two commissioned pieces and to discuss with the authors and invited experts.
Live (real-time) captioning will be provided for this event, if you have questions or request for access, please contact: firstname.lastname@example.org.
Chair: Alison Powell, Associate Professor, London School of Economics
Speakers: Adam Marek, writer of futuristic and fantastical short stories; Squirrel Nation, reimagining and designing how to live in a warming world; Tania Hershman, poet, writer, teacher and editor; Yasemin J. Erden, Assistant Professor in Philosophy, University of Twente
Time: Mar 3, 2021, 6:30 PM – 8:00 PM [GMT]
This artwork accompanying the Almost future AI announcement reminds me of a circuit board. In any event, I found this image and a bit more information about the Just AI programme/network and about their event on this Almost future AI webpage,
The JUST AI (Joining Up Society and Technology in AI) programme is an independent network of researchers and practitioners, led by Dr Alison Powell from LSE [London School of Economics], supported by the UK’s Arts and Humanities Research Council (AHRC) and the Ada Lovelace Institute. The humanities-led network is committed to understanding the social and ethical value of data-driven technologies, artificial intelligence, and automated systems. The network will build on research in AI ethics, orienting it around practical issues of social justice, distribution, governance and design, and seek to inform the development of policy and practice.
We are using Zoom for virtual events open to more than 40 attendees. Although there are issues with Zoom’s privacy controls, when reviewing available solutions we found that there isn’t a perfect product and we have chosen Zoom for its usability and accessibility. Find out more here.
I’m glad to see they’ve taken privacy concerns seriously enough to explain why they’re using Zoom. I wish more organizations took the time to inform participants in virtual and online events which technology is being used and to include a reference to or comment on privacy issues.
It’s a relief to see this level of congruence between Just AI’s and the Ada Lovelace Institute’s stated principles and their preliminary actions.
Before moving onto the next item and due to a very confused approach to naming (Ada Lovelace Day being both a ‘day’ and an organization), it seems like a good idea to mention that the Ada Lovelace Institute is not associated with the Ada Lovelace Day organization as per the Ada Lovelace Institute’s About webpage,
The Ada Lovelace Institute was established by the Nuffield Foundation in early 2018, in collaboration with the Alan Turing Institute, the Royal Society, the British Academy, the Royal Statistical Society, the Wellcome Trust, Luminate, techUK and the Nuffield Council on Bioethics.
One more March 2021 event
Staying on the Ada Lovelace theme, there’s an event on March 8, 2021 International Women’s Day being hosted by the organization called Ada Lovelace Day (there’s more confusion to come). Here’s more about the upcoming March 2021 event from the 2021 International Women’s Day event webpage,
Monday 8 March 2021 [1900 GMT]
We are celebrating International Women’s Day with an hour long live-streamed panel discussion titled Comedy and Communication, looking at how we can all use comedy techniques in our STEM communications and teaching.
The Ada Lovelace Day organization is at findingada.com, which is also the name for one of the organization’s initiatives, the ‘Finding Ada Network’. I find the naming conventions confusing, especially since there is an Ada Lovelace Day celebrated internationally and hosted by this organization (whatever it’s called) each year. In 2021, Ada Lovelace Day will be celebrated on Tuesday, October 12.
Dancing robots usually perform to pop music but every once in a while, there’s a move toward classical music and ballet, e.g., my June 8, 2011 posting was titled, Robot swan dances to Tchaikovsky’s Swan Lake. Unlike the dancing robot in the picture above, that robot swan danced alone. (You can still see the robot’s Swan Lake performance in the video embedded in the 2011 posting.)
I don’t usually associate dance magazines with robots but Chava Pearl Lansky’s Nov. 18, 2020 article about dancer/physicist Merritt Moore and her work with Baryshnibot is found in ballet magazine, Pointe (Note: Links have been removed),
When the world went into lockdown last March, most dancers despaired. But not Merritt Moore. The Los Angeles native, who lives in London and has danced with Norwegian National Ballet, English National Ballet and Boston Ballet, holds a PhD in atomic and laser physics from the University of Oxford. A few weeks into the coronavirus pandemic, she came up with a solution for having to train and work alone: robots.
Moore had just come out of a six-week residency at Harvard ArtLab focused on the intersection between dance and robotics. “I knew I needed something to look forward to, and thought how bizarre I’d just been working with robots,” she says. “Who knew they’d be my only potential dance partners for a really long time?” She reached out to Universal Robotics and asked them to collaborate, and they agreed to send her a robot to experiment with.
Baryshnibot is an industrial robot normally used for automation and manufacturing. “It does not look impressive at all,” says Moore. “But there’s so much potential for different movement.” Creating dances for a robot, she says, is like an elaborate puzzle: “I have to figure out how to make this six-jointed rod emulate the dance moves of a head, two arms, a body and two legs.”
Moore started with the basics. She’d learn a simple TikTok dance, and then map the movements into a computer pad attached to the robot. “The 15-second routine will take me five hours-plus to program,” she says. Despite the arduous process, she’s built up to more advanced choreography, and is trying on different dance styles, from ballet to hip hop to salsa. For her newest pas de deux, titled Merritt + Robot, Moore worked with director Conor Gorman and cinematographer Howard Mills to beautifully capture her work with Baryshnibot on film. …
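Moore doesn't describe her programming workflow in detail, but mapping a routine pose by pose into a computer pad is essentially keyframe animation: record the joint angles for each pose, then interpolate between them over time. A rough sketch of that idea for a six-jointed arm follows; all poses, timings, and names here are hypothetical illustrations, not taken from Moore's work.

```python
# Hypothetical keyframe interpolation for a six-joint arm choreography.
# Each keyframe is (time_sec, [six joint angles in degrees]); poses between
# keyframes are found by linear interpolation, the simplest approach.

def pose_at(t, keyframes):
    """Linearly interpolate the six joint angles at time t."""
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    for (t0, a0), (t1, a1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            w = (t - t0) / (t1 - t0)
            return [x + w * (y - x) for x, y in zip(a0, a1)]
    return keyframes[-1][1]

dance = [
    (0.0, [0, 0, 0, 0, 0, 0]),       # rest pose
    (1.0, [90, 30, 0, 0, 45, 0]),    # arm sweep
    (2.0, [0, 60, 20, 10, 0, 90]),   # turn
]
print(pose_at(0.5, dance))
```

Real industrial arms add constraints this sketch ignores, such as joint velocity and acceleration limits, which is part of why a 15-second routine can take hours to program safely.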