The rest of the press release (Note: A link has been removed),
The authors of the study explain that meditation allows researchers to train their body for data collection – improving their capacity to capture unexpected insights and deal with uncertainty and transformation as they incorporate novel interpretations into their research.
These skills enable researchers to understand novel cultural practices. Poetic meditations may prepare researchers to see the world with different eyes.
Publishing their findings in the Journal of Marketing Management, researchers at the University of Birmingham and Kedge Business School, Bordeaux, France, outline a radical new process to help researchers to enhance their work.
Pilar Rojas-Gaviria, from the University of Birmingham, commented: “Scientific wonder prompts us to ask questions about the purpose of consumption, the way markets are created and extended, and how life and human experience are attached to both.
“Academics have always developed theses to resolve questions and explain events, but mindfulness practice can make our bodies an instrument of research – gathering data from different environmental sources. Poetry offers qualitative researchers a useful tool to refigure their surroundings and shed new light on the data they work with.”
Poetic meditation allows researchers to reveal unexpected or previously unnoticed features of market and consumption environments – rather than simply reproducing existing categories and theories.
By recording and presenting poetic meditations through audio media, the researchers demonstrate poetry’s potential to stimulate new ideas that can influence how academics approach data collection or analysis.
The researchers demonstrate the technique with two poetic meditations focusing on the colours green and red. These audio presentations settle the listener into a relaxed state, before taking the listener on an intellectual journey into poetry and philosophy, and ending with a period of meditation.
Robin Canniford, from Kedge Business School, commented: “We believe this technique can inspire researchers to include sound recordings and data presentations in their publications – creating a different approach to communicating and understanding their findings.
“Creating a poetic meditation might be a first step in a researcher’s journey that uncovers new sensations, interpretations, and questions – reaching towards unconventional and impactful responses in our research, even when answers seem to be far in the future.”
Poetry has already proven to be an effective research method for challenging conventional thinking in marketing areas such as branding. It has helped marketers understand markets and consumers – engaging in conversations that capture how people consume products and services.
Here’s a link to and a citation for the paper (where you’ll find an audio file of this paper and supplementary audio files of poetry),
This paper is open access and because I quite like it, here’s the abstract,
How does one use one’s body in qualitative research? Poetic meditation is a technique that offers to enhance researchers’ sensory capacities and embodied practices in research. By using mindfulness practice as a means to relax and focus on sensations, scholars can prepare to embody data collection so as to encounter multiple environmental features including, but not limited to, the visual and textual. So too is poetic meditation intended as a tool to help researchers to encounter mysterious moments and to refigure their surroundings in ways that explicitly reframe sensemaking and representation. This companion essay to recorded poetic meditations encourages researchers to embrace mystery as a pathway to knowledge-making, and to build confidence to creatively step outside of common linguistic and theoretical modes.
Maxwell is James Clerk Maxwell, a Scottish mathematician and scientist, considered a genius for his work on electromagnetism. His ‘demon’ is a thought experiment that has influenced research for over 150 years as this November 29, 2022 news item on ScienceDaily makes clear,
A team of quantum engineers at UNSW [University of New South Wales] Sydney has developed a method to reset a quantum computer — that is, to prepare a quantum bit in the ‘0’ state — with very high confidence, as needed for reliable quantum computations. The method is surprisingly simple: it is related to the old concept of ‘Maxwell’s demon’, an omniscient being that can separate a gas into hot and cold by watching the speed of the individual molecules.
“Here we used a much more modern ‘demon’ – a fast digital voltmeter – to watch the temperature of an electron drawn at random from a warm pool of electrons. In doing so, we made it much colder than the pool it came from, and this corresponds to a high certainty of it being in the ‘0’ computational state,” says Professor Andrea Morello of UNSW, who led the team.
“Quantum computers are only useful if they can reach the final result with very low probability of errors. And one can have near-perfect quantum operations, but if the calculation started from the wrong code, the final result will be wrong too. Our digital ‘Maxwell’s demon’ gives us a 20x improvement in how accurately we can set the start of the computation.”
The research was published in Physical Review X, a journal published by the American Physical Society.
Watching an electron to make it colder
Prof. Morello’s team has pioneered the use of electron spins in silicon to encode and manipulate quantum information, and demonstrated record-high fidelity – that is, very low probability of errors – in performing quantum operations. The last remaining hurdle for efficient quantum computations with electrons was the fidelity of preparing the electron in a known state as the starting point of the calculation.
“The normal way to prepare the quantum state of an electron is to go to extremely low temperatures, close to absolute zero, and hope that the electrons all relax to the low-energy ‘0’ state,” explains Dr Mark Johnson, the lead experimental author on the paper. “Unfortunately, even using the most powerful refrigerators, we still had a 20 per cent chance of preparing the electron in the ‘1’ state by mistake. That was not acceptable; we had to do better than that.”
Dr Johnson, a UNSW graduate in Electrical Engineering, decided to use a very fast digital measurement instrument to ‘watch’ the state of the electron, and to use a real-time decision-making processor within the instrument to decide whether to keep that electron and use it for further computations. The effect of this process was to reduce the probability of error from 20 per cent to 1 per cent.
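The trick described above is, in statistical terms, post-selection: measure first, and keep the electron only when the readout says ‘0’. The toy Monte Carlo below is my own illustrative sketch, not the paper’s model; the thermal error (20 per cent) matches the quoted figure, while the readout-flip rate of 4 per cent is an assumed number chosen so the residual error lands near the quoted 1 per cent:

```python
import random

def post_selected_error(p_thermal=0.20, p_flip=0.04, shots=200_000, seed=1):
    """Toy model: keep an electron only when a slightly noisy readout says '0'.

    p_thermal -- chance the electron thermalizes into '1' (the raw 20% error)
    p_flip    -- chance the readout reports the wrong state (assumed value)
    Returns the fraction of *kept* electrons that were actually in '1'.
    """
    rng = random.Random(seed)
    kept = bad = 0
    for _ in range(shots):
        state = 1 if rng.random() < p_thermal else 0            # thermal preparation
        readout = state if rng.random() >= p_flip else 1 - state  # imperfect voltmeter
        if readout == 0:      # the digital 'demon' keeps this electron
            kept += 1
            bad += state      # it was secretly '1': a residual preparation error
    return bad / kept
```

With these assumed numbers the simulation agrees with Bayes’ rule: P(actually ‘1’ | read ‘0’) = 0.2 × 0.04 / (0.2 × 0.04 + 0.8 × 0.96) ≈ 0.0103, i.e. roughly the twenty-fold reduction from 20 per cent to about 1 per cent.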
A new spin on an old idea
“When we started writing up our results and thought about how best to explain them, we realized that what we had done was a modern twist on the old idea of the ‘Maxwell’s demon’,” Prof. Morello says.
The concept of ‘Maxwell’s demon’ dates back to 1867, when James Clerk Maxwell imagined a creature with the capacity to know the velocity of each individual molecule in a gas. He would take a box full of gas, with a dividing wall in the middle, and a door that can be opened and closed quickly. With his knowledge of each molecule’s speed, the demon can open the door to let the slow (cold) molecules pile up on one side, and the fast (hot) ones on the other.
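Maxwell’s sorting protocol is easy to caricature in code. The sketch below (my own illustration, not from the UNSW paper) draws random molecular speeds, has the demon open the door only for slow molecules going one way and fast ones the other, and compares the mean squared speed of the two sides as a stand-in for temperature:

```python
import random
import statistics

def demon_sort(n=10_000, seed=0):
    """Toy Maxwell's demon: split molecules around the median speed.

    Slow (cold) molecules end up on one side of the box, fast (hot) ones
    on the other, so the two halves acquire different effective
    temperatures (proportional to the mean squared speed).
    """
    rng = random.Random(seed)
    speeds = [abs(rng.gauss(0, 1)) for _ in range(n)]  # crude 1-D speed distribution
    cutoff = statistics.median(speeds)
    cold = [v for v in speeds if v <= cutoff]  # demon admits slow molecules here
    hot = [v for v in speeds if v > cutoff]    # ...and fast ones over there
    temperature = lambda side: statistics.fmean(v * v for v in side)
    return temperature(cold), temperature(hot)
```

Run it and the ‘hot’ side always ends up markedly hotter than the ‘cold’ side, which is exactly the apparent violation of the second law that made the thought experiment so provocative (the resolution being that the demon’s information processing itself carries a thermodynamic cost).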
“The demon was a thought experiment, to debate the possibility of violating the second law of thermodynamics, but of course no such demon ever existed,” Prof. Morello says.
“Now, using fast digital electronics, we have in some sense created one. We tasked him with the job of watching just one electron, and making sure it’s as cold as it can be. Here, ‘cold’ translates directly into it being in the ‘0’ state of the quantum computer we want to build and operate.”
The implications of this result are very important for the viability of quantum computers. Such a machine can be built with the ability to tolerate some errors, but only if they are sufficiently rare. The typical threshold for error tolerance is around 1 per cent. This applies to all errors, including preparation, operation, and readout of the final result.
This electronic version of a ‘Maxwell’s demon’ allowed the UNSW team to reduce the preparation errors twenty-fold, from 20 per cent to 1 per cent.
“Just by using a modern electronic instrument, with no additional complexity in the quantum hardware layer, we’ve been able to prepare our electron quantum bits within good enough accuracy to permit a reliable subsequent computation,” Dr Johnson says.
“This is an important result for the future of quantum computing. And it’s quite peculiar that it also represents the embodiment of an idea from 150 years ago!”
Hats off to whoever prepared the opening sequences for this informative and entertaining video from UNSW,
For years, James Clerk Maxwell’s role as a poet has fascinated me. Yes, a physicist who wrote poetry about physics and other matters, as noted in my April 24, 2019 posting (The poetry of physics from Canada’s Perimeter Institute), where you’ll find poems by various physicists, including the aforementioned Maxwell, as well as a link to the original Perimeter Institute for Theoretical Physics (PI) posting featuring the excerpted poems and even more physics poems.
There is a lot of anxiety about artificial intelligence in the arts, which can only be exacerbated by a question such as this, from a December 2, 2022 news item on ScienceDaily,
Can artificial intelligence write better poetry than humans?
The gap between human creativity and artificial intelligence seems to be narrowing. Previous studies have compared AI-generated and human-written poems and asked whether people can distinguish between them.
Now, a study led by Yoshiyuki Ueda at Kyoto University Institute for the Future of Human and Society [Japan], has shown AI’s potential in creating literary art such as haiku — the shortest poetic form in the world — rivaling that of humans without human help.
Ueda’s team compared AI-generated haiku without human intervention, also known as human out of the loop, or HOTL, with a contrasting method known as human in the loop, or HITL.
The project involved 385 participants, each of whom evaluated 40 haiku poems — 20 each of HITL and HOTL — plus 40 composed entirely by professional haiku writers.
“It was interesting that the evaluators found it challenging to distinguish between the haiku penned by humans and those generated by AI,” remarks Ueda.
The results showed that HITL haiku received the most praise for their poetic qualities, whereas HOTL and human-only verses scored similarly.
“In addition, a phenomenon called algorithm aversion was observed among our evaluators. They were supposed to be unbiased but instead became influenced by a kind of reverse psychology,” explains the author.
“In other words, they tended to unconsciously give lower scores to those they felt were AI-generated.”
Ueda points out that his research has put a spotlight on algorithm aversion as a new approach to AI art.
“Our results suggest that the ability of AI in the field of haiku creation has taken a leap forward, entering the realm of collaborating with humans to produce more creative works. Realizing the existence of algorithmic aversion will lead people to re-evaluate their appreciation of AI art.”
For those unfamiliar with Matsuo Bashō, he’s considered Japan’s most famous poet from the Edo period and Japan’s greatest master of haiku according to his Wikipedia entry. You can also find out more about Basho at the Poetry Foundation.
Is there some sort of misunderstanding between Toronto’s ArtSci Salon and the Onsite Gallery at OCAD (Ontario College of Art and Design) University?
Previously, I featured a series organized around the ‘more-than-human’ exhibition at the Onsite Gallery, which included events being held by the ArtSci Salon, in my February 1, 2023 posting. This morning (April 3, 2023), I received, via email, an April 2, 2023 ArtSci Salon announcement about some upcoming events for the ‘Re-situating: more-than-human’ event series (sigh). Note 1: They’ve added a poet to the Poetry Night, added more detail to the May 2023 excursion, and added a call for projects; Note 2: The Onsite Gallery continues to call it the ‘more-than-human’ exhibition,
Poetry Night Wednesday, April 5  – 7:30-9:30
An immersive poetry performance that involves three poets reading poems and a site-specific live projection mapping response created by artist Ilze Briede (Kavi). Come to this in-depth and unique event of deep listening and embodied experience!
Dr. Madhur Anand, School of Environmental Sciences, University of Guelph
Dr. Karen Houle, College of Arts, University of Guelph
Liz Howard, Department of English, Concordia University
Projection mapping by Ilze Briede (Kavi) PhD student, York University
Don’t forget the next event of the Re-Situating series:
The rare Charitable Research Reserve is an urban land trust and environmental institute in Waterloo Region/Wellington, protecting over 1,200 acres of highly sensitive lands.
This event follows and concludes several interdisciplinary dialogues on ethics of care, ecology, symbiosis, and human-plant relations. We weave together embodied discovery and sensory experience; listening and thinking about the ethical and material implications of recognizing non-human individuals as valuable; as well as different disciplines, epistemologies, and positionalities. Our goal is to acquire a better awareness of the ecological community to which we belong, with the intention of rethinking and resituating the human within a diverse, complex, and multifaceted ecosystem of other-than-human lifeforms.
The day will begin with a panel between artists and scientists investigating the social, economic, and natural complexities affecting both human and plant life. The afternoon events will include a 30-minute walk through the wetlands at rare (wetlands as carbon sinks), a Master Class led by Dr. Alice Jarry (on the design of plant-based air filtration), and a rare-led walk following the ecological lichen monitoring.
IMPORTANT! Bus will leave for rare at 9 am and will return to Toronto at 5:30 pm.
Please, note: Tickets are limited. Should you not be able to attend, please let us know so we can free up the space for someone else.
We require a nominal registration fee of $5 which will be refunded on the day of the event.
Sunday, May 7  – 9:00 am – 5:30 pm
Meeting place: Onsite gallery, 199 Richmond Street West.
11:00 am-12:30 pm: Panel
Sumia Ali, McMaster University
Grace Grothaus, PhD candidate, York University
Dr. Alice Jarry, Speculative Life BioLab, Concordia University
Dr. Marissa Davis, University of Waterloo
12:30-1:30pm: lunch – catered
1:30-2:15 Wetlands walk
Dr. Alice Jarry, Speculative Life BioLab, Concordia University – Plant based filtration systems
4:30-5:15 The lichen monitoring walk. This program at rare is one of several long-term ecological monitoring programs that yield valuable baseline data and can help to identify critical changes in ecosystem dynamics.
5:30 – return to Toronto
For more information and for media inquiries please contact Roberta Buiani – ArtSci Salon, The Fields Institute email@example.com Jane Tingley – Slolab, York University firstname.lastname@example.org
From the April 2, 2023 ArtSci Salon announcement (received via email),
call for GLAM Incubator projects for 2023 – 2024
For more information about the Call for Projects please visit the GLAM Incubator’s website For specific questions about the Incubator or this year’s call, please email me directly at email@example.com. I am happy to answer any questions you or any potential partner organizations might have.
The GLAM Incubator is a research and support hub that connects galleries, libraries, archives, and museums with industry partners, researchers, and students to advance the development of seedling projects that benefit cultural institutions, industry, and the research and teaching goals of universities worldwide. The overarching goal of the Incubator is to provide support to experimental projects that benefit the GLAM industries and engage students. It provides the broader context and overarching structure for an ongoing series of responsive, finite, cross-sector action research collaborations. A collaboration between the Faculty of Information and the Knowledge Media Design Institute at the University of Toronto, the GLAM Incubator provides space, administrative assistance, research expertise, equipment, event facilitation, limited funding, and knowledge mobilization support.
Each year, the GLAM Incubator puts out a Call for Projects from GLAM institutions for small-scale projects that experiment or incubate new programming, service models, interactive experiences, technical services, knowledge media, and user interfaces that will have an impact on GLAM institutions or professions more broadly. Please consult our Call for Projects page for more information.
The GLAM Incubator is a theory and innovation lab dedicated to launching and supporting small-scale projects focused on the development of cutting-edge programming, service models, interactive experiences, knowledge media, and user interfaces that address a specific issue or opportunity associated with emerging technologies within the GLAM sector. The themes and contents of the projects supported by the Incubator evolve in keeping with shifting technological developments and in response to the fluctuating needs and concerns of the various stakeholders involved in the cultural industry sectors (professionals, patrons, funders).
The Incubator will use its resources and infrastructure to run multiple projects concurrently. In addition to meeting the above criteria, projects will demonstrate a capacity to engage a diversity of stakeholders, as well as new and existing community partners. Projects will be “small-scale,” with a well-bounded research design (e.g. a study addressing a specific issue or timed opportunity faced by a community or industry partner) and short-term (1-3 years) duration. Active projects will be set up in a “doored lab,” either dedicated or shared. The Incubator will purchase or assist with the purchase of technological equipment and software required for the research. It will assist with administrative tasks and knowledge mobilization activities, as described in more detail below. It will provide in-kind support to help project leads secure external grants to fund other costs associated with the research (including research assistant salaries, conference travel, etc.). In exchange, project leads and their teams will ensure that a significant proportion of the research activities occur within that space.
Project teams must participate in an annual Symposium and engage their research and results in Incubator-supported knowledge mobilization activities, public or community outreach activities, and student engagement opportunities, where applicable.
Incubator projects will be selected through an application process.
Two-page description of the project that includes an explanation of the project’s purpose and impact on a GLAM industry or profession;
Resume(s) of the lead applicant(s);
A list of collaborators including brief biographies or descriptive information
A sustainability plan for continuing the project after incubation;
A list of potential equipment, space, administrative, and funding needs.
We also welcome inquiries from potential applicants.
Enjoy the events and good luck with your submission to GLAM.
Should you be interested in the ArtSci Salon event titled ‘On Ethics of Care’ (part of whatever this series and exhibition is being called), held in late March 2023, there’s an embedded two-hour video on their ‘Re-situating: more-than-human’ webpage,
Toronto’s Art/Sci Salon’s January 30, 2023 announcement (received via email) lists information for two organizations, the Onsite Gallery’s events and the Salon’s own events.
This gallery is located in Toronto, Ontario at 199 Richmond St. W. From the homepage, “It is the flagship professional gallery of OCAD [Ontario College of Art and Design] University and an experimental curatorial platform for art, design and new media.”
From the Onsite Gallery ‘more-than-human‘ event page. First, there’s the exhibition (Note 1: I found the gallery’s event page I’m using here more informative than the email announcement; Note 2: I have not included the images featuring the artists and their work),
February 01 to May 13, 2023
Curated by Jane Tingley
Core exhibition of the CONTACT Photography Festival
more-than-human presents media artworks at the intersection of art, science, Indigenous worldviews, and technology that speculatively and poetically use multimodal storytelling as a vehicle for interpreting, mattering, and embodying more-than-human ecologies. The artworks in this exhibition aim to critically and emotionally engage with the important work of decentering the human and rethinking the perspective that sees nature as a lifeless resource for exploitation. Many of the artworks use technological and scientific tools as entry points for witnessing and interacting with these more-than-human worlds, as they help visualize phenomena beyond human sensory perception while nevertheless situating us within them. Combined, the artworks in the show weave a story that tells a tale of symbiosis, intersections, and more-than-human relationality. They incorporate scientific, philosophical, and Indigenous perspectives to create an experiential tapestry that asks the viewer to reconsider, reorient, and rethink relationships with the more-than-human.
Jane Tingley is an artist, curator, Director of the SLOlab: Sympoietic Living Ontologies Lab and Associate Professor at York University. Her studio work combines traditional studio practice with new media tools – and spans responsive/interactive installation, performative robotics, and telematically connected distributed sculptures/installations. Her work is interdisciplinary in nature and explores the creation of spaces and experiences that push the boundaries between science and magic, interactivity, and playfulness, and offer an experience to the viewer that is accessible both intellectually and technologically. Using distributed technologies, her current work investigates the hidden complexity found in the natural world and explores the deep interconnections between humans and non-humans. As a curator her interests lie at the intersection of art, science, and technology, with a special interest in collaborative creativity as impetus for innovation and discovery. Recent exhibitions include Hedonistika (2014) at the Musée d’art contemporain (Mtl, CA), INTERACTION (2016) and Agents for Change (2020) at THE MUSEUM (Kitchener, CA). As an artist she has participated in exhibitions and festivals in the Americas, the Middle East, Asia, and Europe – including translife – International Triennial of Media Art at the National Art Museum of China, Beijing, the Elektra Festival in Montréal (CA) and the Künstlerhaus in Vienna (AT). She received the Kenneth Finkelstein Prize in Sculpture (CA) and the first prize in the iNTERFACES – Interactive Art Competition (PT).
Ursula Biemann is an artist, author and video essayist. Her artistic practice is research oriented and involves fieldwork from Greenland to Amazonia, where she investigates climate change and the ecologies of oil, ice, forests and water. In her multi-layered videos, she interweaves vast cinematic landscapes with documentary footage, science fiction poetry and academic findings to narrate a changing planetary reality. In 2018, Biemann was commissioned by the Museo de Arte, Universidad Nacional de Colombia to co-create a new Indigenous University in the south of Colombia led by the Inga people, to which she contributes the online platform Devenir Universidad. Her recent video installation Forest Mind (2021) emerges from this long-term collaboration. She has published numerous books, including Forest Mind (2022) and the audiovisual online monograph Becoming Earth on her ecological video works between 2011 and 2021. Biemann has exhibited internationally, with recent solo exhibitions at MAMAC, Nice and the Centre culturel suisse, Paris. She was appointed Doctor honoris causa in Humanities by Umeå University in Sweden, and has received the 2009 Prix Meret Oppenheim, the Swiss Grand Award for Art, and the 2022 Zurich Art Award.
Lindsey french (she/they) is a settler artist, educator and writer whose work engages in multi-sensory signaling within ecological and technological systems. She has exhibited widely including at the Museum of Contemporary Art (Chicago), the International Museum of Surgical Science (Chicago), Pratt Manhattan Gallery (New York), the Miller Gallery for Contemporary Art (Pittsburgh), and SixtyEight Art Institute (Copenhagen). Recent publications include chapters for Ambiguous Territory: Architecture, Landscape, and the Postnatural (Actar, 2022), Olfactory Art and The Political in an Age of Resistance (Routledge, 2021), Why Look at Plants (Brill, 2019), and poetry for the journal Forty-Five. They earned an interdisciplinary BA in Environment, Interaction, and Design (Hampshire College), and an MFA in Art and Technology Studies (School of the Art Institute of Chicago). Newly based in the prairie landscape of Treaty 4 territory in Regina, Saskatchewan, french teaches as an Assistant Professor in Creative Technologies in the Faculty of Media, Art, and Performance at the University of Regina.
Grace Grothaus is a computational media artist whose research explores ecosystemic human and plant relationships in relation to the present global climate crisis and speculative futures. She is interested in art’s potential to foster empathy with more-than-human worlds. Frequently collaborative, Grace works with scientists, engineers, musicians and other visual and performing artists. Her research-creation is expressed as physical computing installations which take place both outdoors and in the gallery and often center around the sensing and visualization of invisible environmental phenomena. Her artworks have been exhibited widely, including at the International Symposium on Electronic Art (Barcelona, ES & Durban, SA), Environmental Crisis: Art & Science (London, UK), Cité Internationale des Arts (Paris, FR), and the World Creativity Biennale (Rio de Janeiro, BR). Grothaus has received numerous awards including from the United States National Foundation for Advancement in the Arts. Currently she is working toward a PhD in Digital Media at York University where she has been named a VISTA scholar and a Graduate Fellow of Academic Distinction.
Dolleen Tisawii’ashii Manning is an interdisciplinary artist and Queen’s National Scholar in Anishinaabe Language, Knowledge, and Culture (ALKC) in the Department of Philosophy and Cultural Studies at Queen’s University. Manning has expertise in Anishinaabe ontology, mnidoo interrelationality, phenomenology, and art. A member of Kettle and Stoney Point First Nation, her primary philosophical influence and source of creativity is her early childhood grounding in Anishinaabe onto-epistemology. She is Principal Investigator of Earthdiver: Land-Based Worlding (MITACS), and Co-Investigator on Pluriversal Worlding with Extended Reality. Manning co-directs the cross-institutional Peripheral Visions Co-Lab (York and Queen’s). She is an affiliate of Revision Centre for Art and Social Justice, and Fellow of The International Institute for Critical Studies in Improvisation (IICSI).
Mary Bunch is a media artist, Canada Research Chair, and Associate Professor, Cinema and Media Arts at York University. Through theoretical inquiry and collaborative research creation, Bunch mobilizes queer, feminist, disability and decolonial frameworks to better understand peripheral worldmaking imaginaries in media arts and intermedial performance. She is co-editor of a special issue on Access Aesthetics in Public, Principal Investigator on the research creation project Pluriversal Worlding with Extended Reality (SSHRC Insight) and co-investigator on Earthdiver: Land-Based Worlding (MITACS). Dr Bunch is co-director of the Peripheral Visions Co-Lab, Executive Committee member of Sensorium: Centre for Digital Arts and Technology, a core member of Vision: Science to Applications (VISTA), a Fellow at the Bonham Centre for Sexual Diversity Studies, and an Affiliate of Revision Centre for Art and Social Justice.
Suzanne Morrissette (she/her) is an artist, curator, and scholar who is currently based out of Toronto. Her father’s parents were Michif- and Cree-speaking Metis with family histories tied to the Interlake and Red River regions and Scrip in the area now known as Manitoba. Her mother’s parents came from Canadian-born farming families descended from United Empire Loyalists and Mennonites from Russia. Morrissette was born and raised in Winnipeg and is a citizen of the Manitoba Metis Federation. As an artistic researcher Suzanne’s interests include: family and community knowledge, methods of translation, the telling of in-between histories, and practices of making that support and sustain life. Her two recent solo exhibitions, What does good work look like? and translations, recently opened in Toronto (Gallery 44) and Montreal (daphne art centre), respectively. Her work has appeared in numerous group exhibitions such as Lii Zoot Tayr (Other Worlds), an exhibition of Metis artists working with concepts of the unknowable, and the group exhibition of audio-based work about waterways called FLOW with the imagineNATIVE Film + Media Art Festival. Morrissette holds a PhD from York University in Social and Political Thought. She currently holds the position of Assistant Professor and Graduate Program Director for the Criticism and Curatorial Practices and Contemporary Art, Design, and New Media Histories Master’s programs at OCAD University.
Joel Ong (PhD, MSc. Bioart) is a media artist whose works connect scientific and artistic approaches to the environment, developed from more than a decade of explorations in sound, installation and socially conscious art. His conceptual explorations revolve around metaphors of distance and connectivity, assiduously reworking the notion of the ‘environment’ – how different tools and scales of observation reveal diverse biotic and abiotic relationalities, and how these continually oscillate between natural and computational worlds. His works have been shown internationally at the Currents New Media Festival, Nuit Blanche Toronto, the Seattle Art Museum, the Gregg Museum of Art and Design, the Penny Stamps Gallery and the Ontario Science Centre, among others. Joel is Associate Professor in Computational Arts and Director of Sensorium: The Centre for Digital Arts and Technology at York University in Toronto, Canada. His research has been funded by agencies such as SSHRC, eCampus Ontario, and Women and Gender Equality Canada.
Rasa Smite and Raitis Smits are Riga- and Karlsruhe-based artists and co-founders of the RIXC Center for New Media Culture in Riga [Latvia], co-curators of the RIXC Art and Science Festival, chief editors of Acoustic Space, as well as co-chairs of the recently founded NAIA – Naturally Artificial Intelligence Art association in Karlsruhe, Germany. Together they create visionary and networked artworks – from pioneering internet radio experiments in the 1990s, to artistic investigations of the electromagnetic spectrum and collaborations with radio astronomers, to more recent “techno-ecological” explorations. Their projects have been nominated for prizes (Purvitis Prize 2019, 2021; International Public Arts Award – Euroasia region 2021), awarded (Ars Electronica 1998; Falling Walls – Science Breakthrough 2021) and shown widely, including at the Venice Architecture Biennale, the Latvian National Museum of Arts, the House of Electronic Arts in Basel, the Ars Electronica Festival in Linz, and other venues, exhibitions and festivals in Europe, the US, Canada and Asia. More recently, they have both been lecturers in the MIT ACT – Art, Culture, Technology program (2018-2021) in Boston.
Rasa Smite holds a PhD in the sociology of media and culture; her thesis, Creative Networks: In the Rear-View Mirror of Eastern European History, has been published by the Institute of Network Cultures in Amsterdam. Currently she is a Professor of New Media Art at Liepaja University and a Senior Researcher at the FHNW Academy of Art and Design in Basel, Switzerland.
Raitis Smits holds a doctoral degree in arts, and he is a Professor at the Art Academy of Latvia. In 2017 Raitis was a Fulbright Researcher at The Graduate Center, CUNY, in New York City.
Artists Rasa Smite & Raitis Smits, Grace Grothaus, Suzanne Morrissette and Lindsey french introduce their works exhibited in more-than-human and engage in a discussion about their practice. Moderated by Jane Tingley.
Forest Mind (31 minutes) tackles the underlying concepts that distinguish Indigenous knowledge systems from those of modern science, gauging the limits of the rationalism that has dominated Western thinking for the last 200 years.
Artists Joel Ong, Jane Tingley, Dolleen Tisawii’ashii Manning and Mary Bunch introduce their works exhibited in more-than-human and engage in a discussion about their practice. Moderated by Lisa Deanne Smith.
I look forward to 2023 and hope it will be as stimulating as 2022 proved to be. Here’s an overview of the year that was on this blog:
Sounds of science
It seems 2022 was the year that science discovered the importance of sound and the possibilities of data sonification. Neither is new but this year seemed to signal a surge of interest or maybe I just happened to stumble onto more of the stories than usual.
This is not an exhaustive list; you can check out my ‘Music’ category for more. I have tried to include audio files with the postings but it all depends on how accessible the researchers have made them.
Aliens on earth: machinic biology and/or biological machinery?
When I first started following stories in 2008 (?) about technology or machinery being integrated with the human body, it was mostly about assistive technologies such as neuroprosthetics. You’ll find most of this year’s material in the ‘Human Enhancement’ category or you can search the tag ‘machine/flesh’.
However, the line between biology and machine became a bit more blurry for me this year. You can see what’s happening in the titles listed below (you may recognize the xenobot story; there was an earlier version of xenobots featured here in 2021):
US National Academies Sept. 22-23, 2022 workshop on techno, legal & ethical issues of brain-machine interfaces (BMIs) September 21, 2022 posting
I hope the US National Academies issues a report on their “Brain-Machine and Related Neural Interface Technologies: Scientific, Technical, Ethical, and Regulatory Issues – A Workshop” for 2023.
Meanwhile the race to create brainlike computers continues and I have a number of posts which can be found under the category of ‘neuromorphic engineering’ or you can use these search terms ‘brainlike computing’ and ‘memristors’.
On the artificial intelligence (AI) side of things, I finally broke down and added an ‘artificial intelligence (AI)’ category to this blog sometime between May and August 2021. Previously, I had used the ‘robots’ category as a catchall. There are other stories but these ones feature public engagement and policy (btw, it’s a Canadian Science Policy Centre event), respectively,
“How AI-designed fiction reading lists and self-publishing help nurture far-right and neo-Nazi novelists” December 6, 2022 posting
While there have been issues over AI, the arts, and creativity previously, this year they sprang into high relief. The list starts with my two-part review of the Vancouver Art Gallery’s AI show; I share most of my concerns in part two. The third post covers intellectual property issues (mostly visual arts but literary arts get a nod too). The fourth post upends the discussion,
“Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (1 of 2): The Objects” July 28, 2022 posting
“Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (2 of 2): Meditations” July 28, 2022 posting
“AI (artificial intelligence) and art ethics: a debate + a Botto (AI artist) October 2022 exhibition in the UK” October 24, 2022 posting
Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms? August 30, 2022 posting
Interestingly, most of the concerns seem to be coming from the visual and literary arts communities; I haven’t come across major concerns from the music community. (The curious can check out Vancouver’s Metacreation Lab for Artificial Intelligence [located on a Simon Fraser University campus]. I haven’t seen any cautionary or warning essays there; it’s run by an AI and creativity enthusiast [professor Philippe Pasquier]. The dominant but not sole focus is art, i.e., music and AI.)
There is a ‘new kid on the block’ which has been attracting a lot of attention this month. If you’re curious about the latest and greatest AI anxiety,
Peter Csathy’s December 21, 2022 Yahoo News article (originally published in The WRAP) makes this proclamation in the headline “Chat GPT Proves That AI Could Be a Major Threat to Hollywood Creatives – and Not Just Below the Line | PRO Insight”
Mouhamad Rachini’s December 15, 2022 article for the Canadian Broadcasting Corporation’s (CBC) online news offers a more general overview of the ‘new kid’, along with an embedded CBC Radio file which runs approximately 19 mins. 30 secs. It’s titled “ChatGPT a ‘landmark event’ for AI, but what does it mean for the future of human labour and disinformation?” The chatbot’s developer, OpenAI, has been mentioned here many times, including in the previously listed July 28, 2022 posting (part two of the VAG review) and the October 24, 2022 posting.
Opposite world (quantum physics in Canada)
Quantum computing made more of an impact here (on my blog) than usual. It started in 2021 with the announcement of a National Quantum Strategy in that year’s Canadian federal government budget and gained some momentum in 2022:
“Quantum Mechanics & Gravity conference (August 15 – 19, 2022) launches Vancouver (Canada)-based Quantum Gravity Institute and more” July 26, 2022 posting Note: This turned into one of my ‘in depth’ pieces where I comment on the ‘Canadian quantum scene’ and highlight the appointment of an expert panel for the Council of Canada Academies’ report on Quantum Technologies.
“Bank of Canada and Multiverse Computing model complex networks & cryptocurrencies with quantum computing” July 25, 2022 posting
There’s a Vancouver area company, General Fusion, highlighted in both postings, and the October posting includes an embedded video of Canadian-born rapper Baba Brinkman’s “You Must LENR” [Low Energy Nuclear Reactions, sometimes called Lattice Enabled Nanoscale Reactions, Cold Fusion, or CANR (Chemically Assisted Nuclear Reactions)].
BTW, fusion energy can generate temperatures up to 150 million degrees Celsius.
Ukraine, science, war, and unintended consequences
Russian President Vladimir Putin’s war on Ukraine has reverberated through Europe and spread to other countries that have long been dependent on the region for natural gas. But while oil-producing countries and gas lobbyists are arguing for more drilling, global energy investments reflect a quickening transition to cleaner energy. [emphasis mine]
Call it the Putin effect – Russia’s war is speeding up the global shift away from fossil fuels.
In December [2022?], the International Energy Agency [IEA] published two important reports that point to the future of renewable energy.
First, the IEA revised its projection of renewable energy growth upward by 30%. It now expects the world to install as much solar and wind power in the next five years as it installed in the past 50 years.
The second report showed that energy use is becoming more efficient globally, with efficiency increasing by about 2% per year. As energy analyst Kingsmill Bond at the energy research group RMI noted, the two reports together suggest that fossil fuel demand may have peaked. While some low-income countries have been eager for deals to tap their fossil fuel resources, the IEA warns that new fossil fuel production risks becoming stranded, or uneconomic, in the next 20 years.
Kyte’s essay is not all ‘sweetness and light’ but it does provide a little optimism.
Kudos, nanotechnology, culture (pop & otherwise), fun, and a farewell in 2022
Sometimes I like to know where the money comes from and I was delighted to learn of the Ărramăt Project funded through the federal government’s New Frontiers in Research Fund (NFRF). Here’s more about the Ărramăt Project from the February 14, 2022 posting,
“The Ărramăt Project is about respecting the inherent dignity and interconnectedness of peoples and Mother Earth, life and livelihood, identity and expression, biodiversity and sustainability, and stewardship and well-being. Arramăt is a word from the Tamasheq language spoken by the Tuareg people of the Sahel and Sahara regions which reflects this holistic worldview.” (Mariam Wallet Aboubakrine)
Over 150 Indigenous organizations, universities, and other partners will work together to highlight the complex problems of biodiversity loss and its implications for health and well-being. The project Team will take a broad approach and be inclusive of many different worldviews and methods for research (i.e., intersectionality, interdisciplinary, transdisciplinary). Activities will occur in 70 different kinds of ecosystems that are also spiritually, culturally, and economically important to Indigenous Peoples.
The project is led by Indigenous scholars and activists …
Kudos to the federal government and all those involved in the Salmon science camps, the Ărramăt Project, and other NFRF projects.
There are many other nanotechnology posts here but this appeals to my need for something lighter at this point,
“Say goodbye to crunchy (ice crystal-laden) in ice cream thanks to cellulose nanocrystals (CNC)” August 22, 2022 posting
The following posts tend to be culture-related, high and/or low but always with a science/nanotechnology edge,
Sadly, it looks like 2022 is the last year that Ada Lovelace Day is to be celebrated.
… this year’s Ada Lovelace Day is the final such event due to lack of financial backing. Suw Charman-Anderson told the BBC [British Broadcasting Corporation] the reason it was now coming to an end was:
A few things that didn’t fit under the previous headings but stood out for me this year. Science podcasts, which were a big feature in 2021, also proliferated in 2022. I think they might have peaked and now (in 2023) we’ll see what survives.
Nanotechnology, the main subject of this blog, continues to be investigated and increasingly integrated into products. You can search the ‘nanotechnology’ category here for posts of interest, something I just tried. It surprises even me (I should know better) how broadly nanotechnology is researched and applied.
If you want a nice tidy list, Hamish Johnston, in a December 29, 2022 posting on the Physics World Materials blog, offers “Materials and nanotechnology: our favourite research in 2022.” Note: Links have been removed,
“Inherited nanobionics” makes its debut
The integration of nanomaterials with living organisms is a hot topic, which is why this research on “inherited nanobionics” is on our list. Ardemis Boghossian at EPFL [École polytechnique fédérale de Lausanne] in Switzerland and colleagues have shown that certain bacteria will take up single-walled carbon nanotubes (SWCNTs). What is more, when the bacteria cells split, the SWCNTs are distributed amongst the daughter cells. The team also found that bacteria containing SWCNTs produce significantly more electricity when illuminated than do bacteria without nanotubes. As a result, the technique could be used to grow living solar cells which, as well as generating clean energy, also have a negative carbon footprint when it comes to manufacturing.
Getting back to Canada, I’m finding Saskatchewan featured more prominently here. They do a good job of promoting their science, especially the folks at the Canadian Light Source (CLS), Canada’s synchrotron, in Saskatoon. Canadian live science outreach events seem to be coming back (slowly). Cautious organizers (who have a few dollars to spare) are also enthusiastic about hybrid events, which combine online and live outreach.
Hopefully this year I will catch up with the Council of Canadian Academies (CCA) output and finally review a few of their 2021 reports, such as Leaps and Boundaries, a report on artificial intelligence applied to science inquiry, and, perhaps, Powering Discovery, a report on research funding and the Natural Sciences and Engineering Research Council of Canada.
Given what appears to be a renewed campaign to have germline editing (gene editing which affects all of your descendants) approved in Canada, I might even reach back to a late 2020 CCA report, Research to Reality: somatic gene and engineered cell therapies. It’s not the same as germline editing but gene editing exists on a continuum.
For anyone who wants to see the CCA reports for themselves they can be found here (both in progress and completed).
I’m also going to be paying more attention to how public relations and special interests influence what science is covered and how it’s covered. In doing this 2022 roundup, I noticed that I featured an overview of fusion energy not long before the breakthrough. Indirect influence on this blog?
My post was precipitated by an article by Alex Pasternack in Fast Company. I’m wondering what precipitated Pasternack’s interest in fusion energy, since his self-description on the Huffington Post website states this: “… focus on the intersections of science, technology, media, politics, and culture. My writing about those and other topics—transportation, design, media, architecture, environment, psychology, art, music … .”
He might simply have received a press release that stimulated his imagination and/or been approached by a communications specialist or publicist with an idea. There’s a reason why there are so many public relations/media relations jobs and agencies.
Que sera, sera (Whatever will be, will be)
I can confidently predict that 2023 has some surprises in store. I can also confidently predict that the European Union’s big research projects (€1 billion each in funding for the Graphene Flagship and the Human Brain Project over a ten-year period) will sunset in 2023, ten years after they were first announced in 2013 – unless the powers that be extend the funding past 2023.
I expect the Canadian quantum community to provide more fodder for me in the form of a 2023 report on Quantum Technologies from the Council of Canadian Academies, if nothing else.
I’ve already featured these 2023 science events but just in case you missed them,
2023 Preview: Bill Nye the Science Guy’s live show and Marvel Avengers S.T.A.T.I.O.N. (Scientific Training And Tactical Intelligence Operative Network) coming to Vancouver (Canada) November 24, 2022 posting
September 2023: Auckland, Aotearoa New Zealand set to welcome women in STEM (science, technology, engineering, and mathematics) November 15, 2022 posting
Getting back to this blog, it may not seem like a new year during the first few weeks of 2023 as I have quite the stockpile of draft posts. At this point I have drafts that are dated from June 2022 and expect to be burning through them so as not to fall further behind but will be interspersing them, occasionally, with more current posts.
Most importantly: a big thank you to everyone who drops by and reads (and sometimes even comments) on my posts!!! It’s very much appreciated and, on that note, I wish you all the best for 2023.
Who is an artist? What is an artist? Can everyone be an artist? These are the kinds of questions you can expect with the rise of artificially intelligent artists/collaborators. Of course, these same questions have been asked many times before the rise of AI (artificial intelligence) agents/programs in the field of visual art. Each time the questions are raised is an opportunity to examine our beliefs from a different perspective. And, not to be forgotten, there are questions about money.
First, the ‘art’,
Shanti Escalante-De Mattei’s September 1, 2022 article for ArtNews.com provides an overview of the latest AI art controversy (Note: A link has been removed),
The debate around AI art went viral once again when a man won first place at the Colorado State Fair’s art competition in the digital category with a work he made using text-to-image AI generator Midjourney.
Twitter user and digital artist Genel Jumalon tweeted out a screenshot from a Discord channel in which user Sincarnate, aka game designer Jason Allen, celebrated his win at the fair. Jumalon wrote, “Someone entered an art competition with an AI-generated piece and won the first prize. Yeah that’s pretty fucking shitty.”
The comments on the post range from despair and anger as artists, both digital and traditional, worry that their livelihoods might be at stake after years of believing that creative work would be safe from AI-driven automation. [emphasis mine]
Rachel Metz’s September 3, 2022 article for CNN provides more details about how the work was generated (Note: Links have been removed),
Jason M. Allen was almost too nervous to enter his first art competition. Now, his award-winning image is sparking controversy about whether art can be generated by a computer, and what, exactly, it means to be an artist.
In August, Allen, a game designer who lives in Pueblo West, Colorado, won first place in the emerging artist division’s “digital arts/digitally-manipulated photography” category at the Colorado State Fair Fine Arts Competition. His winning image, titled “Théâtre D’opéra Spatial” (French for “Space Opera Theater”), was made with Midjourney — an artificial intelligence system that can produce detailed images when fed written prompts. A $300 prize accompanied his win.
Allen’s winning image looks like a bright, surreal cross between a Renaissance and steampunk painting. It’s one of three such images he entered in the competition. In total, 11 people entered 18 pieces of art in the same category in the emerging artist division.
The definition for the category in which Allen competed states that digital art refers to works that use “digital technology as part of the creative or presentation process.” Allen stated that Midjourney was used to create his image when he entered the contest.
The newness of these tools, how they’re used to produce images, and, in some cases, the gatekeeping for access to some of the most powerful ones has led to debates about whether they can truly make art or assist humans in making art.
This came into sharp focus for Allen not long after his win. Allen had posted excitedly about his win on Midjourney’s Discord server on August 25, along with pictures of his three entries; it went viral on Twitter days later, with many artists angered by Allen’s win because of his use of AI to create the image, as a story by Vice’s Motherboard reported earlier this week.
“This sucks for the exact same reason we don’t let robots participate in the Olympics,” one Twitter user wrote.
“This is the literal definition of ‘pressed a few buttons to make a digital art piece’,” another Tweeted. “AI artwork is the ‘banana taped to the wall’ of the digital world now.”
Yet while Allen didn’t use a paintbrush to create his winning piece, there was plenty of work involved, he said.
“It’s not like you’re just smashing words together and winning competitions,” he said.
You can feed a phrase like “an oil painting of an angry strawberry” to Midjourney and receive several images from the AI system within seconds, but Allen’s process wasn’t that simple. To get the final three images he entered in the competition, he said, took more than 80 hours.
First, he said, he played around with phrasing that led Midjourney to generate images of women in frilly dresses and space helmets — he was trying to mash up Victorian-style costuming with space themes, he said. Over time, with many slight tweaks to his written prompt (such as to adjust lighting and color harmony), he created 900 iterations of what led to his final three images. He cleaned up those three images in Photoshop, such as by giving one of the female figures in his winning image a head with wavy, dark hair after Midjourney had rendered her headless. Then he ran the images through another software program called Gigapixel AI that can improve resolution and had the images printed on canvas at a local print shop.
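Allen’s process, as described above, amounts to a systematic sweep over small prompt variations, repeated across many refinement rounds. Below is a minimal, hypothetical sketch of that combinatorics in Python; the base prompt and parameter lists are invented for illustration (they are not Allen’s actual prompts), and Midjourney itself is driven through Discord commands rather than any API like this.

```python
from itertools import product

# Hypothetical base prompt and tweakable parameters -- invented for
# illustration, not Allen's actual wording.
base = "opera house interior, figures in Victorian dress with space helmets"
lighting = ["volumetric lighting", "golden hour", "chiaroscuro", "soft rim light"]
palettes = ["muted gold and umber", "teal and orange", "monochrome sepia",
            "rich jewel tones", "desaturated pastels"]
styles = ["baroque painting", "cinematic concept art", "steampunk illustration"]

def prompt_variants(base, lighting, palettes, styles):
    """Yield one complete prompt per combination of the tweakable parameters."""
    for light, palette, style in product(lighting, palettes, styles):
        yield f"{base}, {light}, {palette}, {style}"

variants = list(prompt_variants(base, lighting, palettes, styles))
# 4 lighting x 5 palettes x 3 styles = 60 prompts per sweep; repeating
# sweeps while refining the parameter lists is how a count on the order
# of Allen's ~900 iterations accumulates.
```

Each variant would then be submitted to the generator, with the best outputs kept for the manual cleanup (Photoshop) and upscaling (Gigapixel AI) steps the article describes.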
Ars Technica has run a number of articles on the subject of Art and AI, Benj Edwards in an August 31, 2022 article seems to have been one of the first to comment on Jason Allen’s win (Note 1: Links have been removed; Note 2: Look at how Edwards identifies Jason Allen as an artist),
A synthetic media artist named Jason Allen entered AI-generated artwork into the Colorado State Fair fine arts competition and announced last week that he won first place in the Digital Arts/Digitally Manipulated Photography category, Vice reported Wednesday [August 31, 2022?] based on a viral tweet.
Allen’s victory prompted lively discussions on Twitter, Reddit, and the Midjourney Discord server about the nature of art and what it means to be an artist. Some commenters think human artistry is doomed thanks to AI and that all artists are destined to be replaced by machines. Others think art will evolve and adapt with new technologies that come along, citing synthesizers in music. It’s a hot debate that Wired covered in July.
It’s worth noting that the invention of the camera in the 1800s prompted similar criticism related to the medium of photography, since the camera seemingly did all the work compared to an artist that labored to craft an artwork by hand with a brush or pencil. Some feared that painters would forever become obsolete with the advent of color photography. In some applications, photography replaced more laborious illustration methods (such as engraving), but human fine art painters are still around today.
Benj Edwards in a September 12, 2022 article for Ars Technica examines how some art communities are responding (Note: Links have been removed),
Confronted with an overwhelming amount of artificial-intelligence-generated artwork flooding in, some online art communities have taken dramatic steps to ban or curb its presence on their sites, including Newgrounds, Inkblot Art, and Fur Affinity, according to Andy Baio of Waxy.org.
Baio, who has been following AI art ethics closely on his blog, first noticed the bans and reported about them on Friday [Sept. 9, 2022?]. …
The arrival of widely available image synthesis models such as Midjourney and Stable Diffusion has provoked an intense online battle between artists who view AI-assisted artwork as a form of theft (more on that below) and artists who enthusiastically embrace the new creative tools.
… a quickly evolving debate about how art communities (and art professionals) can adapt to software that can potentially produce unlimited works of beautiful art at a rate that no human working without the tools could match.
A few weeks ago, some artists began discovering their artwork in the Stable Diffusion data set, and they weren’t happy about it. Charlie Warzel wrote a detailed report about these reactions for The Atlantic last week [September 7, 2022]. With battle lines being drawn firmly in the sand and new AI creativity tools coming out steadily, this debate will likely continue for some time to come.
Filthy lucre becomes more prominent in the conversation
Lizzie O’Leary, in a September 12, 2022 article for Fast Company, presents a transcript of an interview (from the TBD podcast) she conducted with Drew Harwell (tech reporter covering A.I. for the Washington Post) about the ‘Jason Allen’ win,
I’m struck by how quickly these art A.I.s are advancing. DALL-E was released in January of last year and there were some pretty basic images. And then, a year later, DALL-E 2 is using complex, faster methods. Midjourney, the one Jason Allen used, has a feature that allows you to upscale and downscale images. Where is this sudden supply and demand for A.I. art coming from?
You could look back to five years ago when they had these text-to-image generators and the output would be really crude. You could sort of see what the A.I. was trying to get at, but we’ve only really been able to cross that photorealistic uncanny valley in the last year or so. And I think the things that have contributed to that are, one, better data. You’re seeing people invest a lot of money and brainpower and resources into adding more stuff into bigger data sets. We have whole groups that are taking every image they can get on the internet. Billions, billions of images from Pinterest and Amazon and Facebook. You have bigger data sets, so the A.I. is learning more. You also have better computing power, and those are the two ingredients to any good piece of A.I. So now you have A.I. that is not only trained to understand the world a little bit better, but it can now really quickly spit out a very finely detailed generated image.
Is there any way to know, when you look at a piece of A.I. art, what images it referenced to create what it’s doing? Or is it just so vast that you can’t kind of unspool it backward?
When you’re doing an image that’s totally generated out of nowhere, it’s taking bits of information from billions of images. It’s creating it in a much more sophisticated way so that it’s really hard to unspool.
Art generated by A.I. isn’t just a gee-whiz phenomenon, something that wins prizes, or even a fascinating subject for debate—it has valuable commercial uses, too. Some that are a little frightening if you’re, say, a graphic designer.
You’re already starting to see some of these images illustrating news articles, being used as logos for companies, being used in the form of stock art for small businesses and websites. Anything where somebody would’ve gone and paid an illustrator or graphic designer or artist to make something, they can now go to this A.I. and create something in a few seconds that is maybe not perfect, maybe would be beaten by a human in a head-to-head, but is good enough. From a commercial perspective, that’s scary, because we have an industry of people whose whole job is to create images, now running up against A.I.
And the A.I., again, in the last five years, the A.I. has gotten better and better. It’s still not perfect. I don’t think it’ll ever be perfect, whatever that looks like. It processes information in a different, maybe more literal, way than a human. I think human artists will still sort of have the upper hand in being able to imagine things a little more outside of the box. And yet, if you’re just looking for three people in a classroom or a pretty simple logo, you’re going to go to A.I. and you’re going to take potentially a job away from a freelancer whom you would’ve given it to 10 years ago.
I can see a use case here in marketing, in advertising. The A.I. doesn’t need health insurance, it doesn’t need paid vacation days, and I really do wonder about this idea that the A.I. could replace the jobs of visual artists. Do you think that is a legitimate fear, or is that overwrought at this moment?
I think it is a legitimate fear. When something can mirror your skill set, not 100 percent of the way, but enough of the way that it could replace you, that’s an issue. Do these A.I. creators have any kind of moral responsibility to not create it because it could put people out of jobs? I think that’s a debate, but I don’t think they see it that way. They see it like they’re just creating the new generation of digital camera, the new generation of Photoshop. But I think it is worth worrying about because even compared with cameras and Photoshop, the A.I. is a little bit more of the full package and it is so accessible and so hard to match in terms. It’s really going to be up to human artists to find some way to differentiate themselves from the A.I.
This is making me wonder about the humans underneath the data sets that the A.I. is trained on. The criticism is, of course, that these businesses are making money off thousands of artists’ work without their consent or knowledge and it undermines their work. Some people looked at the Stable Diffusion and they didn’t have access to its whole data set, but they found that Thomas Kinkade, the landscape painter, was the most referenced artist in the data set. Is the A.I. just piggybacking? And if it’s not Thomas Kinkade, if it’s someone who’s alive, are they piggybacking on that person’s work without that person getting paid?
Here’s a bit more on the topic of money and art in a September 19, 2022 article by John Herrman for New York Magazine. First, he starts with the literary arts, Note: Links have been removed,
Artificial-intelligence experts are excited about the progress of the past few years. You can tell! They’ve been telling reporters things like “Everything’s in bloom,” “Billions of lives will be affected,” and “I know a person when I talk to it — it doesn’t matter whether they have a brain made of meat in their head.”
We don’t have to take their word for it, though. Recently, AI-powered tools have been making themselves known directly to the public, flooding our social feeds with bizarre and shocking and often very funny machine-generated content. OpenAI’s GPT-3 took simple text prompts — to write a news article about AI or to imagine a rose ceremony from The Bachelor in Middle English — and produced convincing results.
Deepfakes graduated from a looming threat to something an enterprising teenager can put together for a TikTok, and chatbots are occasionally sending their creators into crisis.
More widespread, and probably most evocative of a creative artificial intelligence, is the new crop of image-creation tools, including DALL-E, Imagen, Craiyon, and Midjourney, which all do versions of the same thing. You ask them to render something. Then, with models trained on vast sets of images gathered from around the web and elsewhere, they try — “Bart Simpson in the style of Soviet statuary”; “goldendoodle megafauna in the streets of Chelsea”; “a spaghetti dinner in hell”; “a logo for a carpet-cleaning company, blue and red, round”; “the meaning of life.”
This flood of machine-generated media has already altered the discourse around AI for the better, probably, though it couldn’t have been much worse. In contrast with the glib intra-VC debate about avoiding human enslavement by a future superintelligence, discussions about image-generation technology have been driven by users and artists and focus on labor, intellectual property, AI bias, and the ethics of artistic borrowing and reproduction [emphasis mine]. Early controversies have cut to the chase: Is the guy who entered generated art into a fine-art contest in Colorado (and won!) an asshole? Artists and designers who already feel underappreciated or exploited in their industries — from concept artists in gaming and film and TV to freelance logo designers — are understandably concerned about automation. Some art communities and marketplaces have banned AI-generated images entirely.
Requests are effectively thrown into “a giant swirling whirlpool” of “10,000 graphics cards,” Holz [David Holz, Midjourney founder] said, after which users gradually watch them take shape, gaining sharpness but also changing form as Midjourney refines its work.
This hints at an externality beyond the worlds of art and design. “Almost all the money goes to paying for those machines,” Holz said. New users are given a small number of free image generations before they’re cut off and asked to pay; each request initiates a massive computational task, which means using a lot of electricity.
High compute costs [emphasis mine] — which are largely energy costs — are why other services have been cautious about adding new users. …
Another Midjourney user, Gila von Meissner, is a graphic designer and children’s-book author-illustrator from “the boondocks in north Germany.” Her agent is currently shopping around a book that combines generated images with her own art and characters. Like Pluckebaum [Brian Pluckebaum who works in automotive-semiconductor marketing and designs board games], she brought up the balance of power with publishers. “Picture books pay peanuts,” she said. “Most illustrators struggle financially.” Why not make the work easier and faster? “It’s my character, my edits on the AI backgrounds, my voice, and my story.” A process that took months now takes a week, she said. “Does that make it less original?”
User MoeHong, a graphic designer and typographer for the state of California, has been using Midjourney to make what he called generic illustrations (“backgrounds, people at work, kids at school, etc.”) for government websites, pamphlets, and literature: “I get some of the benefits of using custom art — not that we have a budget for commissions! — without the paying-an-artist part.” He said he has mostly replaced stock art, but he’s not entirely comfortable with the situation. “I have a number of friends who are commercial illustrators, and I’ve been very careful not to show them what I’ve made,” he said. He’s convinced that tools like this could eventually put people in his trade out of work. “But I’m already in my 50s,” he said, “and I hope I’ll be gone by the time that happens.”
The last article I’m featuring here is a September 15, 2021 piece by Agnieszka Cichocka for DailyArt, which provides good, brief descriptions of algorithms, generative adversarial networks, machine learning, artificial neural networks, and more. She is an enthusiast (Note: Links have been removed),
I keep wondering if Leonardo da Vinci, who, in my opinion, was the most forward thinking artist of all time, would have ever imagined that art would one day be created by AI. He worked on numerous ideas and was constantly experimenting, and, although some were failures, he persistently tried new products, helping to move our world forward. Without such people, progress would not be possible.
As humans, we learn by acquiring knowledge through observations, senses, experiences, etc. This is similar to computers. Machine learning is a process in which a computer system learns how to perform a task better in two ways—either through exposure to environments that provide punishments and rewards (reinforcement learning) or by training with specific data sets (the system learns automatically and improves from previous experiences). Both methods help the systems improve their accuracy. Machines then use patterns and attempt to make an accurate analysis of things they have not seen before. To give an example, let’s say we feed the computer with thousands of photos of a dog. Consequently, it can learn what a dog looks like based on those. Later, even when faced with a picture it has never seen before, it can tell that the photo shows a dog.
If you want to see some creative machine learning experiments in art, check out ML x ART. This is a website with hundreds of artworks created using AI tools.
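Cichocka’s dog-photo example is, in essence, supervised learning: train on labelled examples, then label something unseen. As a toy illustration (the two numeric “features” and all the values below are invented stand-ins for real image data, not anything from an actual system), here is a minimal nearest-centroid classifier in Python:

```python
# Toy sketch of the supervised route Cichocka describes: the system is
# "fed" labelled examples, then classifies an example it has never seen.
# Features are hypothetical (say, fur length and ear floppiness, 0 to 1).

def centroid(points):
    """Average each feature across the training examples for one label."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(labelled_examples):
    """Build one centroid per label from (features, label) pairs."""
    by_label = {}
    for features, label in labelled_examples:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(pts) for label, pts in by_label.items()}

def classify(model, features):
    """Predict the label whose centroid is nearest (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(model, key=lambda label: dist(model[label]))

training_data = [
    ((0.9, 0.8), "dog"), ((0.8, 0.9), "dog"), ((0.7, 0.7), "dog"),
    ((0.2, 0.1), "cat"), ((0.3, 0.2), "cat"), ((0.1, 0.3), "cat"),
]
model = train(training_data)
print(classify(model, (0.85, 0.75)))  # prints "dog" for an unseen example
```

Real systems learn from millions of photos with deep neural networks rather than two hand-picked features, but the learn-from-labels, generalize-to-the-unseen shape is the same.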
As the saying goes, “a picture is worth a thousand words” and now, it seems, pictures will be made from words, or so the example of Jason M. Allen feeding prompts to the AI system Midjourney suggests.
I suspect (as others have suggested) that, in the end, artists who use AI systems will be absorbed into the art world in much the same way as artists who use photography or video, or who are considered performance and/or conceptual artists, have been absorbed. There will be some displacement and discomfort as the questions I opened this posting with (Who is an artist? What is an artist? Can everyone be an artist?) are passionately discussed and considered. Underlying many of these questions is the issue of money.
The impact on people’s livelihoods is cheering or concerning depending on how the AI system is being used. Herrman’s September 19, 2022 article highlights two examples that focus on graphic designers: Gila von Meissner, the illustrator and designer, who uses an AI system alongside her own art to illustrate her children’s books in a faster, more cost-effective way, and MoeHong, a graphic designer for the state of California, who uses an AI system to make ‘customized generic art’ for which the state government doesn’t have to pay.
So far, the focus has been on Midjourney and other AI agents that have been created by developers for use by visual artists and writers. What happens when the visual artist or the writer is the developer? A September 12, 2022 article by Brandon Scott Roye for Cool Hunting approaches the question (Note: Links have been removed),
Mario Klingemann and Sasha Stiles on Semi-Autonomous AI Artists
An artist and engineer at the forefront of generating AI artwork, Mario Klingemann and first-generation Kalmyk-American poet, artist and researcher Sasha Stiles both approach AI from a more human, personal angle. Creators of semi-autonomous systems, both Klingemann and Stiles are the minds behind Botto and Technelegy, respectively. They are both artists in their own right, but their creations are too. Within web3, the identity of the “artist” who creates with visuals and the “writer” who creates with words is enjoying a foundational shift and expansion. Many have fashioned themselves a new title as “engineer.”
Based on their primary identities as an artist and poet, Klingemann and Stiles face the conundrum of becoming engineers who design the tools, rather than artists responsible for the final piece. They now have the ability to remove themselves from influencing inputs and outputs.
If you have time, I suggest reading Roye’s September 12, 2022 article as it provides some very interesting ideas although I don’t necessarily agree with them, e.g., “They now have the ability to remove themselves from influencing inputs and outputs.” Anyone who’s following the ethics discussion around AI knows that biases are built into the algorithms whether we like it or not. As for artists and writers calling themselves ‘engineers’, they may get a little resistance from the engineering community.
As users of open source software, Klingemann and Stiles should not have to worry too much about intellectual property. However, it seems copyright for the actual works and patents for the software could raise some interesting issues especially since money is involved.
Who gets the patent and/or the copyright? Assuming you and I are separately employing machine learning to train our AI agents, could there be an argument that, if my version of the AI is different from yours and proves more popular with other content creators/artists, I should own or share the patent to the software and the rights to whatever the software produces?
Getting back to Herrman’s comment about high compute costs and energy, we seem to have an insatiable appetite for energy, and that carries a high cost not only financially but also environmentally.
Here’s more about the exhibition of work by Klingemann’s AI artist, Botto (from an October 6, 2022 announcement received via email),
Mario Klingemann is a pioneering figurehead in the field of AI art, working deep in the field of Machine Learning. Klingemann developed Botto, which is governed by a community of 5,000 people, around the idea of creating an autonomous entity able to be creative and co-creative. Inspired by Goethe’s artificial man in Faust, Botto is a genderless AI entity guided by an international community and art historical trends. Botto creates 350 art pieces per week that are presented to its community. Members of the community give feedback on these art fragments by voting, expressing their individual preferences on what is aesthetically pleasing to them. Collectively, the votes are then used as feedback for Botto’s generative algorithm, dictating what direction Botto should take in its next series of art pieces.
The creative capacity of its algorithm is far beyond the capacities of an individual to combine and find relationships within all the information available to the AI. Botto faces similar issues as a human artist, and it is programmed to self-reflect and ask, “I’ve created this type of work before. What can I show them that’s different this week?”
Once a week, Botto auctions the art fragment with the most votes on SuperRare. All proceeds from the auction go back to the community. The AI artist auctioned its first three pieces, Asymmetrical Liberation, Scene Precede, and Trickery Contagion, for more than $900,000, the most successful premiere by an AI artist. Today, Botto has produced upwards of 22 artworks and current sales have generated over $2 million in total [emphasis mine].
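The vote-driven feedback loop the announcement describes can be sketched in a few lines. To be clear, this is a purely hypothetical illustration (Botto’s actual pipeline is far more sophisticated and not public, and every name and number below is invented), but it shows the general shape of community votes steering a generative system’s preferences:

```python
# Hypothetical sketch of a Botto-style loop: generate a batch of
# "fragments", collect community votes, and nudge the preference
# weights that bias the next batch toward what the community liked.
import random

random.seed(0)  # reproducible sampling for this illustration

STYLES = ["abstract", "figurative", "glitch", "surreal"]

def generate_batch(weights, size=6):
    """Sample a batch of fragments, biased toward preferred styles."""
    return random.choices(STYLES, weights=[weights[s] for s in STYLES], k=size)

def apply_votes(weights, votes, learning_rate=0.1):
    """Shift preference weights toward the styles that earned votes."""
    total = sum(votes.values()) or 1
    for style in weights:
        share = votes.get(style, 0) / total
        weights[style] += learning_rate * share
    return weights

weights = {s: 1.0 for s in STYLES}       # start with no preference
batch = generate_batch(weights)          # this week's 6 fragments
# Pretend the community strongly prefers this week's surreal fragments.
votes = {"surreal": 40, "abstract": 5}
weights = apply_votes(weights, votes)    # "surreal" now weighs the most
```

The interesting design question, which the announcement gestures at, is that the community never paints anything; it only supplies the reward signal that dictates the system’s direction.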
Botto went from having made $1M as of March 2022 to over $2M by October 2022. It seems Botto is a very financially successful artist.
This exhibition (October 26 – 30, 2022) is being held in London, England at this location:
The Department Store, Brixton
248 Ferndale Road
London SW9 8FR
United Kingdom
Beyond searching for their meaning, poems can also often be described through the emotions the reader feels while reading them. Kristiine Kikas, a doctoral student at the School of Humanities of Tallinn University, studied which other sensations arise whilst reading poetry and how they affect the understanding of poems.
The aim of the doctoral thesis was to study the palpability of language [emphasis mine], i.e. sensory saturation, which has so far not received sufficient analysis and application. “In my research, I see reading as an impersonal process, meaning the sensations that arise do not seem to belong to either the reader or the poetry, but to both at the same time,” Kikas describes the perspective of her thesis.
In general, the language of poetry is studied metaphorically, in an effort to understand what a word means either directly or figuratively. A different, “affective” perspective usually studies the effects on the reader of pre-linguistic impulses, or impulses not related to the meaning of the word. Kikas, however, viewed language as a simultaneous proposition and flow of consciousness, i.e. a discussion moving from one statement to another as well as connections that seem to occur intuitively while reading. She sought ways of approaching verbal language, which is considered to trigger analytical thinking in particular, that would help open up sensory saturation and place its observation at the forefront of poetic analysis alongside other modes of studying poetry. To achieve her goals, Kikas applied Gilles Deleuze’s method of radical empiricism and compared several other approaches with it: semiotics, biology, anthropology, modern psychoanalysis and cognitive sciences. [emphases mine]
Kikas describes reading in her doctoral thesis as a constant presence in verbal language, sometimes more and sometimes less pronounced. This type of presence can be felt like colour, posture or birdsong [emphasis mine]. “Following the neuroscientific origins of metaphors, I used the human organism’s tendency to perceive language at the sensory-motor level in my close reading to help replay it using body memory. This trait allows us to physically experience the words we read,” explains Kikas. According to her, the sensations evoked by words and stored in the body can be considered the oneness of the reader and the words, or the reader’s becoming the words. Kikas emphasises that this can only happen if the multiplicity of sensations and meanings that arise during reading is recognised.
“Although the study showed that the saturations associated with verbal language cannot be linked to a broader literary discourse without representational and analytical thinking, the conclusion is that noticing and acknowledging them is important in both experiencing and interpreting the poem,” is how Kikas summarises her doctoral thesis. As her research was only a first attempt at examining sensations in poetry, Kikas hopes to provide material for further discussion. Above all, she encourages readers, in their attempts to understand poetry, to notice and trust even the slightest sensations and impulses triggered while reading, as these are the beginning of even the most abstract meaning.
This is a somewhat older thesis, only loosely related in that it concerns literary matters and has a science aspect too. Tania Hershman, “poet, writer, teacher and editor based in Manchester, UK,” offers this from the about page of her eponymous website. Note: I have moved the paragraphs into a different order,
… After making a living for 13 years as a science journalist, writing for publications such as WIRED and New Scientist, I gave it all up to write fiction, later also poetry and hybrid pieces, and am now based in Manchester in the north of England. I have a first degree in Maths and Physics, a diploma in journalism, an MSc in Philosophy of Science, an MA and a PhD in Creative Writing.
My hybrid book, And What If We Were All Allowed to Disappear, was published in a limited edition by Guillemot Press in March 2020. It is now sold out but can be read in electronic form as part of my PhD in Creative Writing, ‘Particle fictions: an experimental approach to creative writing and reading informed by particle physics’, available to be downloaded from Bath Spa University here: http://researchspace.bathspa.ac.uk/10693/.
This two-part document comprises the work submitted for Tania Hershman’s practice-based PhD in Creative Writing in answer to her primary research question: Can particle fiction and particle physics interrogate each other? Her secondary research question examined the larger question of wholeness and wholes versus parts. The first of the two elements of the PhD is a book-length creative work of what Hershman has defined as “particle fiction” – a book made of parts which works as a whole – entitled ‘And What If We Were All Allowed to Disappear’: an experimental, hybrid work comprising prose, poetry, elements that morph between the two forms, and images, which takes concepts from particle physics as inspiration. The second element of this PhD, the contextualising research, entitled ‘And What If We Were All Allowed To Separate And Come Together’ and written in the style of fictocriticism, provides an overview of particle physics and the many other topics relating to wholeness and wholes versus parts – from philosophy to postmodernism and archaeology – that Hershman investigated in the course of her project. This essay also details the “experiments” Hershman carried out on works which she defined as particle fictions, in order to examine whether it was possible to generalise and formulate a “Standard Model of Particle Fiction” inspired by the Standard Model of Particle Physics, and to inform the creation of her own work of particle fiction.
Artists’ Talk & Webcast
The Canadian Music Centre, 20 St. Joseph Street, Toronto
Thursday, July 7, 7:30 – 9 p.m. [ET] (doors open 7 pm)
These are a Few of Our Favourite Bees investigates wild, native bees and their ecology through playful dioramas, video, audio, relief print and poetry. Inspired by lambe lambe – South American miniature puppet stages for a single viewer – four distinct dioramas convey surreal yet enlightening worlds where bees lounge in cozy environs, animals watch educational films [emphasis mine] and ethereal sounds animate bowls of berries (having been pollinated by their diverse bee visitors). Displays reminiscent of natural history museums invite close inspection, revealing minutiae of these tiny, diverse animals, our native bees. From thumb-sized to extremely tiny, fuzzy to hairless, black, yellow, red or emerald green, each native bee tells a story while her actions create the fruits of pollination, reflecting the perpetual dance of animals, plants and planet. With a special appearance by Toronto’s official bee, the jewelled green sweat bee, Agapostemon virescens!
These are a Few of Our Favourite Bees Collective are: Sarah Peebles, Ele Willoughby, Rob Cruickshank & Stephen Humphrey
These are a Few of Our Favourite Bees
Sarah Peebles, Ele Willoughby, Rob Cruickshank & Stephen Humphrey
paper, relief print, video projection, audio, audio cable, mixed media
Bee specimens & bee barcodes generously provided by Laurence Packer – Packer Lab, York University; Scott MacIvor – BUGS Lab, U-T [University of Toronto] Scarborough; Sam Droege – USGS [US Geological Survey]; Barcode of Life Data Systems; Antonia Guidotti, Department of Natural History, Royal Ontario Museum
In addition to watching television, animals have been known to interact with touchscreen computers as mentioned in my June 24, 2016 posting, “Animal technology: a touchscreen for your dog, sonar lunch orders for dolphins, and more.”
In May, my crabapple tree blooms. In August, I pick the ripe crabapples. In September, I make jelly. Then I have breakfast. This would not be without a bee.
It could not be without a bee. The fruit and vegetables I enjoy eating, as well as the roses I admire as centrepieces, all depend on pollination.
Our native pollinators and their habitat are threatened. Insect populations are declining due to habitat loss, pesticide use, disease and climate change. 75% of flowering plants rely on pollinators to set seed and we humans get one-third of our food from flowering plants.
I invite you to enter this beautiful dining room and consider the importance of pollinators to the enjoyment of your next meal.
Tracey Lawko employs contemporary textile techniques to showcase changes in our environment. Building on a base of traditional hand-embroidery, free-motion longarm stitching and a love of drawing, her representational work is detailed and “drawn with thread”. Her nature studies draw attention to our native pollinators as she observes them around her studio in the Niagara Escarpment. Many are stitched using a centuries-old, three-dimensional technique called “Stumpwork”.
Tracey’s extensive exhibition history includes solo exhibitions at leading commercial galleries and public museums. Her work has been selected for major North American and International exhibitions, including the Concours International des Mini-Textiles, Musée Jean Lurçat, France, and is held in the permanent collection of the US National Quilt Museum and in private collections in North America and Europe.
Toronto’s ArtSci Salon has been hosting a series of events and exhibitions about COVID-19 and other health care issues under the “Who Cares?” banner. The exhibitions and events are now coming to an end (see my February 9, 2022 posting for a full listing).
A March 29, 2022 Art/Sci Salon announcement (received via email) heralds the last roundtable event (see my March 7, 2022 posting for more about the Who Cares? roundtables), Note: This is an online event,
Bayo Akomolafe
Seema Yasmin
Of Health Myths and Trickster Viruses
Friday, April 1, 5:00-7:00 pm [ET]
Des mythes sur la santé et des virus trompeurs
Le Vendredi 1 avril, de 17H à 19H
A conversation on the unsettling dimensions of epidemics and the complexities of responses to their challenges. ~ Une conversation sur les dimensions troublantes des épidémies et la complexité des réponses à leurs défis.
Seema Yasmin, Director of Research and Education, Stanford Health Communication Initiative. She is an Emmy Award-winning journalist, Pulitzer prize finalist, medical doctor and Stanford and UCLA professor.
Bayo Akomolafe Chief Curator, The Emergence Network. He is a widely celebrated international speaker, posthumanist thinker, poet, teacher, public intellectual, essayist, and author ~
Seema Yasmin, Director of Research and Education, Stanford Health Communication Initiative. Elle est une journaliste lauréate d’un Emmy Award, finaliste du prix Pulitzer, médecin et professeure à Stanford et UCLA.
Bayo Akomolafe, Chief Curator, The Emergence Network. Il est un conférencier international très célèbre, un penseur posthumaniste, un poète, un enseignant, un intellectuel public, un essayiste et un auteur.
Here are the acknowledgements,
“Who Cares?” is a Speaker Series dedicated to fostering transdisciplinary conversations between doctors, writers, artists, and researchers on contemporary biopolitics of care and the urgent need to move towards more respectful, creative, and inclusive social practices of care in the wake of the systemic cracks made obvious by the pandemic.
We wish to thank/nous remercions the generous support of the Social Science and Humanities Research Council of Canada; New College at the University of Toronto and The Faculty of Liberal Arts and Professional Studies at York University; the Centre for Feminist Research, Sensorium Centre for Digital Arts and Technology, The Canadian Language Museum, and the Departments of English and the School of Gender and Women’s Studies at York University; and the D.G. Ivey Library and the Institute for the History and Philosophy of Science and Technology at the University of Toronto. We also wish to acknowledge the support of The Fields Institute for Research in Mathematical Sciences.
This series is co-produced in collaboration with the ArtSci Salon
The Who Cares? series webpage, found here, lists the exhibitions and final events,