
Poinsettia frogs and a Merry 2023 Christmas

I stumbled across this image in a December 20, 2023 article by Dorothy Woodend for The Tyee where she is the culture editor,

Instead of new material goods this holiday season, I’m searching for something more elusive and ultimately sustaining. And it may help us grow our appreciation for the natural world and its mysteries. Illustrations for The Tyee by Dorothy Woodend.

À propos, given the name of this blog and the time of year. Thank you, Ms. Woodend!

I try not to do too many of these stories since the focus for this blog is new and emerging science and technology but I can’t resist including these frog stories (and one dog story). Plus, there may be some tap dancing.

A new (!) fanged frog in Indonesia

This is not the tiny Indonesian fanged frog but it does show you what a fanged frog looks like, from the December 21, 2023 “What Are Fanged Frogs?” posting on the Vajiram and Ravi IAS Study Center website,

Not an Indonesian fanged frog. h/t Vajiram and Ravi IAS Study Center [downloaded from https://vajiramias.com/current-affairs/what-are-fanged-frogs/658416a9f0e178517404afda/]

If you don’t have much time and are interested in the latest fanged frog, check out the December 21, 2023 “What Are Fanged Frogs?” posting as they have relevant information in bullet point form.

On to the specifics about the ‘new’ fanged frog from a December 21, 2023 news item on ScienceDaily,

In general, frogs’ teeth aren’t anything to write home about — they look like pointy little pinpricks lining the upper jaw. But one group of stream-dwelling frogs in Southeast Asia has a strange adaptation: two bony “fangs” jutting out of their lower jawbone. They use these fangs to battle with each other over territory and mates, and sometimes even to hunt tough-shelled prey like giant centipedes and crabs. In a new study, published in the journal PLOS [Public Library of Science] ONE, researchers have described a new species of fanged frog: the smallest one ever discovered.

“This new species is tiny compared to other fanged frogs on the island where it was found, about the size of a quarter,” says Jeff Frederick, a postdoctoral researcher at the Field Museum in Chicago and the study’s lead author, who conducted the research as a doctoral candidate at the University of California, Berkeley.

A December 20, 2023 Field Museum news release (also on EurekAlert), which originated the news item, adds more detail,

“Many frogs in this genus are giant, weighing up to two pounds. At the large end, this new species weighs about the same as a dime.”

In collaboration with the Bogor Zoology Museum, a team from the McGuire Lab at Berkeley found the frogs on Sulawesi, a rugged, mountainous island that makes up part of Indonesia. “It’s a giant island with a vast network of mountains, volcanoes, lowland rainforest, and cloud forests up in the mountains. The presence of all these different habitats mean that the magnitude of biodiversity across many plants and animals we find there is unreal – rivaling places like the Amazon,” says Frederick.

While trekking through the jungle, members of the joint US-Indonesia amphibian and reptile research team noticed something unexpected on the leaves of tree saplings and moss-covered boulders: nests of frog eggs.

Frogs are amphibians, and they lay eggs that are encapsulated by jelly, rather than a hard, protective shell. To keep their eggs from drying out, most amphibians lay their eggs in water. To the research team’s surprise, they kept spotting the terrestrial egg masses on leaves and mossy boulders several feet above the ground. Shortly after, they began to see the small, brown frogs themselves.

“Normally when we’re looking for frogs, we’re scanning the margins of stream banks or wading through streams to spot them directly in the water,” Frederick says. “After repeatedly monitoring the nests though, the team started to find attending frogs sitting on leaves hugging their little nests.” This close contact with their eggs allows the frog parents to coat the eggs with compounds that keep them moist and free from bacterial and fungal contamination.

Closer examination of the amphibian parents revealed not only that they were tiny members of the fanged frog family, complete with barely-visible fangs, but that the frogs caring for the clutches of eggs were all male. “Male egg guarding behavior isn’t totally unknown across all frogs, but it’s rather uncommon,” says Frederick.

Frederick and his colleagues hypothesize that the frogs’ unusual reproductive behaviors might also relate to their smaller-than-usual fangs. Some of the frogs’ relatives have bigger fangs, which help them ward off competition for spots along the river to lay their eggs in the water. Since these frogs evolved a way to lay their eggs away from the water, they may have lost the need for such big imposing fangs. (The scientific name for the new species is Limnonectes phyllofolia; phyllofolia means “leaf-nester.”)

“It’s fascinating that on every subsequent expedition to Sulawesi, we’re still discovering new and diverse reproductive modes,” says Frederick. “Our findings also underscore the importance of conserving these very special tropical habitats. Most of the animals that live in places like Sulawesi are quite unique, and habitat destruction is an ever-looming conservation issue for preserving the hyper-diversity of species we find there. Learning about animals like these frogs that are found nowhere else on Earth helps make the case for protecting these valuable ecosystems.”

Here’s a link to and a citation for the paper,

A new species of terrestrially-nesting fanged frog (Anura: Dicroglossidae) from Sulawesi Island, Indonesia by Jeffrey H. Frederick, Djoko T. Iskandar, Awal Riyanto, Amir Hamidy, Sean B. Reilly, Alexander L. Stubbs, Luke M. Bloch, Bryan Bach, Jimmy A. McGuire. PLOS ONE 18(12): e0292598 DOI: https://doi.org/10.1371/journal.pone.0292598 Published: December 20, 2023

This paper is open access and online only.

Fatal attraction to … frog noses?

Bob Yirka in a November 28, 2023 article published on phys.org describes research into some unusual mosquito behaviour, Note: Links have been removed,

A pair of environmental and life scientists, one with the University of Newcastle, in Australia, the other with the German Center for Integrative Biodiversity Research, has found that one species of mosquito native to Australia targets only the noses of frogs for feeding. In their paper published in the journal Ethology, John Gould and Jose Valdez describe their three-year study of frogs and Mimomyia elegans, a species of mosquito native to Australia.

As part of their study of frogs living in a pond on Kooragang Island, the pair took a lot of photographs of the amphibians in their native environment. It was upon returning to their lab and laying out the photographs that they noticed something unique—any mosquito feeding on a frog’s blood was always atop its nose. This spot, they noted, seemed precarious, as mosquitos are part of the frog diet.

A mosquito perches on the nose of a green and yellow frog perched on a branch.
A species of Australian mosquito, Mimomyia elegans, appears to have a predilection for the nostrils of tree frogs, according to new observations published in the journal Ethology. (John Gould) [downloaded from https://www.cbc.ca/radio/asithappens/mosquitoes-on-frog-noses-1.7058168]

Sheena Goodyear posted a December 13, 2023 article containing an embedded Canadian Broadcasting Corporation (CBC) As It Happens radio programme audio file of an interview with researcher John Gould, Note: A link has been removed,

So why risk landing on the nose of something that wants to eat you, when there are so many other targets walking around full of delicious blood?

“In all of the occasions that we observed, it seems as if the frog didn’t realize that it had a mosquito on top of it…. They were actually quite happy, just sitting idly, while these mosquitoes were feeding on them,” Gould said.

“So it might be that the area between the eyes is a bit of a blind spot for the frogs.”

It’s also something of a sneak attack by the mosquitoes.

“Some of the mosquitoes first initially landed on the backs of the frogs,” Gould said. “They might avoid being eaten by the frogs by landing away from the head and then walking up to the nostrils to feed.”

It’s a plausible theory, says amphibian expert Lea Randall, a Calgary Zoo and Wilder Institute ecologist who wasn’t involved in the research. 

“Frogs have amazing vision, and any mosquito that approached from the front would likely end up as a tasty snack for a frog,” she said.

“Landing on the back and making your way undetected to the nostrils is a good strategy.”

And the reward may just be worth the risk. 

“I could also see the nostrils as being a good place to feed as the skin is very thin and highly vascularized, and thus provides a ready source of blood for a hungry mosquito,” Randall said.

Gould admits his friends and loved ones have likely grown weary of hearing him “talking about frogs and nostrils.” But for him, it’s more than a highly specific scientific obsession; it’s about protecting frogs.

His earlier research has suggested that mosquitoes may be a vector for transmitting amphibian chytrid fungus, which is responsible for declines in frog populations worldwide. 

That’s why he had been amassing photos of frogs and mosquitoes in the first place.

“Now that we know where the mosquito is more likely to land, it might give us a better impression about how the infection spreads along the skin of the frog,” he said.

But more work needs to be done. His frog nostril research, while it encompasses three years of fieldwork, is a natural history observation, not a laboratory study with controlled variables.

“It would be quite interesting to know whether this particular type of mosquito is transferring the chytrid fungus, and also how the fungus spreads once the mosquito has landed,” Gould said.

A man in a bright yellow jacket and a light strapped to his forehead poses outside at night with a tiny frog perched on his hand.
Gould describes himself as a ‘vampire scientist’ who stays up all night studying nocturnal tree frogs in Australia. ‘They’re so soft and timid a lot of the times,’ he said. ‘They’re quite a special little, little animal.’ (Submitted by John Gould)

Vampire scientist, eh? You can find the embedded 6 mins. 28 secs. audio file in the December 13, 2023 article on the CBC website.

Here’s a link to and a citation for the research paper,

A little on the nose: A mosquito targets the nostrils of tree frogs for a blood meal by John Gould, Jose W. Valdez. Ethology DOI: https://doi.org/10.1111/eth.13424 First published: 21 November 2023

This paper is open access.

Gifted dogs

Caption: Shira, a 6-year-old female Border Collie mix who was rescued at a young age. She lives in New Jersey and knows the names of 125 toys. Photo credit: Tres Hanley-Millman

A December 14, 2023 news item on phys.org describes some intriguing research from Hungary,

All dog owners think that their pups are special. Science now has documented that some rare dogs are even more special. They have a talent for learning hundreds of names of dog toys. Due to the extreme rarity of this phenomenon, until recently, very little was known about these dogs, as most of the studies that documented this ability included only a small sample of one or two dogs.

A December 18, 2023 Eötvös Loránd University (ELTE) press release (also on EurekAlert but published December 14, 2023), which originated the news item, delves further into the research,

In a previous study, the scientists found that only very few dogs could learn the names of objects, mostly dog toys. The researchers wanted to understand this phenomenon better and so they needed to find more dogs with this ability. But finding dogs with this rare talent was a challenge! For five years, the researchers tirelessly searched across the world for these unique Gifted Word Learner (GWL) dogs. As part of this search, in 2020, they launched a social media campaign and broadcasted their experiments with GWL dogs, in the hope of finding more GWL dogs.

“This was a citizen science project” explains Dr. Claudia Fugazza, team leader. “When a dog owner told us they thought their dog knew toy names, we gave them instructions on how to self-test their dog and asked them to send us the video of the test”. The researchers then held an online meeting with the owners to test the dog’s vocabulary under controlled conditions and, if the dog showed he knew the names of his toys, the researchers asked the owners to fill out a questionnaire. “In the questionnaire, we asked the owners about their dog’s life experience, their own experience in raising and training dogs, and about the process by which the dog came to learn the names of his/her toys” explains Dr. Andrea Sommese, co-author.

VIDEO ABSTRACT ABOUT THE RESEARCH

The researchers found 41 dogs from 9 different countries: the US, the UK, Brazil, Canada, Norway, Netherlands, Spain, Portugal and Hungary. Most of the previous studies on this topic included Border collies. So, while object label learning is very rare even in Border collies, it was not surprising that many of the dogs participating in the current study (56%) belonged to this breed. However, the study documented the ability to learn toy names in a few dogs from non-working breeds, such as two Pomeranians, one Pekingese, one Shih Tzu, a Corgi, a Poodle, and a few mixed breeds.

“Surprisingly, most owners reported that they did not intentionally teach their dogs toy names, but rather that the dogs just seemed to spontaneously pick up the toy names during unstructured play sessions,” says Shany Dror, lead researcher. In addition, the vast majority of owners participating in the study had no professional background in dog training and the researchers found no correlations between the owners’ level of experience in handling and training dogs, and the dogs’ ability to select the correct toys when hearing their names.

“In our previous studies we have shown that GWL dogs learn new object names very fast” explains Dror. “So, it is not surprising that when we conducted the test with the dogs, the average number of toys known by the dogs was 29, but when we published the results, more than 50% of the owners reported that their dogs had already acquired a vocabulary of over 100 toy names”.

“Because GWL dogs are so rare, until now there were only anecdotes about their background” explains Prof. Adam Miklósi, Head of the Ethology Department at ELTE and co-author. “The rare ability to learn object names is the first documented case of talent in a non-human species. The relatively large sample of dogs documented in this study, helps us to identify the common characteristics that are shared among these dogs, and brings us one step closer in the quest of understanding their unique ability”.

This research is part of the Genius Dog Challenge research project which aims to understand the unique talent that Gifted Word Learner dogs have. The researchers encourage dog owners who believe their dogs know multiple toy names, to contact them via the Genius Dog Challenge website.

Here’s a link to and a citation for the research paper,

A citizen science model turns anecdotes into evidence by revealing similar characteristics among Gifted Word Learner dogs by Shany Dror, Ádám Miklósi, Andrea Sommese & Claudia Fugazza. Scientific Reports volume 13, Article number: 21747 (2023) DOI: https://doi.org/10.1038/s41598-023-47864-5 Published: 14 December 2023

This paper is open access.

The End, with an origin story for NORAD’s Santa Tracker and some tap dancing

At the height of Cold War tensions between the US and the Soviet Union, the red phone (to be used only by the US president or a four-star general) rang at the North American Aerospace Defense Command (NORAD). Before the conversation ended, the colonel in charge had driven a child to tears and put in motion the start of a beloved Christmas tradition.

There’s a short version and a long version and if you want all the details read both,

As for the tap dancing, I have three links:

1. “Irish Dancers Face Off Against American Tap Dancers To Deliver EPIC Performance!” is an embedded 8 mins. dance-off video (scroll down past a few paragraphs) in Erin Perri’s September 1, 2017 posting for themix.net. And, if you scroll further down to the bottom of Perri’s post, you’ll see an embedded video of Sammy Davis Jr.

In the video …, along with his dad and uncle, Sammy performs at an unbelievable pace. In the last 30 seconds of this routine, Sammy demonstrates more talent than other dancers are able to cram into a lifelong career! You can see these three were breakdancing long before it became a thing in the 1980s and they did it wearing tap shoes!

…

2. “Legendary Nicholas Brothers Dance Routine Was Unrehearsed and Filmed in One Take” embedded at the end of Emma Taggart’s October 4, 2019 posting on mymodernmet.com

3. Finally, there’s “Jill Biden releases extravagant dance video to celebrate Christmas at the White House” with a video file embedded (wait for it to finish loading and scroll down a few paragraphs) in Kate Fowler’s December 15, (?) 2023 article for MSN. It’s a little jazz, a little tap, and a little Christmas joy.

Joyeux Noël!

Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (2 of 2): Meditations

Dear friend,

I thought it best to break this up a bit. There are a couple of ‘objects’ still to be discussed but this is mostly the commentary part of this letter to you. (Here’s a link for anyone who stumbled here but missed Part 1.)

Ethics, the natural world, social justice, eeek, and AI

Dorothy Woodend, in her March 10, 2022 review for The Tyee, suggests some ethical issues in her critique of the ‘bee/AI collaboration’ and she’s not the only one with concerns. UNESCO (United Nations Educational, Scientific and Cultural Organization) has produced global recommendations for ethical AI (see my March 18, 2022 posting). More recently, there’s “Racist and sexist robots have flawed AI,” a June 23, 2022 posting, where researchers prepared a conference presentation and paper about deeply flawed AI still being used in robots.

Ultimately, the focus is always on humans and Woodend has extended the ethical AI conversation to include insects and the natural world. In short, something less human-centric.

My friend, this reference to the de Young exhibit may seem off topic but I promise it isn’t in more ways than one. The de Young Museum in San Francisco also held an AI and art show, “Uncanny Valley: Being Human in the Age of AI” (February 22, 2020 – June 27, 2021), from the exhibitions page,

In today’s AI-driven world, increasingly organized and shaped by algorithms that track, collect, and evaluate our data, the question of what it means to be human [emphasis mine] has shifted. Uncanny Valley is the first major exhibition to unpack this question through a lens of contemporary art and propose new ways of thinking about intelligence, nature, and artifice. [emphasis mine]

Courtesy: de Young Museum [downloaded from https://deyoung.famsf.org/exhibitions/uncanny-valley]

As you can see, it hinted (perhaps?) at an attempt to see beyond human-centric AI. (BTW, I featured this ‘Uncanny Valley’ show in my February 25, 2020 posting where I mentioned Stephanie Dinkins [featured below] and other artists.)

Social justice

While the VAG show doesn’t see much past humans and AI, it does touch on social justice. In particular there’s Pod 15 featuring the Algorithmic Justice League (AJL). The group “combine[s] art and research to illuminate the social implications and harms of AI” as per their website’s homepage.

In Pod 9, Stephanie Dinkins’ video work with a robot (Bina48), which was also part of the de Young Museum ‘Uncanny Valley’ show, addresses some of the same issues.

Still of Stephanie Dinkins, “Conversations with Bina48,” 2014–present. Courtesy of the artist [downloaded from https://deyoung.famsf.org/stephanie-dinkins-conversations-bina48-0]

From the de Young Museum’s Stephanie Dinkins “Conversations with Bina48” April 23, 2020 article by Janna Keegan (Dinkins submitted the same work you see at the VAG show), Note: Links have been removed,

Transdisciplinary artist and educator Stephanie Dinkins is concerned with fostering AI literacy. The central thesis of her social practice is that AI, the internet, and other data-based technologies disproportionately impact people of color, LGBTQ+ people, women, and disabled and economically disadvantaged communities—groups rarely given a voice in tech’s creation. Dinkins strives to forge a more equitable techno-future by generating AI that includes the voices of multiple constituencies …

The artist’s ongoing Conversations with Bina48 takes the form of a series of interactions with the social robot Bina48 (Breakthrough Intelligence via Neural Architecture, 48 exaflops per second). The machine is the brainchild of Martine Rothblatt, an entrepreneur in the field of biopharmaceuticals who, with her wife, Bina, cofounded the Terasem Movement, an organization that seeks to extend human life through cybernetic means. In 2007 Martine commissioned Hanson Robotics to create a robot whose appearance and consciousness simulate Bina’s. The robot was released in 2010, and Dinkins began her work with it in 2014.

Part psychoanalytical discourse, part Turing test, Conversations with Bina48 also participates in a larger dialogue regarding bias and representation in technology. Although Bina Rothblatt is a Black woman, Bina48 was not programmed with an understanding of its Black female identity or with knowledge of Black history. Dinkins’s work situates this omission amid the larger tech industry’s lack of diversity, drawing attention to the problems that arise when a roughly homogenous population creates technologies deployed globally. When this occurs, writes art critic Tess Thackara, “the unconscious biases of white developers proliferate on the internet, mapping our social structures and behaviors onto code and repeating imbalances and injustices that exist in the real world.” One of the most appalling and public of these instances occurred when a Google Photos image-recognition algorithm mislabeled the faces of Black people as “gorillas.”

Eeek

You will find, as you go through the ‘imitation game’, a pod with a screen showing your movements through the rooms in real time. The installation is called “Creepers” (2021-22). The student team from Vancouver’s Centre for Digital Media (CDM) describes their project this way, from the CDM’s AI-driven Installation Piece for the Vancouver Art Gallery webpage,

Project Description

Kaleidoscope [team name] is designing an installation piece that harnesses AI to collect and visualize exhibit visitor behaviours, and interactions with art, in an impactful and thought-provoking way.

There’s no warning that you’re being tracked and you can see they’ve used facial recognition software to track your movements through the show. It’s claimed on the pod’s signage that they are deleting the data once you’ve left.

‘Creepers’ is an interesting approach to the ethics of AI. The name suggests that even the student designers were aware it was problematic.

For the curious, there’s a description of the other VAG ‘imitation game’ installations provided by CDM students on the ‘Master of Digital Media Students Develop Revolutionary Installations for Vancouver Art Gallery AI Exhibition‘ webpage.

In recovery from an existential crisis (meditations)

There’s something greatly ambitious about “The Imitation Game: Visual Culture in the Age of Artificial Intelligence” and walking up the VAG’s grand staircase affirms that ambition. Bravo to the two curators, Grenville and Entis, for an exhibition that presents a survey (or overview) of artificial intelligence and its use in and impact on creative visual culture.

I’ve already enthused over the history (specifically Turing, Lovelace, Ovid), admitted to being mesmerized by Scott Eaton’s sculpture/AI videos, and confessed to a fascination (and mild repulsion) regarding Oxman’s honeycombs.

It’s hard to remember all of the ‘objects’ as the curators have offered a jumble of work, almost all of them on screens. Already noted, there’s Norbert Wiener’s The Moth (1949) and there are also a number of other computer-based artworks from the 1960s and 1970s. Plus, you’ll find works utilizing a GAN (generative adversarial network), an AI agent that is explained in the exhibit.

It’s worth going more than once to the show as there is so much to experience.

Why did they do that?

Dear friend, I’ve already commented on the poor flow through the show. It’s hard to tell if the curators intended the experience to be disorienting, but it is, to the point of chaos, especially when the exhibition is crowded.

I’ve seen Grenville’s shows before. In particular there was “MashUp: The Birth of Modern Culture, a massive survey documenting the emergence of a mode of creativity that materialized in the late 1800s and has grown to become the dominant model of cultural production in the 21st century” and there was “KRAZY! The Delirious World of Anime + Manga + Video Games + Art.” As you can see from the description, he pulls together disparate works and ideas into a show for you to ‘make sense’ of them.

One of the differences between those shows and “The Imitation Game: …” is that most of us have some familiarity, whether we like it or not, with modern art/culture and anime/manga/etc. and can try to ‘make sense’ of it.

By contrast, artificial intelligence (which even experts have difficulty defining) occupies an entirely different set of categories; all of them associated with science/technology. This makes for a different kind of show so the curators cannot rely on the audience’s understanding of basics. It’s effectively an art/sci or art/tech show and, I believe, the first of its kind at the Vancouver Art Gallery. Unfortunately, the curators don’t seem to have changed their approach to accommodate that difference.

AI is also at the centre of a current panic over job loss, loss of personal agency, automated racism and sexism, etc. which makes the experience of viewing the show a little tense. In this context, their decision to commission and use ‘Creepers’ seems odd.

Where were Ai-Da and DALL-E 2 and the others?

Oh friend, I was hoping for a robot. Those Roomba paintbots didn’t do much for me. All they did was lie there on the floor.

To be blunt, I wanted some fun and perhaps a bit of wonder and maybe a little vitality. I wasn’t necessarily expecting Ai-Da, an artistic robot, but something three-dimensional and fun in this very flat, screen-oriented show would have been nice.

Ai-Da was at the Glastonbury Festival in the UK from June 23 to 26, 2022. Here’s Ai-Da and her Billie Eilish (one of the Glastonbury 2022 headliners) portrait. [downloaded from https://www.ai-darobot.com/exhibition]

Ai-Da was first featured here in a December 17, 2021 posting about performing poetry that she had written in honour of the 700th anniversary of poet Dante Alighieri’s death.

Named in honour of Ada Lovelace, Ai-Da visited the 2022 Venice Biennale as Leah Henrickson and Simone Natale describe in their May 12, 2022 article for Fast Company (Note: Links have been removed),

Ai-Da sits behind a desk, paintbrush in hand. She looks up at the person posing for her, and then back down as she dabs another blob of paint onto the canvas. A lifelike portrait is taking shape. If you didn’t know a robot produced it, this portrait could pass as the work of a human artist.

Ai-Da is touted as the “first robot to paint like an artist,” and an exhibition of her work, called Leaping into the Metaverse, opened at the Venice Biennale.

Ai-Da produces portraits of sitting subjects using a robotic hand attached to her lifelike feminine figure. She’s also able to talk, giving detailed answers to questions about her artistic process and attitudes toward technology. She even gave a TEDx talk about “The Intersection of Art and AI” in Oxford a few years ago. While the words she speaks are programmed, Ai-Da’s creators have also been experimenting with having her write and perform her own poetry.

She has her own website.

If not Ai-Da, what about DALL-E 2? Aaron Hertzmann’s June 20, 2022 commentary, “Give this AI a few words of description and it produces a stunning image – but is it art?” investigates for Salon (Note: Links have been removed),

DALL-E 2 is a new neural network [AI] algorithm that creates a picture from a short phrase or sentence that you provide. The program, which was announced by the artificial intelligence research laboratory OpenAI in April 2022, hasn’t been released to the public. But a small and growing number of people – myself included – have been given access to experiment with it.

As a researcher studying the nexus of technology and art, I was keen to see how well the program worked. After hours of experimentation, it’s clear that DALL-E – while not without shortcomings – is leaps and bounds ahead of existing image generation technology. It raises immediate questions about how these technologies will change how art is made and consumed. It also raises questions about what it means to be creative when DALL-E 2 seems to automate so much of the creative process itself.

A July 4, 2022 article “DALL-E, Make Me Another Picasso, Please” by Laura Lane for The New Yorker has a rebuttal to Ada Lovelace’s contention that creativity is uniquely human (Note: A link has been removed),

“There was this belief that creativity is this deeply special, only-human thing,” Sam Altman, OpenAI’s C.E.O., explained the other day. Maybe not so true anymore, he said. Altman, who wore a gray sweater and had tousled brown hair, was videoconferencing from the company’s headquarters, in San Francisco. DALL-E is still in a testing phase. So far, OpenAI has granted access to a select group of people—researchers, artists, developers—who have used it to produce a wide array of images: photorealistic animals, bizarre mashups, punny collages. Asked by a user to generate “a plate of various alien fruits from another planet photograph,” DALL-E returned something kind of like rambutans. “The rest of mona lisa” is, according to DALL-E, mostly just one big cliff. Altman described DALL-E as “an extension of your own creativity.”

There are other AI artists; in my August 16, 2019 posting, I had this,

AI artists first hit my radar in August 2018 when Christie’s Auction House advertised an art auction of a ‘painting’ by an algorithm (artificial intelligence). There’s more in my August 31, 2018 posting but, briefly, a French art collective, Obvious, submitted a painting, “Portrait of Edmond de Belamy,” that was created by an artificial intelligence agent and was expected to sell for an estimated $7,000 – $10,000. They weren’t even close. According to Ian Bogost’s March 6, 2019 article for The Atlantic, the painting sold for $432,500 in October 2018.

That posting also included AI artist, AICAN. Both artist-AI agents (Obvious and AICAN) are based on GANs (generative adversarial networks) for learning and eventual output. Both artist-AI agents work independently or with human collaborators on art works that are available for purchase.
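(For anyone curious about what a GAN actually does under the hood, here’s a minimal, hypothetical sketch in Python using PyTorch. It is emphatically not the code behind Obvious or AICAN, whose systems are their own and far larger; the toy network sizes, the random stand-in ‘artworks’, and the training settings are my assumptions, included only to show the adversarial back-and-forth between a generator and a discriminator.)

```python
# Toy GAN sketch: a generator learns to produce samples that a discriminator
# cannot tell apart from "real" data. All sizes and data here are placeholders.
import torch
import torch.nn as nn

latent_dim, image_dim = 16, 64  # toy sizes; real art-making GANs are far larger

# Generator: maps random noise to a fake "image" vector
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                  nn.Linear(128, image_dim), nn.Tanh())
# Discriminator: scores whether an input looks real (1) or generated (0)
D = nn.Sequential(nn.Linear(image_dim, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

real_data = torch.randn(256, image_dim)  # stand-in for a dataset of artworks

for step in range(200):
    # Train the discriminator on a mix of real and generated samples
    real = real_data[torch.randint(0, 256, (32,))]
    fake = G(torch.randn(32, latent_dim)).detach()
    d_loss = (loss_fn(D(real), torch.ones(32, 1)) +
              loss_fn(D(fake), torch.zeros(32, 1)))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Train the generator to fool the discriminator
    fake = G(torch.randn(32, latent_dim))
    g_loss = loss_fn(D(fake), torch.ones(32, 1))  # generator wants D to say "real"
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```

Once the two networks have pushed each other along for a while, the generator alone can be sampled to produce new ‘works’ in the style of whatever it was trained on, which is the basic mechanism behind both the Belamy portrait and AICAN’s output.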

As might be expected, not everyone is excited about AI and visual art. Sonja Drimmer, Professor of Medieval Art, University of Massachusetts at Amherst, provides another perspective on AI, visual art and, her specialty, art history in her November 1, 2021 essay for The Conversation (Note: Links have been removed),

Over the past year alone, I’ve come across articles highlighting how artificial intelligence recovered a “secret” painting of a “lost lover” of Italian painter Modigliani, “brought to life” a “hidden Picasso nude”, “resurrected” Austrian painter Gustav Klimt’s destroyed works and “restored” portions of Rembrandt’s 1642 painting “The Night Watch.” The list goes on.

As an art historian, I’ve become increasingly concerned about the coverage and circulation of these projects.

They have not, in actuality, revealed one secret or solved a single mystery.

What they have done is generate feel-good stories about AI.

Take the reports about the Modigliani and Picasso paintings.

These were projects executed by the same company, Oxia Palus, which was founded not by art historians but by doctoral students in machine learning.

In both cases, Oxia Palus relied upon traditional X-rays, X-ray fluorescence and infrared imaging that had already been carried out and published years prior – work that had revealed preliminary paintings beneath the visible layer on the artists’ canvases.

The company edited these X-rays and reconstituted them as new works of art by applying a technique called “neural style transfer.” This is a sophisticated-sounding term for a program that breaks works of art down into extremely small units, extrapolates a style from them and then promises to recreate images of other content in that same style.
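(To make that description of “neural style transfer” a little more concrete for you, my friend, here’s a rough, hypothetical sketch in Python using PyTorch and torchvision. This is not Oxia Palus’s actual pipeline; the layer choices, loss weighting, and random stand-in images are my own assumptions, meant only to show the general mechanism: an image is optimized so its content features match one source while its style statistics, the Gram matrices of feature maps, match another.)

```python
# Rough neural style transfer sketch (Gatys-style). Everything here is illustrative;
# a real pipeline would load actual images and normalize them with ImageNet statistics.
import torch
import torchvision.models as models

vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def features(x, layers=(3, 8, 17, 26)):
    """Collect activations from a few convolutional layers of VGG-19."""
    feats = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            feats.append(x)
    return feats

def gram(f):
    """Summarize 'style' as correlations between feature channels."""
    _, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

content_img = torch.rand(1, 3, 224, 224)  # stand-ins for a photo and a painting
style_img = torch.rand(1, 3, 224, 224)
result = content_img.clone().requires_grad_(True)

target_content = features(content_img)[-1]
target_style = [gram(f) for f in features(style_img)]
opt = torch.optim.Adam([result], lr=0.02)

for step in range(100):
    feats = features(result)
    content_loss = torch.mean((feats[-1] - target_content) ** 2)
    style_loss = sum(torch.mean((gram(f) - g) ** 2)
                     for f, g in zip(feats, target_style))
    loss = content_loss + 1e4 * style_loss  # the weighting is an arbitrary choice here
    opt.zero_grad(); loss.backward(); opt.step()
```

In other words, the program doesn’t ‘discover’ anything hidden in the underlying painting; it just pushes pixels around until the statistics line up, which is exactly Drimmer’s point about the modest reality behind the headlines.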

As you can ‘see’, my friend, the topic of AI and visual art is a juicy one. In fact, I have another example in my June 27, 2022 posting, which is titled, “Art appraised by algorithm.” So, Grenville’s and Entis’ decision to focus on AI and its impact on visual culture is quite timely.

Visual culture: seeing into the future

The VAG Imitation Game webpage lists these categories of visual culture “animation, architecture, art, fashion, graphic design, urban design and video games …” as being represented in the show. Movies and visual art, not mentioned in the write-up, are represented while theatre and other performing arts are not mentioned or represented. That’s not a surprise.

In addition to an area of science/technology that’s not well understood even by experts, the curators took on the truly amorphous (and overwhelming) topic of visual culture. Given that even writing this commentary has been a challenge, I imagine pulling the show together was quite the task.

Grenville often grounds his shows in a history of the subject and, this time, it seems especially striking. You’re in a building that is effectively a 19th century construct and in galleries that reflect a 20th century ‘white cube’ aesthetic, while looking for clues into the 21st century future of visual culture employing technology that has its roots in the 19th century and, to some extent, began to flower in the mid-20th century.

Chung’s collaboration is one of the only ‘optimistic’ notes about the future and, as noted earlier, it bears a resemblance to Wiener’s 1949 ‘Moth’.

Overall, it seems we are being cautioned about the future. For example, Oxman’s work seems bleak (bees with no flowers to pollinate and living in an eternal spring). Adding in ‘Creepers’ and surveillance along with issues of bias and social injustice reflects hesitation and concern about what we will see, who sees it, and how it will be represented visually.

Learning about robots, automatons, artificial intelligence, and more

I wish the Vancouver Art Gallery (and Vancouver’s other art galleries) would invest a little more in audience education. A couple of tours during the week, by someone who may or may not know what they’re talking about, do not suffice. The extra material about Stephanie Dinkins and her work (“Conversations with Bina48,” 2014–present) came from the de Young Museum’s website. In my July 26, 2021 commentary on North Vancouver’s Polygon Gallery 2021 show “Interior Infinite,” I found background information for artist Zanele Muholi on the Tate Modern’s website. There is nothing on the VAG website that helps you to gain some perspective on the artists’ works.

It seems to me that if the VAG wants to be considered world class, it should conduct itself accordingly and beefing up its website with background information about their current shows would be a good place to start.

Robots, automata, and artificial intelligence

Prior to 1921, what we now call robots were known exclusively as automatons. These days, the word ‘automaton’ (or ‘automata’ in the plural) seems to be used to describe purely mechanical representations of humans from over 100 years ago, whereas the word ‘robot’ can be either ‘humanlike’ or purely machine, e.g. a mechanical arm that performs the same function over and over. There’s a good February 24, 2017 essay on automatons by Miguel Barral for OpenMind BBVA*, which provides some insight into the matter,

The concept of robot is relatively recent. The idea was introduced in 1921 by the Czech writer Karel Capek in his work R.U.R to designate a machine that performs tasks in place of man. But their predecessors, the automatons (from the Greek automata, or “mechanical device that works by itself”), have been the object of desire and fascination since antiquity. Some of the greatest inventors in history, such as Leonardo Da Vinci, have contributed to our fascination with these fabulous creations:

The Al-Jazari automatons

The earliest examples of known automatons appeared in the Islamic world in the 12th and 13th centuries. In 1206, the Arab polymath Al-Jazari, whose creations were known for their sophistication, described some of his most notable automatons: an automatic wine dispenser, a soap and towels dispenser and an orchestra-automaton that operated by the force of water. This latter invention was meant to liven up parties and banquets with music while floating on a pond, lake or fountain.

As the water flowed, it started a rotating drum with pegs that, in turn, moved levers whose movement produced different sounds and movements. As the pegs responsible for the musical notes could be exchanged for different ones in order to interpret another melody, it is considered one of the first programmable machines in history.

If you’re curious about automata, my friend, I found this Sept. 26, 2016 ABC News Radio news item about singer Roger Daltrey and his wife Heather’s auction of their collection of 19th-century French automata (there’s an embedded video showcasing these extraordinary works of art). For more about automata, robots, and androids, there’s an excellent May 4, 2022 article by James Vincent, ‘A visit to the human factory; How to build the world’s most realistic robot‘ for The Verge; Vincent’s article is about Engineered Arts, the UK-based company that built Ai-Da.

AI is often used interchangeably with ‘robot’ but they aren’t the same. Not all robots have AI integrated into their processes. At its simplest, AI is an algorithm or set of algorithms, which may ‘live’ in a CPU and be effectively invisible, or ‘live’ in or make use of some kind of machine and/or humanlike body. As the experts have noted, artificial intelligence is a slippery concept.

*OpenMind is a non-profit project run by Banco Bilbao Vizcaya Argentaria (BBVA), a Spanish multinational financial services company, to disseminate information on robotics and so much more (see their About us page).*

You can’t always get what you want

My friend,

I expect many of the show’s shortcomings (as perceived by me) are due to money and/or scheduling issues. For example, Ai-Da was at the Venice Biennale and if there was a choice between the VAG and Biennale, I know where I’d be.

Even with those caveats in mind, it is a bit surprising that there were no examples of wearable technology. For example, Toronto’s Tapestry Opera recently performed R.U.R. A Torrent of Light (based on the word ‘robot’ from Karel Čapek’s play, R.U.R., ‘Rossumovi Univerzální Roboti’), from my May 24, 2022 posting,

I have more about tickets prices, dates, and location later in this post but first, here’s more about the opera and the people who’ve created it from the Tapestry Opera’s ‘R.U.R. A Torrent of Light’ performance webpage,

“This stunning new opera combines dance, beautiful multimedia design, a chamber orchestra including 100 instruments creating a unique electronica-classical sound, and wearable technology [emphasis mine] created with OCAD University’s Social Body Lab, to create an immersive and unforgettable science-fiction experience.”

And, from later in my posting,

“Despite current stereotypes, opera was historically a launchpad for all kinds of applied design technologies. [emphasis mine] Having the opportunity to collaborate with OCAD U faculty is an invigorating way to reconnect to that tradition and foster connections between art, music and design, [emphasis mine]” comments the production’s Director Michael Hidetoshi Mori, who is also Tapestry Opera’s Artistic Director. 

That last quote brings me back to my comment about theatre and performing arts not being part of the show. Of course, the curators couldn’t do it all but a website with my hoped-for background and additional information could have helped to solve the problem.

The absence of the theatrical and performing arts in the VAG’s ‘Imitation Game’ is a bit surprising as the Council of Canadian Academies (CCA) in their third assessment, “Competing in a Global Innovation Economy: The Current State of R&D in Canada” released in 2018 noted this (from my April 12, 2018 posting),

Canada, relative to the world, specializes in subjects generally referred to as the humanities and social sciences (plus health and the environment), and does not specialize as much as others in areas traditionally referred to as the physical sciences and engineering. Specifically, Canada has comparatively high levels of research output in Psychology and Cognitive Sciences, Public Health and Health Services, Philosophy and Theology, Earth and Environmental Sciences, and Visual and Performing Arts. [emphasis mine] It accounts for more than 5% of world research in these fields. Conversely, Canada has lower research output than expected in Chemistry, Physics and Astronomy, Enabling and Strategic Technologies, Engineering, and Mathematics and Statistics. The comparatively low research output in core areas of the natural sciences and engineering is concerning, and could impair the flexibility of Canada’s research base, preventing research institutions and researchers from being able to pivot to tomorrow’s emerging research areas. [p. xix Print; p. 21 PDF]

US-centric

My friend,

I was a little surprised that the show was so centered on work from the US given that Grenville has curated at least one show where there was significant input from artists based in Asia. Both Japan and Korea are very active with regard to artificial intelligence and it’s hard to believe that their artists haven’t kept pace. (I’m not as familiar with China and its AI efforts, other than in the field of facial recognition, but it’s hard to believe their artists aren’t experimenting.)

The Americans, of course, are very important developers in the field of AI but they are not alone and it would have been nice to have seen something from Asia and/or Africa and/or something from one of the other Americas. In fact, anything which takes us out of the same old, same old. (Luba Elliott wrote this (2019/2020/2021?) essay, “Artificial Intelligence Art from Africa and Black Communities Worldwide” on Aya Data if you want to get a sense of some of the activity on the African continent. Elliott does seem to conflate Africa and Black communities; for some clarity you may want to check out the Wikipedia entry on Africanfuturism, which contrasts with this August 12, 2020 essay by Donald Maloba, “What is Afrofuturism? A Beginner’s Guide.” Maloba also conflates the two.)

As it turns out, Luba Elliott presented at the 2019 Montréal Digital Spring event, which brings me to Canada’s artificial intelligence and arts scene.

I promise I haven’t turned into a flag-waving zealot, my friend. It’s just odd there isn’t a bit more given that deep learning was pioneered at the University of Toronto. Here’s more about that (from the Wikipedia entry for Geoffrey Hinton),

Geoffrey Everest Hinton CC FRS FRSC[11] (born 6 December 1947) is a British-Canadian cognitive psychologist and computer scientist, most noted for his work on artificial neural networks.

Hinton received the 2018 Turing Award, together with Yoshua Bengio [Canadian scientist] and Yann LeCun, for their work on deep learning.[24] They are sometimes referred to as the “Godfathers of AI” and “Godfathers of Deep Learning“,[25][26] and have continued to give public talks together.[27][28]

Some of Hinton’s work was started in the US but since 1987, he has pursued his interests at the University of Toronto. He wasn’t proven right until 2012. Katrina Onstad’s 2018 article (Mr. Robot) for Toronto Life is a gripping read about Hinton and his work on neural networks. BTW, Yoshua Bengio (co-Godfather) is a Canadian scientist at the Université de Montréal and Yann LeCun (co-Godfather) is a French scientist at New York University.

Then there’s another contribution: our government was the first in the world to develop a national artificial intelligence strategy. Adding those developments to the CCA ‘State of Science’ report findings about visual arts and performing arts, is there another word besides ‘odd’ to describe the lack of Canadian voices?

You’re going to point out the installation by Ben Bogart (a member of Simon Fraser University’s Metacreation Lab for Creative AI and instructor at the Emily Carr University of Art + Design (ECU)) but it’s based on the iconic US scifi film, 2001: A Space Odyssey. As for the other Canadian, Sougwen Chung, she left Canada pretty quickly to get her undergraduate degree in the US and has since moved to the UK. (You could describe hers as the quintessential success story, i.e., moving from Canada only to get noticed here after success elsewhere.)

Of course, there are the CDM student projects but the projects seem less like an exploration of visual culture than an exploration of technology and industry requirements, from the ‘Master of Digital Media Students Develop Revolutionary Installations for Vancouver Art Gallery AI Exhibition‘ webpage, Note: A link has been removed,

In 2019, Bruce Grenville, Senior Curator at Vancouver Art Gallery, approached [the] Centre for Digital Media to collaborate on several industry projects for the forthcoming exhibition. Four student teams tackled the project briefs over the course of the next two years and produced award-winning installations that are on display until October 23 [2022].

Basically, my friend, it would have been nice to see other voices or, at the least, an attempt at representing other voices and visual cultures informed by AI. As for Canadian contributions, maybe put something on the VAG website?

Playing well with others

It’s always a mystery to me why the Vancouver cultural scene seems to consist of a set of silos or closely guarded kingdoms. Reaching out to the public library and other institutions such as Science World might have cost time but could have enhanced the show.

For example, one of the branches of the New York Public Library ran a programme called, “We are AI” in March 2022 (see my March 23, 2022 posting about the five-week course, which was run as a learning circle). The course materials are available for free (We are AI webpage) and I imagine that adding a ‘visual culture module’ wouldn’t be that difficult.

There is one (rare) example of Vancouver cultural institutions getting together to offer an art/science programme: in 2017, the Morris and Helen Belkin Gallery at the University of British Columbia (UBC) hosted an exhibition of Santiago Ramón y Cajal’s work (see my Sept. 11, 2017 posting about the gallery show). Alongside that show, the folks at Café Scientifique held an ancillary event at Science World featuring a panel of professionals from UBC’s Faculty of Medicine and Dept. of Psychology discussing Cajal’s work.

In fact, where were the science and technology communities for this show?

On a related note, the 2022 ACM SIGGRAPH conference (August 7 – 11, 2022) is being held in Vancouver. (ACM is the Association for Computing Machinery; SIGGRAPH is for Special Interest Group on Computer Graphics and Interactive Techniques.) SIGGRAPH has been holding conferences in Vancouver every few years since at least 2011.

At this year’s conference, they have at least two sessions that indicate interests similar to the VAG’s. First, there’s Immersive Visualization for Research, Science and Art, which includes AI and machine learning along with other related topics. There’s also Frontiers Talk: Art in the Age of AI: Can Computers Create Art?

This is both an international conference and an exhibition (of art) and the whole thing seems to have kicked off on July 25, 2022. If you’re interested, the programme can be found here and registration here.

Last time SIGGRAPH was here the organizers seemed interested in outreach and they offered some free events.

In the end

It was good to see the show. The curators brought together some exciting material. As is always the case, there were some missed opportunities and a few blind spots. But all is not lost.

On July 27, 2022, the VAG held a virtual event with an artist,

Gwenyth Chao to learn more about what happened to the honeybees and hives in Oxman’s Synthetic Apiary project. As a transdisciplinary artist herself, Chao will also discuss the relationship between art, science, technology and design. She will then guide participants to create a space (of any scale, from insect to human) inspired by patterns found in nature.

Hopefully there will be more events inspired by specific ‘objects’. Meanwhile, on August 12, 2022, the VAG is hosting,

… in partnership with the Canadian Music Centre BC, New Music at the Gallery is a live concert series hosted by the Vancouver Art Gallery that features an array of musicians and composers who draw on contemporary art themes.

Highlighting a selection of twentieth- and twenty-first-century music compositions, this second concert, inspired by the exhibition The Imitation Game: Visual Culture in the Age of Artificial Intelligence, will spotlight The Iliac Suite (1957), the first piece ever written using only a computer, and Kaija Saariaho’s Terra Memoria (2006), which is in a large part dependent on a computer-generated musical process.

It would be lovely if they could include an Ada Lovelace Day event. This is an international celebration held on October 11, 2022.

Do go. Do enjoy, my friend.

Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (1 of 2): The Objects

To my imaginary AI friend

Dear friend,

I thought you might be amused by these Roomba-like* paintbots at the Vancouver Art Gallery’s (VAG) latest exhibition, “The Imitation Game: Visual Culture in the Age of Artificial Intelligence” (March 5, 2022 – October 23, 2022).

Sougwen Chung, Omnia per Omnia, 2018, video (excerpt), Courtesy of the Artist

*A Roomba is a robot vacuum cleaner produced and sold by iRobot.

As far as I know, this is the Vancouver Art Gallery’s first art/science or art/technology exhibit and it is an alternately fascinating, exciting, and frustrating take on artificial intelligence and its impact on the visual arts. Curated by Bruce Grenville, VAG Senior Curator, and Glenn Entis, Guest Curator, the show features 20 ‘objects’ designed to both introduce viewers to the ‘imitation game’ and to challenge them. From the VAG Imitation Game webpage,

The Imitation Game surveys the extraordinary uses (and abuses) of artificial intelligence (AI) in the production of modern and contemporary visual culture around the world. The exhibition follows a chronological narrative that first examines the development of artificial intelligence, from the 1950s to the present [emphasis mine], through a precise historical lens. Building on this foundation, it emphasizes the explosive growth of AI across disciplines, including animation, architecture, art, fashion, graphic design, urban design and video games, over the past decade. Revolving around the important roles of machine learning and computer vision in AI research and experimentation, The Imitation Game reveals the complex nature of this new tool and demonstrates its importance for cultural production.

And now …

As you’ve probably guessed, my friend, you’ll find a combination of both background information and commentary on the show.

I’ve initially focused on two people (a scientist and a mathematician) who were seminal thinkers about machines, intelligence, creativity, and humanity. I’ve also provided some information about the curators, which hopefully gives you some insight into the show.

As for the show itself, you’ll find a few of the ‘objects’ highlighted with one of them being investigated at more length. The curators devoted some of the show to ethical and social justice issues; accordingly, the Vancouver Art Gallery hosted the University of British Columbia’s “Speculative Futures: Artificial Intelligence Symposium” on April 7, 2022,

Presented in conjunction with the exhibition The Imitation Game: Visual Culture in the Age of Artificial Intelligence, the Speculative Futures Symposium examines artificial intelligence and the specific uses of technology in its multifarious dimensions. Across four different panel conversations, leading thinkers of today will explore the ethical implications of technology and discuss how they are working to address these issues in cultural production.

So, you’ll find more on these topics here too.

And for anyone else reading this (not you, my friend who is ‘strong’ AI and not similar to the ‘weak’ AI found in this show), there is a description of ‘weak’ and ‘strong’ AI on the avtsim.com/weak-ai-strong-ai webpage, Note: A link has been removed,

There are two types of AI: weak AI and strong AI.

Weak, sometimes called narrow, AI is less intelligent as it cannot work without human interaction and focuses on a more narrow, specific, or niched purpose. …

Strong AI on the other hand is in fact comparable to the fictitious AIs we see in media like the terminator. The theoretical Strong AI would be equivalent or greater to human intelligence.

….

My dear friend, I hope you will enjoy.

The Imitation Game and ‘mad, bad, and dangerous to know’

In some circles, it’s better known as ‘The Turing Test’; the Vancouver Art Gallery’s ‘Imitation Game’ hosts a copy of Alan Turing’s foundational paper for establishing whether artificial intelligence is possible (I thought this was pretty exciting).

Here’s more from The Turing Test essay by Graham Oppy and David Dowe for the Stanford Encyclopedia of Philosophy,

The phrase “The Turing Test” is most properly used to refer to a proposal made by Turing (1950) as a way of dealing with the question whether machines can think. According to Turing, the question whether machines can think is itself “too meaningless” to deserve discussion (442). However, if we consider the more precise—and somehow related—question whether a digital computer can do well in a certain kind of game that Turing describes (“The Imitation Game”), then—at least in Turing’s eyes—we do have a question that admits of precise discussion. Moreover, as we shall see, Turing himself thought that it would not be too long before we did have digital computers that could “do well” in the Imitation Game.

The phrase “The Turing Test” is sometimes used more generally to refer to some kinds of behavioural tests for the presence of mind, or thought, or intelligence in putatively minded entities. …

Next to the display holding Turing’s paper is another display with an excerpt from Turing about how he believed Ada Lovelace would have responded to the idea that machines could think, based on a copy of some of her writing (also on display). She proposed that creativity, not thinking, is what sets people apart from machines. (See the April 17, 2020 article “Thinking Machines? Has the Lovelace Test Been Passed?” on mindmatters.ai.)

It’s like a dialogue between two seminal thinkers who lived about 100 years apart; Lovelace, born in 1815 and dead in 1852, and Turing, born in 1912 and dead in 1954. Both have fascinating back stories (more about those later) and both played roles in how computers and artificial intelligence are viewed.

Adding some interest to this walk down memory lane is a 3rd display, an illustration of the ‘Mechanical Turk‘, a chess playing machine that made the rounds in Europe from 1770 until it was destroyed in 1854. A hoax that fooled people for quite a while, it is a reminder that we’ve been interested in intelligent machines for centuries. (Friend, Turing and Lovelace and the Mechanical Turk are found in Pod 1.)

Back story: Turing and the apple

Turing is credited with being instrumental in breaking the German ENIGMA code during World War II and helping to end the war. I find it odd that he ended up at the University of Manchester in the post-war years; one would expect him to have been at Oxford or Cambridge. At any rate, he died in 1954 of cyanide poisoning, two years after he was arrested for being homosexual and convicted of gross indecency. Given the choice of incarceration or chemical castration, he chose the latter. There is, to this day, debate about whether or not it was suicide. Here’s how his death is described in this Wikipedia entry (Note: Links have been removed),

On 8 June 1954, at his house at 43 Adlington Road, Wilmslow,[150] Turing’s housekeeper found him dead. He had died the previous day at the age of 41. Cyanide poisoning was established as the cause of death.[151] When his body was discovered, an apple lay half-eaten beside his bed, and although the apple was not tested for cyanide,[152] it was speculated that this was the means by which Turing had consumed a fatal dose. An inquest determined that he had committed suicide. Andrew Hodges and another biographer, David Leavitt, have both speculated that Turing was re-enacting a scene from the Walt Disney film Snow White and the Seven Dwarfs (1937), his favourite fairy tale. Both men noted that (in Leavitt’s words) he took “an especially keen pleasure in the scene where the Wicked Queen immerses her apple in the poisonous brew”.[153] Turing’s remains were cremated at Woking Crematorium on 12 June 1954,[154] and his ashes were scattered in the gardens of the crematorium, just as his father’s had been.[155]

Philosopher Jack Copeland has questioned various aspects of the coroner’s historical verdict. He suggested an alternative explanation for the cause of Turing’s death: the accidental inhalation of cyanide fumes from an apparatus used to electroplate gold onto spoons. The potassium cyanide was used to dissolve the gold. Turing had such an apparatus set up in his tiny spare room. Copeland noted that the autopsy findings were more consistent with inhalation than with ingestion of the poison. Turing also habitually ate an apple before going to bed, and it was not unusual for the apple to be discarded half-eaten.[156] Furthermore, Turing had reportedly borne his legal setbacks and hormone treatment (which had been discontinued a year previously) “with good humour” and had shown no sign of despondency prior to his death. He even set down a list of tasks that he intended to complete upon returning to his office after the holiday weekend.[156] Turing’s mother believed that the ingestion was accidental, resulting from her son’s careless storage of laboratory chemicals.[157] Biographer Andrew Hodges theorised that Turing arranged the delivery of the equipment to deliberately allow his mother plausible deniability with regard to any suicide claims.[158]

The US Central Intelligence Agency (CIA) also has an entry for Alan Turing, dated April 10, 2015; it’s titled “The Enigma of Alan Turing.”

Back story: Ada Byron Lovelace, the 2nd generation of ‘mad, bad, and dangerous to know’

Ada Lovelace, a mathematician and genius in her own right, was the daughter of George Gordon Byron, better known as the poet Lord Byron, who was notoriously described as ‘mad, bad, and dangerous to know’.

Lovelace too could have been ‘mad, bad, …’ but she is described less memorably as “… manipulative and aggressive, a drug addict, a gambler and an adulteress, …” as mentioned in my October 13, 2015 posting. It marked the 200th anniversary of her birth, which was celebrated with a British Broadcasting Corporation (BBC) documentary and an exhibit at the Science Museum in London, UK.

She belongs in the Vancouver Art Gallery’s show along with Alan Turing due to her prediction that computers could be made to create music. She also published the first computer program. Her feat is astonishing when you know that only one working model (1/7th of the proposed final size) of a computer was ever produced. (The machine invented by Charles Babbage was known as a difference engine. You can find out more about the Difference engine on Wikipedia and about Babbage’s proposed second invention, the Analytical engine.)

(Byron had almost nothing to do with his daughter although his reputation seems to have dogged her. You can find out more about Lord Byron here.)

AI and visual culture at the VAG: the curators

As mentioned earlier, the VAG’s “The Imitation Game: Visual Culture in the Age of Artificial Intelligence” show runs from March 5, 2022 – October 23, 2022. Twice now, I have been to this weirdly exciting and frustrating show.

Bruce Grenville, VAG Chief/Senior Curator, seems to specialize in pulling together diverse materials to illustrate ‘big’ topics. His profile for Emily Carr University of Art + Design (where Grenville teaches) mentions these shows,

… He has organized many thematic group exhibitions including, MashUp: The Birth of Modern Culture [emphasis mine], a massive survey documenting the emergence of a mode of creativity that materialized in the late 1800s and has grown to become the dominant model of cultural production in the 21st century; KRAZY! The Delirious World [emphasis mine] of Anime + Manga + Video Games + Art, a timely and important survey of modern and contemporary visual culture from around the world; Home and Away: Crossing Cultures on the Pacific Rim [emphasis mine] a look at the work of six artists from Vancouver, Beijing, Ho Chi Minh City, Seoul and Los Angeles, who share a history of emigration and diaspora. …

Glenn Entis, Guest Curator and founding faculty member of Vancouver’s Centre for Digital Media (CDM), is Grenville’s co-curator. From Entis’ CDM profile,

“… an Academy Award-winning animation pioneer and games industry veteran. The former CEO of Dreamworks Interactive, Glenn worked with Steven Spielberg and Jeffrey Katzenberg on a number of video games …,”

Steve Newton, in his March 4, 2022 preview, does a good job of describing the show, although I strongly disagree with the title of his article, which proclaims “The Vancouver Art Gallery takes a deep dive into artificial intelligence with The Imitation Game.” I think it’s more of a shallow dive meant to cover more distance than depth,

… The exhibition kicks off with an interactive introduction inviting visitors to actively identify diverse areas of cultural production influenced by AI.

“That was actually one of the pieces that we produced in collaboration with the Centre for Digital Media,” Grenville notes, “so we worked with some graduate-student teams that had actually helped us to design that software. It was the beginning of COVID when we started to design this, so we actually wanted a no-touch interactive. So, really, the idea was to say, ‘Okay, this is the very entrance to the exhibition, and artificial intelligence, this is something I’ve heard about, but I’m not really sure how it’s utilized in ways. But maybe I know something about architecture; maybe I know something about video games; maybe I know something about the history of film.

“So you point to these 10 categories of visual culture [emphasis mine]–video games, architecture, fashion design, graphic design, industrial design, urban design–so you point to one of those, and you might point to ‘film’, and then when you point at it that opens up into five different examples of what’s in the show, so it could be 2001: A Space Odyssey, or Bladerunner, or World on a Wire.”

After the exhibition’s introduction—which Grenville equates to “opening the door to your curiosity” about artificial intelligence–visitors encounter one of its main categories, Objects of Wonder, which speaks to the history of AI and the critical advances the technology has made over the years.

“So there are 20 Objects of Wonder [emphasis mine],” Grenville says, “which go from 1949 to 2022, and they kind of plot out the history of artificial intelligence over that period of time, focusing on a specific object. Like [mathematician and philosopher] Norbert Wiener made this cybernetic creature, he called it a ‘Moth’, in 1949. So there’s a section that looks at this idea of kind of using animals–well, machine animals–and thinking about cybernetics, this idea of communication as feedback, early thinking around neuroscience and how neuroscience starts to imagine this idea of a thinking machine.

And there’s this from Newton’s March 4, 2022 preview,

“It’s interesting,” Grenville ponders, “artificial intelligence is virtually unregulated. [emphasis mine] You know, if you think about the regulatory bodies that govern TV or radio or all the types of telecommunications, there’s no equivalent for artificial intelligence, which really doesn’t make any sense. And so what happens is, sometimes with the best intentions [emphasis mine]—sometimes not with the best intentions—choices are made about how artificial intelligence develops. So one of the big ones is facial-recognition software [emphasis mine], and any body-detection software that’s being utilized.

In addition to being the best overview of the show I’ve seen so far, this is the only one that gives a little insight into what the curators were thinking when they were developing it.

A deep dive into AI?

It was only while searching for a little information before the show that I realized I didn’t have any definitions for artificial intelligence! What is AI? Sadly, there are no definitions of AI in the exhibit.

It seems even experts don’t have a good definition. Take a look at this,

The definition of AI is fluid [emphasis mine] and reflects a constantly shifting landscape marked by technological advancements and growing areas of application. Indeed, it has frequently been observed that once AI becomes capable of solving a particular problem or accomplishing a certain task, it is often no longer considered to be “real” intelligence [emphasis mine] (Haenlein & Kaplan, 2019). A firm definition was not applied for this report [emphasis mine], given the variety of implementations described above. However, for the purposes of deliberation, the Panel chose to interpret AI as a collection of statistical and software techniques, as well as the associated data and the social context in which they evolve — this allows for a broader and more inclusive interpretation of AI technologies and forms of agency. The Panel uses the term AI interchangeably to describe various implementations of machine-assisted design and discovery, including those based on machine learning, deep learning, and reinforcement learning, except for specific examples where the choice of implementation is salient. [p. 6 print version; p. 34 PDF version]

The above is from the Leaps and Boundaries report released May 10, 2022 by the Council of Canadian Academies’ Expert Panel on Artificial Intelligence for Science and Engineering.

Sometimes a show will take you in an unexpected direction. I feel a lot better ‘not knowing’. Still, I wish the curators had acknowledged somewhere in the show that artificial intelligence is a slippery concept. Especially when you add in robots and automatons. (more about them later)

21st century technology in a 19th/20th century building

Void stairs inside the building. Completed in 1906, the building was later designated as a National Historic Site in 1980 [downloaded from https://en.wikipedia.org/wiki/Vancouver_Art_Gallery#cite_note-canen-7]

Just barely making it into the 20th century, the building where the Vancouver Art Gallery currently resides was for many years the provincial courthouse (1911 – 1978). In some ways, it’s a disconcerting setting for this show.

They’ve done their best to make the upstairs, where the exhibit is displayed, look like today’s galleries, with their ‘white cube aesthetic’ and strong resemblance to the scientific laboratories seen in movies.

(For more about the dominance, since the 1930s, of the ‘white cube aesthetic’ in art galleries around the world, see my July 26, 2021 posting; scroll down about 50% of the way.)

It makes for an interesting tension, the contrast between the grand staircase, the cupola, and other architectural elements and the sterile, ‘laboratory’ environment of the modern art gallery.

20 Objects of Wonder and the flow of the show

It was flummoxing. Where are the 20 objects? Why does it feel like a maze in a laboratory? Loved the bees, but why? Eeeek Creepers! What is visual culture anyway? Where am I?

The objects of the show

It turns out that the curators have a more refined concept for ‘object’ than I do. There weren’t 20 material objects, there were 20 numbered ‘pods’ with perhaps a screen or a couple of screens or a screen and a material object or two illustrating the pod’s topic.

Looking up a definition for the word (accessed from a June 9, 2022 duckduckgo.com search) yielded this (the second one seems à propos),

object (ŏb′jĭkt, -jĕkt″)

noun

1. Something perceptible by one or more of the senses, especially by vision or touch; a material thing.

2. A focus of attention, feeling, thought, or action.

3. A limiting factor that must be considered.

The American Heritage® Dictionary of the English Language, 5th Edition.

Each pod = a focus of attention.

The show’s flow is a maze. Am I a rat?

The pods are defined by a number and by temporary walls. So if you look up, you’ll see a number and a space partly enclosed by a temporary wall or two.

It’s a very choppy experience. For example, one minute you can be in pod 1 and, when you turn the corner, you’re in pod 4 or 5 or ? There are pods I’ve not seen, despite my two visits, because I kept losing my way. This led to an existential crisis on my second visit. “Had I missed the greater meaning of this show? Was there some sort of logic to how it was organized? Was there meaning to my life? Was I a rat being nudged around in a maze?” I didn’t know.

Thankfully, I have since recovered. But, I will return to my existential crisis later, with a special mention for “Creepers.”

The fascinating

My friend, you know I appreciated the history. In addition to Alan Turing, Ada Lovelace and the Mechanical Turk at the beginning of the show, they included a reference to Ovid (or Pūblius Ovidius Nāsō), a Roman poet who lived from 43 BCE to 17/18 CE, in one of the double-digit pods (17? or 10? or …) featuring a robot on screen. As to why Ovid might be included, this excerpt from a February 12, 2018 posting on the cosmolocal.org website provides a clue (Note: Links have been removed),

The University of King’s College [Halifax, Nova Scotia] presents Automatons! From Ovid to AI, a nine-lecture series examining the history, issues and relationships between humans, robots, and artificial intelligence [emphasis mine]. The series runs from January 10 to April 4 [2018], and features leading scholars, performers and critics from Canada, the US and Britain.

“Drawing from theatre, literature, art, science and philosophy, our 2018 King’s College Lecture Series features leading international authorities exploring our intimate relationships with machines,” says Dr. Gordon McOuat, professor in the King’s History of Science and Technology (HOST) and Contemporary Studies Programs.

“From the myths of Ovid [emphasis mine] and the automatons [emphasis mine] of the early modern period to the rise of robots, cyborgs, AI and artificial living things in the modern world, the 2018 King’s College Lecture Series examines the historical, cultural, scientific and philosophical place of automatons in our lives—and our future,” adds McOuat.

I loved the way the curators managed to integrate the historical roots for artificial intelligence and, by extension, the world of automatons, robots, cyborgs, and androids. Yes, starting the show with Alan Turing and Ada Lovelace could be expected, but Norbert Wiener’s Moth (1949) acts as a sort of preview for Sougwen Chung’s “Omnia per Omnia, 2018” (GIF seen at the beginning of this post). Take a look for yourself (from the cyberneticzoo.com September 19, 2009 posting by cyberne1). Do you see the similarity or am I the only one?

[sourced from Google images, Source: Life & downloaded from https://cyberneticzoo.com/cyberneticanimals/1949-wieners-moth-wiener-wiesner-singleton/]

Sculpture

This is the first time I’ve come across an AI/sculpture project. The VAG show features Scott Eaton’s sculptures on screens in a room devoted to his work.

Scott Eaton: Entangled II, 2019 4k video (still) Courtesy of the Artist [downloaded from https://www.vanartgallery.bc.ca/exhibitions/the-imitation-game]

This looks like an image of a piece of ginger root, and it’s fascinating to watch the process as the AI agent ‘evolves’ Eaton’s drawings into onscreen sculptures. It would have enhanced the experience if at least one of Eaton’s ‘evolved’ and physically realized sculptures had been present in the room, but perhaps there were financial and/or logistical reasons for the absence.

Both Chung and Eaton are collaborating with an AI agent. In Chung’s case, the AI is integrated into the paintbots she interacts with and paints alongside; in Eaton’s case, it’s via a computer screen. In both cases, the work is mildly hypnotizing in a way that reminds me of lava lamps.

One last note about Chung and her work: she was one of the artists invited to present new work at an invite-only April 22, 2022 Embodied Futures workshop at the “What will life become?” event held by the Berggruen Institute and the University of Southern California (USC),

Embodied Futures invites participants to imagine novel forms of life, mind, and being through artistic and intellectual provocations on April 22 [2022].

Beginning at 1 p.m., together we will experience the launch of five artworks commissioned by the Berggruen Institute. We asked these artists: How does your work inflect how we think about “the human” in relation to alternative “embodiments” such as machines, AIs, plants, animals, the planet, and possible alien life forms in the cosmos? [emphases mine]  Later in the afternoon, we will take provocations generated by the morning’s panels and the art premieres in small breakout groups that will sketch futures worlds, and lively entities that might dwell there, in 2049.

This leads to (and my friend, while I too am taking a shallow dive, for this bit I’m going a little deeper):

Bees and architecture

Neri Oxman’s contribution (Golden Bee Cube, Synthetic Apiary II [2020]) is an exhibit featuring three honeycomb structures and a video of the bees in her synthetic apiary.

Neri Oxman and the MIT Mediated Matter Group, Golden Bee Cube, Synthetic Apiary II, 2020, beeswax, acrylic, gold particles, gold powder Courtesy of Neri Oxman and the MIT Mediated Matter Group

Neri Oxman (then a faculty member of the Mediated Matter Group at the Massachusetts Institute of Technology) described the basis for the first and all other iterations of her synthetic apiary in Patrick Lynch’s October 5, 2016 article for ‘ArchDaily: Broadcasting Architecture Worldwide’ (Note: Links have been removed),

Designer and architect Neri Oxman and the Mediated Matter group have announced their latest design project: the Synthetic Apiary. Aimed at combating the massive bee colony losses that have occurred in recent years, the Synthetic Apiary explores the possibility of constructing controlled, indoor environments that would allow honeybee populations to thrive year-round.

“It is time that the inclusion of apiaries—natural or synthetic—for this “keystone species” be considered a basic requirement of any sustainability program,” says Oxman.

In developing the Synthetic Apiary, Mediated Matter studied the habits and needs of honeybees, determining the precise amounts of light, humidity and temperature required to simulate a perpetual spring environment. [emphasis mine] They then engineered an undisturbed space where bees are provided with synthetic pollen and sugared water and could be evaluated regularly for health.

In the initial experiment, the honeybees’ natural cycle proved to adapt to the new environment, as the Queen was able to successfully lay eggs in the apiary. The bees showed the ability to function normally in the environment, suggesting that natural cultivation in artificial spaces may be possible across scales, “from organism- to building-scale.”

“At the core of this project is the creation of an entirely synthetic environment enabling controlled, large-scale investigations of hives,” explain the designers.

Mediated Matter chose to research into honeybees not just because of their recent loss of habitat, but also because of their ability to work together to create their own architecture, [emphasis mine] a topic the group has explored in their ongoing research on biologically augmented digital fabrication, including employing silkworms to create objects and environments at product, architectural, and possibly urban, scales.

“The Synthetic Apiary bridges the organism- and building-scale by exploring a “keystone species”: bees. Many insect communities present collective behavior known as “swarming,” prioritizing group over individual survival, while constantly working to achieve common goals. Often, groups of these eusocial organisms leverage collaborative behavior for relatively large-scale construction. For example, ants create extremely complex networks by tunneling, wasps generate intricate paper nests with materials sourced from local areas, and bees deposit wax to build intricate hive structures.”

This January 19, 2022 article by Crown Honey for its eponymous blog updates Oxman’s work (Note 1: All emphases are mine; Note 2: A link has been removed),

Synthetic Apiary II investigates co-fabrication between humans and honey bees through the use of designed environments in which Apis mellifera colonies construct comb. These designed environments serve as a means by which to convey information to the colony. The comb that the bees construct within these environments comprises their response to the input information, enabling a form of communication through which we can begin to understand the hive’s collective actions from their perspective.

Some environments are embedded with chemical cues created through a novel pheromone 3D-printing process, while others generate magnetic fields of varying strength and direction. Others still contain geometries of varying complexity or designs that alter their form over time.

When offered wax augmented with synthetic biomarkers, bees appear to readily incorporate it into their construction process, likely due to the high energy cost of producing fresh wax. This suggests that comb construction is a responsive and dynamic process involving complex adaptations to perturbations from environmental stimuli, not merely a set of predefined behaviors building toward specific constructed forms. Each environment therefore acts as a signal that can be sent to the colony to initiate a process of co-fabrication.

Characterization of constructed comb morphology generally involves visual observation and physical measurements of structural features—methods which are limited in scale of analysis and blind to internal architecture. In contrast, the wax structures built by the colonies in Synthetic Apiary II are analyzed through high-throughput X-ray computed tomography (CT) scans that enable a more holistic digital reconstruction of the hive’s structure.

Geometric analysis of these forms provides information about the hive’s design process, preferences, and limitations when tied to the inputs, and thereby yields insights into the invisible mediations between bees and their environment.

Developing computational tools to learn from bees can facilitate the very beginnings of a dialogue with them. Refined by evolution over hundreds of thousands of years, their comb-building behaviors and social organizations may reveal new forms and methods of formation that can be applied across our human endeavors in architecture, design, engineering, and culture.

Further, with a basic understanding and language established, methods of co-fabrication together with bees may be developed, enabling the use of new biocompatible materials and the creation of more efficient structural geometries that modern technology alone cannot achieve.

In this way, we also move our built environment toward a more synergistic embodiment, able to be more seamlessly integrated into natural environments through material and form, even providing habitats of benefit to both humans and nonhumans. It is essential to our mutual survival for us to not only protect but moreover to empower these critical pollinators – whose intrinsic behaviors and ecosystems we have altered through our industrial processes and practices of human-centric design – to thrive without human intervention once again.

In order to design our way out of the environmental crisis that we ourselves created, we must first learn to speak nature’s language. …

The three (natural, gold nanoparticle, and silver nanoparticle) honeycombs in the exhibit are among the few physical objects in the show (the others being the historical documents and the paintbots with their canvasses), and they’re almost a relief after the parade of screens. It’s the accompanying video that’s eerie. Everything is in white, as befits a science laboratory, in this synthetic apiary where bees are fed sugar water and fooled into a spring that is eternal.

Courtesy: Massachusetts Institute of Technology Copyright: Mediated Matter [downloaded from https://www.media.mit.edu/projects/synthetic-apiary/overview/]

(You may want to check out Lynch’s October 5, 2016 article or Crown Honey’s January 19, 2022 article as both have embedded images and the Lynch article includes a Synthetic Apiary video. The image above is a still from the video.)

As I asked a friend, where are the flowers? Ron Miksha, a bee ecologist working at the University of Calgary, details some of the problems with Oxman’s Synthetic Apiary this way in his October 7, 2016 posting on his Bad Beekeeping Blog,

In a practical sense, the synthetic apiary fails on many fronts: Bees will survive a few months on concoctions of sugar syrup and substitute pollen, but they need a natural variety of amino acids and minerals to actually thrive. They need propolis and floral pollen. They need a ceiling 100 metres high and a 2-kilometre hallway if drone and queen will mate, or they’ll die after the old queen dies. They need an artificial sun that travels across the sky, otherwise, the bees will be attracted to artificial lights and won’t return to their hive. They need flowery meadows, fresh water, open skies. [emphasis mine] They need a better holodeck.

Dorothy Woodend’s March 10, 2022 review of the VAG show for The Tyee raises other issues with the bees and the honeycombs,

When AI messes about with other species, there is something even more unsettling about the process. American-Israeli artist Neri Oxman’s Golden Bee Cube, Synthetic Apiary II, 2020 uses real bees who are proffered silver and gold [nanoparticles] to create their comb structures. While the resulting hives are indeed beautiful, rendered in shades of burnished metal, there is a quality of unease imbued in them. Is the piece akin to apiary torture chambers? I wonder how the bees feel about this collaboration and whether they’d like to renegotiate the deal.

There’s no question the honeycombs are fascinating and disturbing, but I don’t understand how artificial intelligence was a key factor in either version of Oxman’s synthetic apiary. In the 2022 article by Crown Honey, there’s this: “Developing computational tools to learn from bees can facilitate the very beginnings of a dialogue with them [honeybees].” It’s probable that the computational tools being referenced include AI, and the Crown Honey article seems to suggest those tools are being used to analyze the bees’ behaviour after the fact.

Yes, I can imagine a future where ‘strong’ AI (such as you, my friend) is in ‘dialogue’ with the bees and making suggestions and running the experiments but it’s not clear that this is the case currently. The Oxman exhibit contribution would seem to be about the future and its possibilities whereas many of the other ‘objects’ concern the past and/or the present.

Friend, let’s take a break, shall we? Part 2 is coming up.

Night of ideas/Nuit des idées 2022: (Re)building Together on January 27, 2022 (7th edition in Canada)

Vancouver and other Canadian cities are participating in an international culture event, Night of ideas/Nuit des idées, organized by the French Institute (Institut de France), a French learned society first established in 1795 (during the French Revolution, which ran from 1789 to 1799 [Wikipedia entry]).

Before getting to the Canadian event, here’s more about the Night of Ideas from the event’s About Us page,

Initiated in 2016 during an exceptional evening that brought together in Paris foremost French and international thinkers invited to discuss the major issues of our time, the Night of Ideas has quickly become a fixture of the French and international agenda. Every year, on the last Thursday of January, the French Institute invites all cultural and educational institutions in France and on all five continents to celebrate the free flow of ideas and knowledge by offering, on the same evening, conferences, meetings, forums and round tables, as well as screenings, artistic performances and workshops, around a theme each one of them revisits in its own fashion.

“(Re)building together

For the 7th Night of Ideas, which will take place on 27 January 2022, the theme “(Re)building together” has been chosen to explore the resilience and reconstruction of societies faced with singular challenges, solidarity and cooperation between individuals, groups and states, the mobilisation of civil societies and the challenges of building and making our objects. This Nuit des Idées will also be marked by the beginning of the French Presidency of the Council of the European Union.

According to the About Us page, the 2021 event counted participants in 104 countries and 190 cities, with over 200 events.

The French embassy in Canada (Ambassade de France au Canada) has a Night of Ideas/Nuit des idées 2022 webpage listing the Canadian events (Note: The times are local, e.g., 5 pm in Ottawa),

Ottawa: (Re)building through the arts, together

Moncton: (Re)building Together: How should we (re)think and (re)habilitate the post-COVID world?

Halifax: (Re)building together: Climate change — Building bridges between the present and future

Toronto: A World in Common

Edmonton: Introduction of the neutral pronoun “iel” — Can language influence the construction of identity?

Vancouver: (Re)building together with NFTs

Victoria: Committing in a time of uncertainty

Here’s a little more about the Vancouver event, from the Night of Ideas/Nuit des idées 2022 webpage,

Vancouver: (Re)building together with NFTs [non-fungible tokens]

NFTs, or non-fungible tokens, can be used as blockchain-based proofs of ownership. The new NFT “phenomenon” can be applied to any digital object: photos, videos, music, video game elements, and even tweets or highlights from sporting events.

Millions of dollars can be on the line when it comes to NFTs granting ownership rights to “crypto arts.” In addition to showing the signs of being a new speculative bubble, the market for NFTs could also lead to new experiences in online video gaming or in museums, and could revolutionize the creation and dissemination of works of art.

This evening will be an opportunity to hear from artists and professionals in the arts, technology and academia and to gain a better understanding of the opportunities that NFTs present for access to and the creation and dissemination of art and culture. Jesse McKee, Head of Strategy at 221A, Philippe Pasquier, Professor at School of Interactive Arts & Technology (SFU) and Rhea Myers, artist, hacker and writer will share their experiences in a session moderated by Dorothy Woodend, cultural editor for The Tyee.

- 7 p.m on Zoom (registration here) Event broadcast online on France Canada Culture’s Facebook. In English.
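Since the Vancouver event centres on NFTs, here is a quick, hedged aside on what “blockchain-based proofs of ownership” boil down to. Setting aside the blockchain machinery itself, the bookkeeping at the heart of an NFT registry is roughly a token ID mapped to an owner, with transfers allowed only by the current owner; the minimal Python sketch below is entirely hypothetical (the names, addresses and URI are invented for illustration).

class ToyNFTRegistry:
    """Hypothetical, in-memory stand-in for the ledger inside an NFT contract."""

    def __init__(self):
        self.owners = {}    # token_id -> owner address
        self.metadata = {}  # token_id -> pointer to the digital object (e.g., a URL)

    def mint(self, token_id, owner, uri):
        if token_id in self.owners:
            raise ValueError("token already exists")
        self.owners[token_id] = owner
        self.metadata[token_id] = uri

    def transfer(self, token_id, sender, recipient):
        if self.owners.get(token_id) != sender:
            raise PermissionError("only the current owner can transfer")
        self.owners[token_id] = recipient

registry = ToyNFTRegistry()
registry.mint(1, "0xARTIST", "ipfs://hypothetical-artwork-hash")
registry.transfer(1, "0xARTIST", "0xCOLLECTOR")
print(registry.owners[1])  # "0xCOLLECTOR" now holds the (toy) proof of ownership

On a real blockchain, this ledger is replicated and verified across a network rather than held in one program’s memory, which is what gives the “proof” its weight.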

Not all of the events are in both languages.

One last thing, if you have some French and find puppets interesting, the event in Victoria, British Columbia features both, “Catherine Léger, linguist and professor at the University of Victoria, with whom we will discover and come to accept the diversity of French with the help of marionnettes [puppets]; … .”

Digital aromas? And a potpourri of ‘scents and sensibility’

Mmm… smelly books. Illustration by Dorothy Woodend.[downloaded from https://thetyee.ca/Culture/2020/11/19/Smell-More-Important-Than-Ever/]

I don’t get to post about scent as often as I would like, although I have some pretty interesting items here (those links follow towards the end of this post).

Digital aromas

This Nov. 11, 2020 Weizmann Institute of Science press release (also on EurekAlert published on Nov. 19, 2020) from Israel gladdened me,

Fragrances – promising mystery, intrigue and forbidden thrills – are blended by master perfumers, their recipes kept secret. In a new study on the sense of smell, Weizmann Institute of Science researchers have managed to strip much of the mystery from even complex blends of odorants, not by uncovering their secret ingredients, but by recording and mapping how they are perceived. The scientists can now predict how any complex odorant will smell from its molecular structure alone. This study may not only revolutionize the closed world of perfumery, but eventually lead to the ability to digitize and reproduce smells on command. The proposed framework for odors, created by neurobiologists, computer scientists, and a master-perfumer, and funded by a European initiative [NanoSmell] for Future Emerging Technologies (FET-OPEN), was published in Nature.

“The challenge of plotting smells in an organized and logical manner was first proposed by Alexander Graham Bell [emphasis mine] over 100 years ago,” says Prof. Noam Sobel of the Institute’s Neurobiology Department. Bell threw down the gauntlet: “We have very many different kinds of smells, all the way from the odor of violets [emphasis mine] and roses up to asafoetida. But until you can measure their likenesses and differences you can have no science of odor.” This challenge had remained unresolved until now.

This century-old challenge indeed highlighted the difficulty in fitting odors into a logical system: There are millions of odor receptors in our noses, consisting hundreds of different subtypes, each shaped to detect particular molecular features. Our brains potentially perceive millions of smells in which these single molecules are mixed and blended at varying intensities. Thus, mapping this information has been a challenge. But Sobel and his colleagues, led by graduate student Aharon Ravia and Dr. Kobi Snitz, found there is an underlying order to odors. They reached this conclusion by adopting Bell’s concept – namely to describe not the smells themselves, but rather the relationships between smells as they are perceived.

In a series of experiments, the team presented volunteer participants with pairs of smells and asked them to rate these smells on how similar the two seemed to one another, ranking the pairs on a similarity scale ranging from “identical” to “extremely different.” In the initial experiment, the team created 14 aromatic blends, each made of about 10 molecular components, and presented them two at a time to nearly 200 volunteers, so that by the end of the experiment each volunteer had evaluated 95 pairs.

To translate the resulting database of thousands of reported perceptual similarity ratings into a useful layout, the team refined a physicochemical measure they had previously developed. In this calculation, each odorant is represented by a single vector that combines 21 physical measures (polarity, molecular weight, etc.). To compare two odorants, each represented by a vector, the angle between the vectors is taken to reflect the perceptual similarity between them. A pair of odorants with a low angle distance between them are predicted similar, those with high angle distance between them are predicted different.

To test this model, the team first applied it to data collected by others, primarily a large study in odor discrimination by Bushdid [C. Bushdid] and colleagues from the lab of Prof. Leslie Vosshall at the Rockefeller Institute in New York. The Weizmann team found that their model and measurements accurately predicted the Bushdid results: Odorants with low angle distance between them were hard to discriminate; odors with high angle distance between them were easy to discriminate. Encouraged by the model accurately predicting data collected by others, the team continued to test for themselves.

The team concocted new scents and invited a fresh group of volunteers to smell them, again using their method to predict how this set of participants would rate the pairs – at first 14 new blends and then, in the next experiment, 100 blends. The model performed exceptionally well. In fact, the results were in the same ballpark as those for color perception – sensory information that is grounded in well-defined parameters. This was especially surprising considering each individual likely has a unique complement of smell receptor subtypes, which can vary by as much as 30% across individuals.

Because the “smell map,” [emphasis mine] or “metric” predicts the similarity of any two odorants, it can also be used to predict how an odorant will ultimately smell. For example, any novel odorant that is within 0.05 radians or less from banana will smell exactly like banana. As the novel odorant gains distance from banana, it will smell banana-ish, and beyond a certain distance, it will stop resembling banana.

The team is now developing a web-based tool. This set of tools not only predicts how a novel odorant will smell, but can also synthesize odorants by design. For example, one can take any perfume with a known set of ingredients, and using the map and metric, generate a new perfume with no components in common with the original perfume, but with exactly the same smell. Such creations in color vision, namely non-overlapping spectral compositions that generate the same perceived color, are called color metamers, and here the team generated olfactory metamers.

The study’s findings are a significant step toward realizing a vision of Prof. David Harel of the Computer and Applied Mathematics Department, who also serves as Vice President of the Israel Academy of Sciences and Humanities and who was a co-author of the study: Enabling computers to digitize and reproduce smells. In addition, of course, to being able to add realistic flower or sea aromas to your vacation pictures on social media, giving computers the ability to interpret odors in the way that humans do could have an impact on environmental monitoring and the biomedical and food industries, to name a few. Still, master perfumer Christophe Laudamiel, who is also a co-author of the study, remarks that he is not concerned for his profession just yet.

Sobel concludes: “100 years ago, Alexander Graham Bell posed a challenge. We have now answered it: The distance between rose and violet is 0.202 radians (they are remotely similar), the distance between violet and asafoetida is 0.5 radians (they are very different), and the difference between rose and asafoetida is 0.565 radians (they are even more different). We have converted odor percepts into numbers, and this should indeed advance the science of odor.”

I emphasized Alexander Graham Bell and the ‘smell map’ because I thought they were interesting and violets because they will be mentioned again later in this post.
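Since the angle-between-vectors idea in the press release is easier to grasp with numbers in hand, here is a minimal sketch in Python/NumPy. The 21-dimensional descriptor vectors below are random stand-ins for illustration, not the team’s actual physicochemical measures, so only the mechanics (not the resulting distances) are meaningful.

import numpy as np

def odor_angle(a, b):
    """Angle in radians between two odorant descriptor vectors; smaller = more similar."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    cosine = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cosine, -1.0, 1.0)))

# Stand-in 21-dimensional vectors (polarity, molecular weight, etc. in the real model).
rng = np.random.default_rng(seed=0)
rose_like, violet_like = rng.random(21), rng.random(21)

print(f"{odor_angle(rose_like, violet_like):.3f} radians")

The rose-to-violet (0.202 radians) and violet-to-asafoetida (0.5 radians) distances Sobel quotes above are this kind of output, computed from the real descriptor vectors rather than random ones.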

Meanwhile, here’s a link to and a citation for the paper (the proposed framework for odors),

A measure of smell enables the creation of olfactory metamers by Aharon Ravia, Kobi Snitz, Danielle Honigstein, Maya Finkel, Rotem Zirler, Ofer Perl, Lavi Secundo, Christophe Laudamiel, David Harel & Noam Sobel. Nature volume 588, pages 118–123 (2020) DOI: https://doi.org/10.1038/s41586-020-2891-7 Published online: 11 November 2020 Journal Issue Date: 03 December 2020

This paper is behind a paywall.

Smelling like an old book

Some folks are missing the smell of bookstores and according to Dorothy Woodend’s Nov. 19, 2020 article for The Tyee, that longing has resulted in a perfume (Note: Links have been removed),

The news that Powell’s Books, Portland’s (Oregon, US) beloved bookstore, had released a signature scent was greeted with bemusement by some, confusion by others. But to me it made perfect scents. (Err, sense.) If you love something, I mean really love it, you love the way it smells.

Old books have a distinctive peppery aroma that draws bibliophiles like bears to honey. Some people are very specific about their book smells, preferring vintage Penguin paperbacks from the mid to late 1960s. Those orange spines aged like fine wine.

Powell’s created the scent after people complained about missing the smell of the store during lockdown. It got me thinking about how identity is often bound up with smell and, more widely, how smells belong to cultural, even historic moments.

Olfactory obsolescence can have weird side effects … . Memories of one’s grandfather smelling like pipe tobacco are pretty much now only a literary conceit. But pipe smoke isn’t the only dinosaur smell that is going extinct. Even in my lifetime, I remember the particular aroma of baseball cards and chalk dust.

Remember violets? Here’s more about Powell’s Unisex Fragrance (from Powell’s purchase webpage),

Notes:
• Wood
• Violet
• Biblichor

Description:
Like the crimson rhododendrons in Rebecca, the heady fragrance of old paper creates an atmosphere ripe with mood and possibility. Invoking a labyrinth of books; secret libraries; ancient scrolls; and cognac swilled by philosopher-kings, Powell’s by Powell’s delivers the wearer to a place of wonder, discovery, and magic heretofore only known in literature.

How to wear:
This scent contains the lives of countless heroes and heroines. Apply to the pulse points when seeking sensory succor or a brush with immortality.

Details:
• 1 ounce
• Glass bottle
• Limited-edition item available while supplies last

Shipping details:
Powell’s Unisex Fragrance ships separately and only in the contiguous United States [emphasis mine]. Special shipping rates apply.

Links: oPhone and heritage smells

Some years ago, I was quite intrigued by the oPhone (scent by telephone) and wrote these: For the smell of it, a Feb. 14, 2014 posting, and Smelling Paris in New York (update on the oPhone), a June 18, 2014 posting. I haven’t found any updates about the oPhone in my brief searches on the web.

There was a previous NANOSMELL (sigh, these projects have various approaches to capitalization) posting here: Scented video games: a nanotechnology project in Europe, published May 27, 2016.

More recently on the smell front, there was this May 22, 2017 posting, Preserving heritage smells (scents). FYI, the authors of the 2017 paper are part of the Odeuropa project described in the next subsection.

Context: NanoSmell and Odeuropa

Science funding is intimately linked to science policy. Examination of science funding can be useful for understanding some of the contrasts between how science is conducted in different jurisdictions, e.g., Europe and Canada.

Before launching into the two ‘scent’ projects, NanoSmell and Odeuropa, I’m offering a brief description of one of the European Union’s (EU) most comprehensive and substantive (many, many Euros) science funding initiatives. This initiative, in its Horizon 2020 iteration, has funded and is funding both NanoSmell and Odeuropa.

Horizon Europe

The initiative has gone under different names: Framework Programmes 1-7; then, in 2014, it became Horizon 2020, with its end date part of its name. The latest iteration, Horizon Europe, is destined to start in 2021 and end in 2027.

The most recent Horizon Europe budget information I’ve been able to find is in this Nov. 10, 2020 article by Éanna Kelly and Goda Naujokaitytė for ScienceBusiness.net,

EU governments and the European Parliament on Tuesday [Nov. 10, 2020] afternoon announced an extra €4 billion will be added to the EU’s 2021-2027 research budget, following one-and-a-half days of intense negotiations in Brussels.

The deal, which still requires a final nod from parliament and member states, puts Brussels closer to implementing its gigantic €1.8 trillion budget and COVID-19 recovery package. [emphasis mine]

In all, a series of EU programmes gained an additional €15 billion. Among them, the student exchange programme Erasmus+ went up by €2.2 billion, health spending in EU4Health by €3.4 billion, and the InvestEU programme got an additional €1 billion.

Parliamentarians have been fighting to reverse cuts [emphasis mine] made to science and other investment programmes since July [2020], when EU leaders settled on €80.9 billion (at 2018 prices) for Horizon Europe, significantly less than €94.4 billion proposed by the European Commission.

“I am really proud that we fought – all six of us as a team,” said van Overtveldt [Johan Van Overtveldt, Belgian MEP {member of European Parliament} on the budget committee], pointing to the other budget MEPs who headed talks with the German Presidency of the Council. “You can take the term ‘fight’ literally. We had to fight for what we got.”

“We are all very proud of what we achieved, not for the parliament’s pride but in the interest of European citizens short-term and long-term,” van Overveldt said.

One of the most visible campaigners for science in the Parliament, MEP Christian Ehler, spokesman on Horizon Europe for the European Peoples’ Party, called the deal “a victory for researchers, scientists and citizens alike.” [emphasis mine]

The challenge now for negotiators will be to figure out how to divide extra funds [emphasis mine] within Horizon Europe fairly, with officials attached to public-private partnerships, the European Research Council, the new research missions, and the European Innovation Council all baying for more cash.

To sum up, in July 2020, legislators settled on the figure of €80.9 billion for science funding over the seven-year period of 2021 – 2027, to be administered by Horizon Europe. After a fight, €4 billion was added for a total of €84.9 billion in research funding over the next seven years.

This is fascinating to me; I don’t recall ever seeing any mention of Canadian legislators arguing over how much money should be allocated to research in articles about the Canadian budget. The usual approach is to treat the announcement as a fait accompli and a matter for celebration or intense criticism.

Smell of money?

All this talk of budgets and heritage smells has me thinking about the ‘smell of money’. What happens as money or currency becomes virtual rather than actual? And, what happened to the smell of Canadian money which is now made of plastic?

I haven’t found any answers to those questions but I did find an interesting June 14, 2012 article by Sarah Gardner for Marketplace.org titled, Sniffing out what money smells like. The focus is on money made of cotton and linen. One other note: this is not the Canadian Broadcasting Corporation’s Marketplace television programme. This is a US programme from American Public Media (from the Marketplace.org FAQs webpage).

Now onto the funding for European smell research.

NanoSmell

The Israeli researchers’ work was funded by Horizon 2020’s NanoSmell project, which ran from Sept. 1, 2015 to August 31, 2019; this was their objective (from the CORDIS NanoSmell project page),

“Despite years of promise, an odor-emitting component in devices such as televisions, phones, computers and more has yet to be developed. Two major obstacles in the way of such development are poor understanding of the olfactory code (the link between odorant structure, neural activity, and odor perception), and technical inability to emit odors in a reversible manner. Here we propose a novel multidisciplinary path to solving this basic scientific question (the code), and in doing so generate a solution to the technical limitation (controlled odor emission). The Bachelet lab will design DNA strands that assume a 3D structure that will specifically bind to a single type of olfactory receptor and induce signal transduction. These DNA-based “”artificial odorants”” will be tagged with a nanoparticle that changes their conformation in response to an external electromagnetic field. Thus, we will have in hand an artificial odorant that is remotely switchable. The Hansson lab will use tissue culture cells expressing insect olfactory receptors, functional imaging, and behavioral tests to validate the function and selectivity of these switchable odorants in insects. The Carleton lab will use imaging in order to investigate the patterns of neural activity induced by these artificial odorants in rodents. The Sobel lab will apply these artificial odorants to the human olfactory system, [emphasis mine] and measure perception and neural activity following switching the artificial smell on and off. Finally, given a potential role for olfactory receptors in skin, the Del Rio lab will test the efficacy of these artificial odorants in promoting wound healing. At the basic science level, this approach may allow solving the combinatorial code of olfaction. At the technology level, beyond novel pharmacology, we will provide proof-of-concept for countless novel applications ranging from insect pest-control to odor-controlled environments and odor-emitting devices such as televisions, phones, and computers.” [emphasis mine]

Unfortunately, I can’t find anything on the NanoSmell Project Results page with links to any proof-of-concept publications or pilot projects for the applications mentioned. Mind you, I wouldn’t have recognized the Israeli team’s A measure of smell enables the creation of olfactory metamers as a ‘smell map’.

Odeuropa

Remember the ‘heritage smells’ 2017 posting? The research paper listed there has two authors, both of whom form one of the groups (University College London; scroll down) associated with Odeuropa’s Horizon 2020 project, announced in a Nov. 17, 2020 posting by the project lead, Inger Leemans, on the Odeuropa website (Note: A link has been removed),

The Odeuropa consortium is very proud to announce that it has been awarded a €2.8M grant from the EU Horizon 2020 programme for the project, “ODEUROPA: Negotiating Olfactory and Sensory Experiences in Cultural Heritage Practice and Research”. Smell is an urgent topic which is fast gaining attention in different communities. Amongst the questions the Odeuropa project will focus on are: what are the key scents, fragrant spaces, and olfactory practices that have shaped our cultures? How can we extract sensory data from large-scale digital text and image collections? How can we represent smell in all its facets in a database? How should we safeguard our olfactory heritage? And — why should we? …

The project bundles an array of academic expertise from across many disciplines—history, art history, computational linguistics, computer vision, semantic web, museology, heritage science, and chemistry, with further expertise from cultural heritage institutes, intangible heritage organisations, policy makers, and the creative and fragrance industries.

I’m glad to see this interest in scent, heritage, communication, and more. Perhaps one day we’ll see similar interest here in Canada. Subtle does not mean unimportant, eh?

A dance with love and fear: the Yoko Ono exhibit and the Takashi Murakami exhibit in Vancouver (Canada)

It seems Japanese artists are ‘having a moment’. There’s a documentary (Kusama—Infinity) about contemporary Japanese female artist Yayoi Kusama making the festival rounds this year (2018). Last year (2017), the British Museum mounted a major exhibition of Hokusai’s work (19th century), and, also in 2017, the Metropolitan Museum of Art Costume Institute benefit was inspired by a Japanese fashion designer, “Rei Kawakubo/Comme des Garçons: Art of the In-Between.” (A curator at the Japanese Garden in Portland who had lived in Japan for a number of years mentioned to me during an interview that the Japanese have one word for art. There is no linguistic separation between art and craft.)

More recently, both Yoko Ono and Takashi Murakami have had shows in Vancouver, Canada. Starting with fear, as I prefer to end with love: Murakami had a blockbuster show at the Vancouver Art Gallery.

Takashi Murakami: a dance with fear (and money too)

In the introductory notes at the beginning of the exhibit, “Takashi Murakami: The Octopus Eats Its Own Leg,” it was noted that fear is one of Murakami’s themes. The first few pieces in the show had been made to look faded and brownish to the point where you had to work at seeing what was underneath the layers. The images were a little bit like horror films: something’s a bit awry, then scary, and you don’t know what it is or how to deal with it.

After those images, the show opened up to the bright, bouncy imagery commonly associated with Murakami’s work. However, if you look at them carefully, you’ll see many of these characters have big, pointed teeth. Also featured was a darkened room with two huge warriors. At a guess, I’d say they were 14 feet tall.

It made for a disconcerting show with its darker themes usually concealed in bright, vibrant colour. Here’s an image promoting Murakami’s Vancouver birthday celebration and exhibit opening,

‘Give me the money, now!’ says a gleeful Takashi Murakami, whose expansive show is currently at the Vancouver Art Gallery. Photo by the VAG. [downloaded from https://thetyee.ca/Culture/2018/02/07/Takashi-Murakami-VAG/]

The colours and artwork shown in the marketing materials (I’m including the wrapping on the gallery itself) were exuberant, as was Murakami, who acted as his own marketing material. I’m mentioning the money because it’s very intimately and blatantly linked to Murakami’s art and work. Dorothy Woodend, in a Feb. 7, 2018 article for The Tyee, puts it this way (Note: Links have been removed),

The close, almost incestuous relationship between art and money is a very old story. [emphasis mine] You might even say it is the only story at the moment.

You can know this, understand it to a certain extent, and still have it rear up and bite you on the bum. [emphasis mine] Such was my experience of attending the exhibition preview of Takashi Murakami’s The Octopus Eats Its Own Leg at the Vancouver Art Gallery.

The show is the first major retrospective of Murakami’s work in Canada, and the VAG has spared no expense in marketing the living hell out of the thing. From the massive cephalopod installed atop the dome of the gallery, to the ocean of smiling cartoon flowers, to the posters papering every inch of downtown Vancouver, it is in a word: huge.

If you don’t know much about Murakami the show is illuminating, in many different ways. Expansive in extremis, the exhibition includes more than 50 works that trace a path through the evolution of Murakami’s style and aesthetic, moving from his early dark textural paintings that blatantly ripped off Anselm Kiefer, to his later pop-art style (Superflat), familiar from Kanye West albums and Louis Vuitton handbags.

make no mistake, money runs underneath the VAG show like an engine [emphasis mine]. You can feel it in the air, thrumming with a strange radioactive current, like a heat mirage coming off the people madly snapping selfies next to the Kanye Bear sculpture.

The artist himself seems particularly aware of how much of a financial edifice surrounds the human impulse to make images. In an on-stage interview with senior VAG [Vancouver Art Gallery] curator Bruce Grenville during a media preview for the show, Murakami spoke plainly about the need for survival (a.k.a. money) [emphasis mine] that has propelled his career.

Even the title of the show speaks to the notion of survival (from Woodend’s article; Note: Links have been removed),

The title of the show takes inspiration from Japanese folklore about a creature that sacrifices part of its own body so that the greater whole might survive. In the natural world, an octopus will chew off its own leg if there is an infection, and then regrow the missing limb. In the art world, the idea pertains to the practice of regurgitating (recycling) old ideas to serve the endless voracious demand for new stuff. “I don’t have the talent to come up with new ideas, so in order to survive, you have to eat your own body,” Murakami explains, citing his need for deadlines, and very bad economic conditions, that lead to a state of almost Dostoyevskyian desperation. “Please give me the money now!” he yells, and the assembled press laughs on cue.

The artist’s responsibility to address larger issues like gender, politics and the environment was the final question posed during the Q&A, before the media were allowed into the gallery to see the work. Murakami took his time before answering, speaking through the nice female translator beside him. “Artists don’t have that much power in the world, but they can speak to the audience of the future, who look at the artwork from a certain era, like Goya paintings, and see not just social commentary, but an artistic point of view. The job of the artist is to dig deep into human beings.”

Which is a nice sentiment to be sure, but increasingly art is about celebrity and profit. Record-breaking shows like Alexander McQueen’s Savage Beauty and Rei Kawakubo/Comme des Garçons: Art of the In-Between demonstrated an easy appeal for both audiences and corporations. One of Murakami’s earlier exhibitions featured a Louis Vuitton pop-up shop as part of the show. Closer to home, the Fight for Beauty exhibit mixed fashion, art and development in a decidedly queasy-making mixture.

There is money to be made in culture of a certain scale, with scale being the operative word. Get big or get out.

Woodend also relates the show and some of the issues it raises to the local scene (Note: Links have been removed),

A recent article in the Vancouver Courier about the Oakridge redevelopment plans highlighted the relationship between development and culture in raw numbers: “1,000,000 square feet of retail, 2,600 homes for 6,000 people, office space for 3,000 workers, a 100,000-square-foot community centre and daycare, the city’s second-largest library, a performing arts academy, a live music venue for 3,000 people and the largest public art program in Vancouver’s history…”

Westbank’s Ian Gillespie [who hosted the Fight for Beauty exhibit] was quoted extensively, outlining the integration between the city and the developer. “The development team will also work with the city’s chief librarian to figure out the future of the library, while the 3,000-seat music venue will create an ‘incredible music scene.’” The term “cultural hub” also pops up so many times it’s almost funny, in a horrifying kind of way.

But bigness often squeezes out artists and musicians who simply can’t compete. Folk who can’t fill a 3,000-seat venue, or pack in thousands of visitors, like the Murakami show, are out of luck.

Vancouver artists, who struggle to survive in the city and have done so for quite some time, were singularly unimpressed with the Oakridge development proposal. Selina Crammond, a local musician and all-around firebrand, summed up the divide in a few eloquent sentences: “I mean really, who is going to make up this ‘incredible music scene’ and fill all of these shiny new venues? Many of my favourite local musicians have already moved away from Vancouver because they just can’t make it work. Who’s going to pay the musicians and workers? Who’s going to pay the large ticket prices to be able to maintain these spaces? I don’t think space is the problem. I think affordability and distribution of wealth and funding are the problems artists and arts workers are facing.”

The stories continue to pop up, the most recent being the possible sale and redevelopment of the Rio Theatre. The news sparked an outpouring of anger, but the story is repeated so often in Vancouver, it has become something of a cliché. You need only to look at the story of the Hollywood Theatre for a likely ending to the saga.

Which brings me back around to the Murakami exhibit. To be perfectly frank, the show is incredible and well-worth visiting. I enjoyed every minute of wandering through it taking in the sheer expanse of mind-boggling, googly-eyed detail. I would urge you to attend, if you can afford it. But there’s the rub. I was there for free, and general admission to the VAG is $22.86. This may not seem like a lot, but in a city where people can barely make rent, culture becomes the purview of them that can afford it.

The City of Vancouver recently launched its Creative Cities initiative to look at issues of affordability, diversity and gentrification.

We shall see if anything real emerges from the process. But in the meantime, Vancouver artists might have to eat their own legs simply to survive. [Tyee]

Survival issues and their intimate companion, fear, are clearly a major focus of Murakami's art.

For the curious, the Vancouver version of the Murakami retrospective ran from February 3 to May 6, 2018. There are still some materials about the show available online here.

Yoko Ono and the power of love (and maybe money, too)

More or less concurrently with the Murakami exhibition, the Rennie Museum (formerly the Rennie Collection) came back from a several-month hiatus to host a show featuring Yoko Ono's "Mend Piece."

From a Rennie Museum (undated) press release,

Rennie Museum is pleased to present Yoko Ono’s MEND PIECE, Andrea Rosen Gallery, New York City version (1966/2015). Illustrating Ono’s long standing artistic quest in social activism and world peace, this instructional work will transform the historic Wing Sang building into an intimate space for creative expression and bring people together in an act of collective healing and meditation. The installation will run from March 1 to April 15, 2018.

First conceptualized in 1966, the work immerses the visitor in a dream-like state. Viewers enter into an all-white space and are welcomed to take a seat at the table to reassemble fragments of ceramic coffee cups and saucers using the provided twine, tape, and glue. Akin to the Japanese philosophy of Wabi-sabi, an embracing of the flawed or imperfect, Mend Piece encourages the participant to transform broken fragments into an object that prevails its own violent rupture. The mended pieces are then displayed on shelves installed around the room. The contemplative act of mending is intended to promote reparation starting within one’s self and community, and bridge the gap created by violence, hatred, and war. In the words of Yoko Ono herself, “Mend with wisdom, mend with love. It will mend the earth at the same time.”

The installation of MEND PIECE, Andrea Rosen Gallery, New York City version at Rennie Museum will be accompanied by an espresso bar, furthering the notions of community and togetherness.

Yoko Ono (b. 1933) is a Japanese conceptual artist, musician, and peace activist pioneering feminism and Fluxus art. Her eclectic oeuvre of performance art, paintings, sculptures, films and sound works have been shown at renowned institutions worldwide, with recent exhibitions at The Museum of Modern Art, New York; Copenhagen Contemporary, Copenhagen; Museum of Contemporary Art, Tokyo; and Museo de Arte Latinoamericano de Buenos Aires. She is the recipient of the 2005 IMAJINE Lifetime Achievement Award and the 2009 Venice Biennale Golden Lion for Lifetime Achievement, among other distinctions. She lives and works in New York City.

While most of the shows have taken place over two, three, or four floors, “Mend Piece” was on the main floor only,

Courtesy: Rennie Museum

There was another "Mend Piece" in Canada, at the Gardiner Museum, where it was part of a larger show titled "The Riverbed," which ran from February 22 to June 3, 2018. Here's an image of one of the Gardiner Museum "Mend" pieces featured in a March 7, 2018 article by Sonya Davidson for the Toronto Guardian,

Yoko Ono, Mend Piece, 1966 / 2018, © Yoko Ono. Photo: Tara Fillion Courtesy: Toronto Guardian

Here’s what Davidson had to say about the three-part installation, “The Riverbed,”

I’m sitting  on one of the cushions placed on the floor watching the steady stream of visitors at Yoko Ono’s exhibition The Riverbed at the Gardiner Museum. The room is airy and bright but void of  colours yet it’s vibrant and alive in a calming way. There are three distinct areas in this exhibition: Stone Piece, Line Piece and Mend Piece. From what I’ve experienced in Ono’s previous exhibitions, her work encourages participation and is inclusive of everyone. She has the idea. She encourages us to  go collaborate with her. Her work is describe often as  redirecting our attention to ideas, instead of appearances.

Mend Piece is the one I’m most familiar with. It was part of her exhibition I visited in Reykjavik [Iceland]. Two large communal tables are filled with broken ceramic pieces and mending elements. Think glue, string, and tape.  Instructions from Ono once again are simple but with meaning. Take the pieces that resonate with you and mend them as you desire. You’re encourage [sic] to leave it in the communal space for everyone to experience what you’ve experienced. It reminded me of her work decades ago where she shattered porcelain vases, and people invited people to take a piece with them. But then years later she collected as many back and mended them herself. Part contemporary with a nod to the traditional Japanese art form of Kintsugi – fixing broken pottery with gold and the philosophy of nothing is ever truly broken. The repairs made are part of the history and should be embraced with honour and pride.

The experience at the Rennie was markedly different. To get a sense of how different, I recommend reading both Davidson's piece (which includes many embedded images) in its entirety and this April 7, 2018 article by Jenna Moon for The Star regarding the theft of a stone from The Riverbed show at the Gardiner,

A rock bearing Yoko Ono’s handwriting has been stolen from the Gardiner Museum, Toronto police say. The theft reportedly occurred around 5:30 p.m. on March 12.

The rock is part of an art exhibit featuring Ono, where patrons can meditate using several river rocks. The stone is inscribed with black ink, and reads “love yourself” in block letters. It is valued at $17,500 (U.S.), [emphasis mine] Toronto police media officer Gary Long told the Star Friday evening.

As far as I can tell, they still haven't found the suspect, who was described as a woman between the ages of 55 and 60. However, the question that most interests me is: how did they arrive at a value for the stone? Was it a case of assigning a value to the part of the installation with the stones and dividing that value by the number of stones? Yoko Ono may focus her art on social activism and peace, but she too needs money to survive. Moving on.

Musings on ‘mend’

Participating in “Mend Piece” at the Rennie Museum was revelatory. It was a direct experience of the “traditional Japanese art form of Kintsugi – fixing broken pottery with gold and the philosophy of nothing is ever truly broken.” So often art is at best a tertiary experience for the viewer. The artist has the primary experience producing the work and the curator has the secondary experience of putting the show together.

For all the talk about interactive installations and pieces, there are few that truly engage the viewer with the piece. I find this rule applies: the more technology, the less interactivity.

“Mend” insisted on interactivity. More or less. I went with a friend and sat beside the one person in the group who didn't want to talk to anyone. And she wasn't just quiet; you could feel the "don't talk to me" vibrations pouring from every one of her body parts.

The mending sessions were about 30 minutes long and, as Davidson notes, you had string, two types of glue, and twine. For someone with any kind of perfectionist tendencies (me) and a lack of crafting skills (me), it proved to be a bit of a challenge, especially with a semi-hostile person beside me. Thank goodness my friend was on the other side.

Adding to my travails was the gallery assistant (a local art student), who became very anxious and hovered over me as I twice attempted and failed to set my piece on a ledge in the room. She was very nice and happy to share, without being intrusive, information about Yoko Ono and her work while we were constructing our pieces. I'm not sure what she thought was going to happen when I started dropping things, but her hovering brought back memories of my adolescence, when shopkeepers would follow me around their stores.

Most of my group had finished and, even though there was still time left in my session, the next group rushed in and took my seat while I was failing, for the second time, to place my piece. I stood for my third (and thankfully successful) attempt to set it on the ledge.

At that point, I went to the back, where more of the "Mend" communal experience awaited. Unfortunately, the espresso machine at the coffee bar (set up especially for the show) was not working. There was some poetry on the walls and a video highlighting Yoko Ono's work over the years, and the coffee bar attendant was eager to share (but not intrusively so) some information about Yoko and her work.

As I stated earlier, it was a revelatory experience. First, it turned out my friend had been following Yoko's work since before the artist hooked up with John Lennon, and she was able to add details to the attendants' comments.

Second, what I didn't expect was a confrontation with the shards of my past and personality; in essence, mending myself and, hopefully, more. There was my perfectionism, rejection by the unfriendly tablemate, my (unspoken) emotional response to the hypervigilant gallery assistant, having my seat taken from me before the time was up, and the disappointment of the coffee bar. There was also the rediscovery of my friend, a friendly tablemate who made a beautiful object (it looked like a bird), the helpfulness of both gallery assistants, Yoko Ono's poetry, and a documentary about the remarkable Yoko.

All in all, it was a perfect reflection of imperfection (wabi-sabi), of brokenness and wounding in the context of repair and healing (Kintsugi).

Thank you, Yoko Ono.

For anyone in Vancouver who feels they missed out on the experience, there are some performances of "Perfect Imperfections: The Art of a Messy Life" (comedy, dance, and live music) at the Vancity Culture Lab at The Cultch from June 14 to 16, 2018. You can find out more here.

The moment

It certainly seems as if there's a great interest in Japanese art, if you live in Vancouver (Canada), anyway. The Murakami show was a huge success for the Vancouver Art Gallery. As for Yoko Ono, the Rennie Museum extended the exhibit dates due to demand. Plus, the 2018–2020 version of the Vancouver Biennale is featuring (from a May 29, 2018 Vancouver Biennale news release),

… Yoko Ono with its 2018 Distinguished Artist Award, a recognition that coincides with reissuing the acclaimed artist’s 2007 Biennale installation, “IMAGINE PEACE,” marshalled at this critical time to re-inspire a global consciousness towards unity, harmony, and accord. Yoko Ono’s project exemplifies the Vancouver Biennale’s mission for diverse communities to gain access, visibility and representation.

The British Museum's show "Hokusai's Great Wave" (May 25 – August 13, 2017) was given a special preview event in Vancouver in May 2017 at a local movie house, and the screening was packed.

The documentary film festival, DOXA (Vancouver) closed its 2018 iteration with the documentary about Yayoi Kusama. Here’s more about her from a May 9, 2018 article by Janet Smith for the Georgia Straight,

Amid all the dizzying, looped-and-dotted works that American director Heather Lenz has managed to capture in her new documentary Kusama—Infinity, perhaps nothing stands out so much as images of the artist today in her Shinjuku studio.

Interviewed in the film, the 89-year-old Yayoi Kusama sports a signature scarlet bobbed anime wig and hot-pink polka-dotted dress, sitting with her marker at a drawing table, and set against the recent creations on her wall—a sea of black-and-white spots and jaggedy lines.

“The boundary between Yayoi Kusama and her art is not very great,” Lenz tells the Straight from her home in Orange County. “They are one and the same.”

It was as a young student majoring in art history and fine art that Lenz was first drawn to Kusama—who stood out as one of few female artists in her textbooks. She saw an underappreciated talent whose avant-pop works anticipated Andy Warhol and others. And as Lenz dug deeper into the artist’s story, she found a woman whose struggles with a difficult childhood and mental illness made her achievements all the more remarkable.

Today, Kusama is one of the world’s most celebrated female artists, her kaleidoscopic, multiroom show Infinity Mirrors drawing throngs of visitors to galleries like the Art Gallery of Ontario and the Seattle Art Museum over the past year. But when Lenz set out to make her film 17 long years ago, few had ever heard of Kusama.

I am hopeful this is a sign that the Vancouver art scene is paying more attention to the west, to Asia. Quite frankly, it's about time.

As a special treat, here's a 'Yoko Ono tribute' from the Barenaked Ladies,

Dance!