Category Archives: biomimicry

Spiders can outsource hearing to their webs

A March 29, 2022 news item on ScienceDaily highlights research into how spiders hear,

Everyone knows that humans and most other vertebrate species hear using eardrums that turn soundwave pressure into signals for our brains. But what about smaller animals like insects and arthropods? Can they detect sounds? And if so, how?

Distinguished Professor Ron Miles, a Department of Mechanical Engineering faculty member at Binghamton University’s Thomas J. Watson College of Engineering and Applied Science, has been exploring that question for more than three decades, in a quest to revolutionize microphone technology.

A newly published study of orb-weaving spiders — the species featured in the classic children’s book “Charlotte’s Web” — has yielded some extraordinary results: The spiders are using their webs as extended auditory arrays to capture sounds, possibly giving spiders advanced warning of incoming prey or predators.

Binghamton University (formal name: State University of New York at Binghamton) has made this fascinating (to me anyway) video available,

Binghamton University and Cornell University (also in New York state) researchers worked collaboratively on this project. Consequently, there are two news releases and there is some redundancy but I always find that information repeated in different ways is helpful for learning.

A March 29, 2022 Binghamton University news release (also on EurekAlert) by Chris Kocher gives more detail about the work (Note: Links have been removed),

It is well-known that spiders respond when something vibrates their webs, such as potential prey. In these new experiments, researchers for the first time show that spiders turned, crouched or flattened out in response to sounds in the air.

The study is the latest collaboration between Miles and Ron Hoy, a biology professor from Cornell, and it has implications for designing extremely sensitive bio-inspired microphones for use in hearing aids and cell phones.

Jian Zhou, who earned his PhD in Miles’ lab and is doing postdoctoral research at the Argonne National Laboratory, and Junpeng Lai, a current PhD student in Miles’ lab, are co-first authors. Miles, Hoy and Associate Professor Carol I. Miles from the Harpur College of Arts and Sciences’ Department of Biological Sciences at Binghamton are also authors for this study. Grants from the National Institutes of Health to Ron Miles funded the research.

A single strand of spider silk is so thin and sensitive that it can detect the movement of vibrating air particles that make up a soundwave, which is different from how eardrums work. Ron Miles’ previous research has led to the invention of novel microphone designs that are based on hearing in insects.

“The spider is really a natural demonstration that this is a viable way to sense sound using viscous forces in the air on thin fibers,” he said. “If it works in nature, maybe we should have a closer look at it.”

Spiders can detect minuscule movements and vibrations through sensory organs on their tarsal claws at the tips of their legs, which they use to grasp their webs. Orb-weaver spiders are known to make large webs, creating a kind of acoustic antenna with a sound-sensitive surface area that is up to 10,000 times greater than the spider itself.

In the study, the researchers used Binghamton University’s anechoic chamber, a completely soundproof room under the Innovative Technologies Complex. Collecting orb-weavers from windows around campus, they had the spiders spin a web inside a rectangular frame so they could position it where they wanted.

The team began by playing a pure tone from 3 meters away at different sound levels to see whether the spiders responded. Surprisingly, they found spiders can respond to sound levels as low as 68 decibels. For louder sound, they found even more types of behaviors.

They then placed the sound source at a 45-degree angle, to see if the spiders behaved differently. They found that the spiders not only localize the sound source, but can tell the incoming direction of the sound with 100% accuracy.

To better understand the spider-hearing mechanism, the researchers used laser vibrometry and measured over one thousand locations on a natural spider web, with the spider sitting in the center under the sound field. The result showed that the web moves with sound almost at maximum physical efficiency across an ultra-wide frequency range.

“Of course, the real question is, if the web is moving like that, does the spider hear using it?” Miles said. “That’s a hard question to answer.”

Lai added: “There could even be a hidden ear within the spider body that we don’t know about.”

So the team placed a mini-speaker 5 centimeters away from the center of the web where the spider sits, and 2 millimeters away from the web plane — close but not touching the web. This allows the sound to travel to the spider both through air and through the web. The researchers found that the soundwave from the mini-speaker died out significantly as it traveled through the air, but it propagated readily through the web with little attenuation. The sound level was still at around 68 decibels when it reached the spider. The behavior data showed that four out of 12 spiders responded to this web-borne signal.
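The sharp die-off of the airborne path over a few centimetres is what free-field spreading predicts: sound pressure level from a small source falls roughly 6 dB per doubling of distance. A minimal sketch of that arithmetic (the reference level and distances here are invented for illustration and are not the paper's measurements):

```python
import math

def spl_at_distance(spl_ref, d_ref, d):
    """Sound pressure level (dB) at distance d, given level spl_ref
    measured at d_ref, assuming free-field spherical spreading
    (inverse-square law)."""
    return spl_ref - 20 * math.log10(d / d_ref)

# Hypothetical numbers: a small source measured at 88 dB at 1 cm
# loses about 14 dB by the time it reaches 5 cm.
print(round(spl_at_distance(88, 0.01, 0.05), 1))  # → 74.0
```

A web-borne wave travelling along a silk strand is not spreading into three dimensions, which is one intuition for why it can arrive with far less attenuation than the airborne path.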

Those reactions proved that the spiders could hear through the webs, and Lai was thrilled when that happened: “I’ve been working on this research for five years. That’s a long time, and it’s great to see all these efforts will become something that everybody can read.”

The researchers also found that, by crouching and stretching, spiders may be changing the tension of the silk strands, thereby tuning them to pick up different frequencies. By using this external structure to hear, the spider may be able to customize it to hear different sorts of sounds.

Future experiments may investigate how spiders make use of the sound they can detect using their web. Additionally, the team would like to test whether other types of web-weaving spiders also use their silk to outsource their hearing.

“It’s reasonable to guess that a similar spider on a similar web would respond in a similar way,” Ron Miles said. “But we can’t draw any conclusions about that, since we tested a certain kind of spider that happens to be pretty common.”

Lai admitted he had no idea he would be working with spiders when he came to Binghamton as a mechanical engineering PhD student.

“I’ve been afraid of spiders all my life, because of their alien looks and hairy legs!” he said with a laugh. “But the more I worked with spiders, the more amazing I found them. I’m really starting to appreciate them.”

A March 29, 2022 Cornell University news release (also on EurekAlert but published March 30, 2022) by Krishna Ramanujan offers a somewhat different perspective on the work (Note: Links have been removed),

Charlotte’s web is made for more than just trapping prey.

A study of orb weaver spiders finds their massive webs also act as auditory arrays that capture sounds, possibly giving spiders advanced warning of incoming prey or predators.

In experiments, the researchers found the spiders turned, crouched or flattened out in response to sounds, behaviors that spiders have been known to exhibit when something vibrates their webs.

The paper, “Outsourced Hearing in an Orb-weaving Spider That Uses its Web as an Auditory Sensor,” published March 29 [2022] in the Proceedings of the National Academy of Sciences, provides the first behavioral evidence that a spider can outsource hearing to its web.

The findings have implications for designing bio-inspired extremely sensitive microphones for use in hearing aids and cell phones.

A single strand of spider silk is so thin and sensitive it can detect the movement of vibrating air particles that make up a sound wave. This is different from how eardrums work, which sense pressure from sound waves; spider silk detects sound from nanoscale air particles that are excited by sound waves.

“The individual [silk] strands are so thin that they’re essentially wafting with the air itself, jostled around by the local air molecules,” said Ron Hoy, the Merksamer Professor of Biological Science, Emeritus, in the College of Arts and Sciences and one of the paper’s senior authors, along with Ronald Miles, professor of mechanical engineering at Binghamton University.

Spiders can detect minuscule movements and vibrations via sensory organs in their tarsi – claws at the tips of their legs they use to grasp their webs, Hoy said. Orb weaver spiders are known to make large webs, creating a kind of acoustic antenna with a sound-sensitive surface area that is up to 10,000 times greater than the spider itself.

In the study, the researchers used a special quiet room without vibrations or air flows at Binghamton University. They had an orb-weaver build a web inside a rectangular frame, so they could position it where they wanted. The team began by putting a mini-speaker within millimeters of the web without actually touching it, where sound operates as a mechanical vibration. They found the spider detected the mechanical vibration and moved in response.

They then placed a large speaker 3 meters away on the other side of the room from the frame with the web and spider, beyond the range where mechanical vibration could affect the web. A laser vibrometer was able to show the vibrations of the web from excited air particles.

The team then placed the speaker in different locations, to the right, left and center with respect to the frame. They found that the spider not only detected the sound, it turned in the direction of the speaker when it was moved. Also, it behaved differently based on the volume, by crouching or flattening out.

Future experiments may investigate whether spiders rebuild their webs, sometimes daily, in part to alter their acoustic capabilities, by varying a web’s geometry or where it is anchored. Also, by crouching and stretching, spiders may be changing the tension of the silk strands, thereby tuning them to pick up different frequencies, Hoy said.

Additionally, the team would like to test if other types of web-weaving spiders also use their silk to outsource their hearing. “The potential is there,” Hoy said.

Miles’ lab is using tiny fiber strands bio-inspired by spider silk to design highly sensitive microphones that – unlike conventional pressure-based microphones – pick up all frequencies and cancel out background noise, a boon for hearing aids.  

Here’s a link to and a citation for the paper,

Outsourced hearing in an orb-weaving spider that uses its web as an auditory sensor by Jian Zhou, Junpeng Lai, Gil Menda, Jay A. Stafstrom, Carol I. Miles, Ronald R. Hoy, and Ronald N. Miles. Proceedings of the National Academy of Sciences (PNAS) 119 (14) e2122789119. Published March 29, 2022. DOI:

This paper appears to be open access and video/audio files are included (you can hear the sound and watch the spider respond).

One of world’s most precise microchip sensors thanks to nanotechnology, machine learning, extended cognition, and spiderwebs

I love science stories about the inspirational qualities of spiderwebs. A November 26, 2021 news item describes how spiderwebs have inspired advances in sensors and, potentially, quantum computing,

A team of researchers from TU Delft [Delft University of Technology; Netherlands] managed to design one of the world’s most precise microchip sensors. The device can function at room temperature—a ‘holy grail’ for quantum technologies and sensing. Combining nanotechnology and machine learning inspired by nature’s spiderwebs, they were able to make a nanomechanical sensor vibrate in extreme isolation from everyday noise. This breakthrough, published in the Advanced Materials Rising Stars Issue, has implications for the study of gravity and dark matter as well as the fields of quantum internet, navigation and sensing.

Inspired by nature’s spider webs and guided by machine learning, Richard Norte (left) and Miguel Bessa (right) demonstrate a new type of sensor in the lab. [Photography: Frank Auperlé]

A November 24, 2021 TU Delft press release (also on EurekAlert but published on November 23, 2021), which originated the news item, describes the research in more detail,

One of the biggest challenges for studying vibrating objects at the smallest scale, like those used in sensors or quantum hardware, is how to keep ambient thermal noise from interacting with their fragile states. Quantum hardware for example is usually kept at near absolute zero (−273.15°C) temperatures, with refrigerators costing half a million euros apiece. Researchers from TU Delft created a web-shaped microchip sensor which resonates extremely well in isolation from room temperature noise. Among other applications, their discovery will make building quantum devices much more affordable.

Hitchhiking on evolution
Richard Norte and Miguel Bessa, who led the research, were looking for new ways to combine nanotechnology and machine learning. How did they come up with the idea to use spiderwebs as a model? Richard Norte: “I’ve been doing this work already for a decade when during lockdown, I noticed a lot of spiderwebs on my terrace. I realised spiderwebs are really good vibration detectors, in that they want to measure vibrations inside the web to find their prey, but not outside of it, like wind through a tree. So why not hitchhike on millions of years of evolution and use a spiderweb as an initial model for an ultra-sensitive device?” 

Since the team did not know anything about spiderwebs’ complexities, they let machine learning guide the discovery process. Miguel Bessa: “We knew that the experiments and simulations were costly and time-consuming, so with my group we decided to use an algorithm called Bayesian optimization, to find a good design using few attempts.” Dongil Shin, co-first author in this work, then implemented the computer model and applied the machine learning algorithm to find the new device design. 
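Bayesian optimization of the sort Bessa describes keeps a cheap statistical surrogate (typically a Gaussian process) of the expensive simulation, and uses it to decide which design to try next. Here is a toy sketch of that loop with a one-dimensional design variable; the objective function, kernel length, and acquisition rule are all assumed for illustration and are far simpler than the actual six-string web optimization:

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # Stand-in for the costly resonator simulation: a smooth "quality
    # factor" landscape peaking at a hypothetical best design x = 0.6.
    return -(x - 0.6) ** 2

def rbf_kernel(a, b, length=0.1):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-4):
    # Gaussian-process surrogate: posterior mean and std at test points.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test)
    K_inv = np.linalg.inv(K)
    mu = Ks.T @ K_inv @ y_train
    var = np.diag(rbf_kernel(x_test, x_test) - Ks.T @ K_inv @ Ks)
    return mu, np.sqrt(np.maximum(var, 0.0))

# The optimization loop: refit the surrogate to the designs tried so far,
# then pick the next design by an upper-confidence-bound acquisition,
# trading off exploring uncertain regions against exploiting good ones.
grid = np.linspace(0.0, 1.0, 201)
x_train = rng.uniform(0.0, 1.0, 3)          # a few random initial designs
y_train = objective(x_train)
for _ in range(10):                          # "few attempts"
    mu, sigma = gp_posterior(x_train, y_train, grid)
    x_next = grid[np.argmax(mu + 2.0 * sigma)]
    x_train = np.append(x_train, x_next)
    y_train = np.append(y_train, objective(x_next))

best = x_train[np.argmax(y_train)]
print(abs(best - 0.6) < 0.15)
```

The appeal for a problem like this is sample efficiency: each "evaluation" is a costly simulation, so the surrogate's uncertainty estimates are used to spend those evaluations only where they are most informative.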

Microchip sensor based on spiderwebs
To the researchers’ surprise, the algorithm proposed a relatively simple spiderweb out of 150 different spiderweb designs, one consisting of only six strings put together in a deceptively simple way. Bessa: “Dongil’s computer simulations showed that this device could work at room temperature, in which atoms vibrate a lot, but still have an incredibly low amount of energy leaking in from the environment – a higher Quality factor in other words. With machine learning and optimization we managed to adapt Richard’s spider web concept towards this much better quality factor.”

Based on this new design, co-first author Andrea Cupertino built a microchip sensor with an ultra-thin, nanometre-thick film of a ceramic material called silicon nitride. They tested the model by forcefully vibrating the microchip ‘web’ and measuring the time it takes for the vibrations to stop. The result was spectacular: a record-breaking isolated vibration at room temperature. Norte: “We found almost no energy loss outside of our microchip web: the vibrations move in a circle on the inside and don’t touch the outside. This is somewhat like giving someone a single push on a swing, and having them swing on for nearly a century without stopping.”
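That ringdown measurement converts directly into a quality factor: if the vibration amplitude decays as exp(-t/τ), then Q = π·f₀·τ. A small sketch with invented numbers (the actual resonance frequency and decay time of the TU Delft device are not stated here):

```python
import numpy as np

# Hypothetical resonator: f0 and tau are invented for illustration,
# not the TU Delft device's measured values.
f0 = 100_000.0        # resonance frequency in Hz
tau = 50.0            # amplitude ringdown time constant in seconds

# Simulate the ringdown envelope, then recover tau by a log-linear fit,
# the way one would from measured ringdown data.
t = np.linspace(0.0, 200.0, 2000)
envelope = np.exp(-t / tau)
slope, _ = np.polyfit(t, np.log(envelope), 1)
tau_fit = -1.0 / slope

# For an amplitude decay of exp(-t/tau), Q = pi * f0 * tau.
Q = np.pi * f0 * tau_fit
print(f"Q = {Q:.3g}")   # → Q = 1.57e+07
```

This is why "how long does the swing keep swinging" is the right mental image: at a fixed resonance frequency, a longer ringdown time means proportionally higher Q.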

Implications for fundamental and applied sciences
With their spiderweb-based sensor, the researchers show how this interdisciplinary strategy opens a path to new breakthroughs in science, by combining bio-inspired designs, machine learning and nanotechnology. This novel paradigm has interesting implications for quantum internet, sensing, microchip technologies and fundamental physics: exploring ultra-small forces for example, like gravity or dark matter which are notoriously difficult to measure. According to the researchers, the discovery would not have been possible without the university’s Cohesion grant, which led to this collaboration between nanotechnology and machine learning.

Here’s a link to and a citation for the paper,

Spiderweb Nanomechanical Resonators via Bayesian Optimization: Inspired by Nature and Guided by Machine Learning by Dongil Shin, Andrea Cupertino, Matthijs H. J. de Jong, Peter G. Steeneken, Miguel A. Bessa, and Richard A. Norte. Advanced Materials, Volume 34, Issue 3, 2106248, January 20, 2022. First published online: 25 October 2021. DOI:

This paper is open access.

If spiderwebs can be sensors, can they also think?

It’s called ‘extended cognition’ or the ‘extended mind thesis’ (Wikipedia entry) and the theory holds that the mind is not solely in the brain or even in the body. Predictably, the theory has both its supporters and critics as noted in Joshua Sokol’s article “The Thoughts of a Spiderweb” originally published on May 22, 2017 in Quanta Magazine (Note: Links have been removed),

Millions of years ago, a few spiders abandoned the kind of round webs that the word “spiderweb” calls to mind and started to focus on a new strategy. Before, they would wait for prey to become ensnared in their webs and then walk out to retrieve it. Then they began building horizontal nets to use as a fishing platform. Now their modern descendants, the cobweb spiders, dangle sticky threads below, wait until insects walk by and get snagged, and reel their unlucky victims in.

In 2008, the researcher Hilton Japyassú prompted 12 species of orb spiders collected from all over Brazil to go through this transition again. He waited until the spiders wove an ordinary web. Then he snipped its threads so that the silk drooped to where crickets wandered below. When a cricket got hooked, not all the orb spiders could fully pull it up, as a cobweb spider does. But some could, and all at least began to reel it in with their two front legs.

Their ability to recapitulate the ancient spiders’ innovation got Japyassú, a biologist at the Federal University of Bahia in Brazil, thinking. When the spider was confronted with a problem to solve that it might not have seen before, how did it figure out what to do? “Where is this information?” he said. “Where is it? Is it in her head, or does this information emerge during the interaction with the altered web?”

In February [2017], Japyassú and Kevin Laland, an evolutionary biologist at the University of Saint Andrews, proposed a bold answer to the question. They argued in a review paper, published in the journal Animal Cognition, that a spider’s web is at least an adjustable part of its sensory apparatus, and at most an extension of the spider’s cognitive system.

This would make the web a model example of extended cognition, an idea first proposed by the philosophers Andy Clark and David Chalmers in 1998 to apply to human thought. In accounts of extended cognition, processes like checking a grocery list or rearranging Scrabble tiles in a tray are close enough to memory-retrieval or problem-solving tasks that happen entirely inside the brain that proponents argue they are actually part of a single, larger, “extended” mind.

Among philosophers of mind, that idea has racked up citations, including supporters and critics. And by its very design, Japyassú’s paper, which aims to export extended cognition as a testable idea to the field of animal behavior, is already stirring up antibodies among scientists. …

It seems there is no definitive answer to the question of whether there is an ‘extended mind’ but it’s an intriguing question made (in my opinion) even more so with the spiderweb-inspired sensors from TU Delft.

Regenerative architecture and Michael Pawlyn (a keynote speaker at Vancouver’s [Canada] Zero Waste Conference)

Michael Pawlyn, who founded Exploration Architecture, an architectural practice with a focus on regenerative design, will be in Vancouver during the Zero Waste Conference, September 28-29, 2022. A keynote speaker (from his speaker’s page),

Michael Pawlyn has been described as an expert in regenerative design and biomimicry. He established his firm Exploration Architecture in 2007 to focus on high performance buildings and solutions for the circular economy.

Prior to setting up Exploration, he worked with Grimshaw for ten years and was central to the team that designed the Eden Project.

Michael jointly initiated the widely acclaimed Sahara Forest Project. In 2019, he co-initiated ‘Architects Declare a Climate & Biodiversity Emergency’ which has spread internationally with over 7,000 companies signed up to addressing the planetary crisis.

Since 2018 he has been increasingly providing advice to national governments and large companies on transformative change. He is the author of two books, Biomimicry in Architecture and Flourish: Design Paradigms for Our Planetary Emergency, co-authored with Sarah Ichioka.

You can find out more about Pawlyn and biomimicry in a November 17, 2011 interview by Karissa Rosenfield for ArchDaily,

Why were you drawn to biomimicry? As a teenager I was torn between studying architecture and biology and eventually chose the former. I was also quite politicized about environmental issues in my early teens after a relative gave me a copy of the Club of Rome’s “Blueprint for Survival”. When I joined Grimshaw to work on the Eden Project, I realized that there was a way to bring these strands together in pursuit of sustainable architecture inspired by nature.

What are some of the most interesting examples, apart from the Eden Project, of existing architecture that uses biomimicry as its guiding principle? Pier Luigi Nervi’s Palazzetto dello Sport, an indoor arena in Rome, is a masterpiece of efficiency inspired by giant Amazon water lilies. Many of Nervi’s projects were won in competitions and the secret to his success was his ability to produce the most cost-effective schemes. In a satisfying parallel with the refining process of evolution, the combination of ingenuity and biomimicry led to a remarkable efficiency of resources.

The Eastgate Centre in Harare, Zimbabwe by Mick Pearce, is based on termite mounds. It manages to create comfortable conditions for the people inside without air-conditioning in a tropical environment.

If you’re curious about the conference, it’s the 2022 Zero Waste Conference—A Future Without Waste: Regenerative and waste-free by design on September 28 & 29 in Vancouver, BC.

Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (2 of 2): Meditations

Dear friend,

I thought it best to break this up a bit. There are a couple of ‘objects’ still to be discussed but this is mostly the commentary part of this letter to you. (Here’s a link for anyone who stumbled here but missed Part 1.)

Ethics, the natural world, social justice, eeek, and AI

Dorothy Woodend in her March 10, 2022 review for The Tyee) suggests some ethical issues in her critique of the ‘bee/AI collaboration’ and she’s not the only one with concerns. UNESCO (United Nations Educational, Scientific and Cultural Organization) has produced global recommendations for ethical AI (see my March 18, 2022 posting). More recently, there’s “Racist and sexist robots have flawed AI,” a June 23, 2022 posting, where researchers prepared a conference presentation and paper about deeply flawed AI still being used in robots.

Ultimately, the focus is always on humans and Woodend has extended the ethical AI conversation to include insects and the natural world. In short, something less human-centric.

My friend, this reference to the de Young exhibit may seem off topic but I promise it isn’t in more ways than one. The de Young Museum in San Francisco also held an AI and art show, “Uncanny Valley: Being Human in the Age of AI” (February 22, 2020 – June 27, 2021). From the exhibitions page,

In today’s AI-driven world, increasingly organized and shaped by algorithms that track, collect, and evaluate our data, the question of what it means to be human [emphasis mine] has shifted. Uncanny Valley is the first major exhibition to unpack this question through a lens of contemporary art and propose new ways of thinking about intelligence, nature, and artifice. [emphasis mine]

Courtesy: de Young Museum [downloaded from]

As you can see, it hinted (perhaps?) at an attempt to see beyond human-centric AI. (BTW, I featured this ‘Uncanny Valley’ show in my February 25, 2020 posting where I mentioned Stephanie Dinkins [featured below] and other artists.)

Social justice

While the VAG show doesn’t see much past humans and AI, it does touch on social justice. In particular there’s Pod 15 featuring the Algorithmic Justice League (AJL). The group “combine[s] art and research to illuminate the social implications and harms of AI” as per their website’s homepage.

In Pod 9, Stephanie Dinkins’ video work with a robot (Bina48), which was also part of the de Young Museum ‘Uncanny Valley’ show, addresses some of the same issues.

Still of Stephanie Dinkins, “Conversations with Bina48,” 2014–present. Courtesy of the artist [downloaded from]

From the de Young Museum’s Stephanie Dinkins “Conversations with Bina48” April 23, 2020 article by Janna Keegan (Dinkins submitted the same work you see at the VAG show), Note: Links have been removed,

Transdisciplinary artist and educator Stephanie Dinkins is concerned with fostering AI literacy. The central thesis of her social practice is that AI, the internet, and other data-based technologies disproportionately impact people of color, LGBTQ+ people, women, and disabled and economically disadvantaged communities—groups rarely given a voice in tech’s creation. Dinkins strives to forge a more equitable techno-future by generating AI that includes the voices of multiple constituencies …

The artist’s ongoing Conversations with Bina48 takes the form of a series of interactions with the social robot Bina48 (Breakthrough Intelligence via Neural Architecture, 48 exaflops per second). The machine is the brainchild of Martine Rothblatt, an entrepreneur in the field of biopharmaceuticals who, with her wife, Bina, cofounded the Terasem Movement, an organization that seeks to extend human life through cybernetic means. In 2007 Martine commissioned Hanson Robotics to create a robot whose appearance and consciousness simulate Bina’s. The robot was released in 2010, and Dinkins began her work with it in 2014.

Part psychoanalytical discourse, part Turing test, Conversations with Bina48 also participates in a larger dialogue regarding bias and representation in technology. Although Bina Rothblatt is a Black woman, Bina48 was not programmed with an understanding of its Black female identity or with knowledge of Black history. Dinkins’s work situates this omission amid the larger tech industry’s lack of diversity, drawing attention to the problems that arise when a roughly homogenous population creates technologies deployed globally. When this occurs, writes art critic Tess Thackara, “the unconscious biases of white developers proliferate on the internet, mapping our social structures and behaviors onto code and repeating imbalances and injustices that exist in the real world.” One of the most appalling and public of these instances occurred when a Google Photos image-recognition algorithm mislabeled the faces of Black people as “gorillas.”


You will find as you go through the ‘imitation game’ a pod with a screen showing your movements through the rooms in realtime. The installation is called “Creepers” (2021-22). The student team from Vancouver’s Centre for Digital Media (CDM) describes their project this way, from the CDM’s AI-driven Installation Piece for the Vancouver Art Gallery webpage,

Project Description

Kaleidoscope [team name] is designing an installation piece that harnesses AI to collect and visualize exhibit visitor behaviours, and interactions with art, in an impactful and thought-provoking way.

There’s no warning that you’re being tracked and you can see they’ve used facial recognition software to track your movements through the show. It’s claimed on the pod’s signage that they are deleting the data once you’ve left.

‘Creepers’ is an interesting approach to the ethics of AI. The name suggests that even the student designers were aware it was problematic.

For the curious, there’s a description of the other VAG ‘imitation game’ installations provided by CDM students on the ‘Master of Digital Media Students Develop Revolutionary Installations for Vancouver Art Gallery AI Exhibition‘ webpage.

In recovery from an existential crisis (meditations)

There’s something greatly ambitious about “The Imitation Game: Visual Culture in the Age of Artificial Intelligence,” and walking up the VAG’s grand staircase affirms that ambition. Bravo to the two curators, Grenville and Entis, for an exhibition that presents a survey (or overview) of artificial intelligence, and its use in and impact on creative visual culture.

I’ve already enthused over the history (specifically Turing, Lovelace, Ovid), admitted to being mesmerized by Scott Eaton’s sculpture/AI videos, and confessed to a fascination (and mild repulsion) regarding Oxman’s honeycombs.

It’s hard to remember all of the ‘objects’ as the curators have offered a jumble of work, almost all of them on screens. Already noted, there’s Norbert Wiener’s The Moth (1949) and there are also a number of other computer-based artworks from the 1960s and 1970s. Plus, you’ll find works utilizing a GAN (generative adversarial network), an AI agent that is explained in the exhibit.

It’s worth going more than once to the show as there is so much to experience.

Why did they do that?

Dear friend, I’ve already commented on the poor flow through the show, and it’s hard to tell if the curators intended the experience to be disorienting, but it is, to the point of chaos, especially when the exhibition is crowded.

I’ve seen Grenville’s shows before. In particular there was “MashUp: The Birth of Modern Culture, a massive survey documenting the emergence of a mode of creativity that materialized in the late 1800s and has grown to become the dominant model of cultural production in the 21st century” and there was “KRAZY! The Delirious World of Anime + Manga + Video Games + Art.” As you can see from the description, he pulls together disparate works and ideas into a show for you to ‘make sense’ of them.

One of the differences between those shows and “The Imitation Game: …” is that most of us have some familiarity, whether we like it or not, with modern art/culture and anime/manga/etc. and can try to ‘make sense’ of it.

By contrast, artificial intelligence (which even experts have difficulty defining) occupies an entirely different set of categories; all of them associated with science/technology. This makes for a different kind of show so the curators cannot rely on the audience’s understanding of basics. It’s effectively an art/sci or art/tech show and, I believe, the first of its kind at the Vancouver Art Gallery. Unfortunately, the curators don’t seem to have changed their approach to accommodate that difference.

AI is also at the centre of a current panic over job loss, loss of personal agency, automated racism and sexism, etc. which makes the experience of viewing the show a little tense. In this context, their decision to commission and use ‘Creepers’ seems odd.

Where were Ai-Da and Dall-E-2 and the others?

Oh friend, I was hoping for a robot. Those Roomba paintbots didn’t do much for me. All they did was lie there on the floor.

To be blunt, I wanted some fun and perhaps a bit of wonder and maybe a little vitality. I wasn’t necessarily expecting Ai-Da, an artistic robot, but something three-dimensional and fun in this very flat, screen-oriented show would have been nice.

Ai-Da was at the Glastonbury Festival in the U.K. from 23-26 June 2022. Here’s Ai-Da and her Billie Eilish (one of the Glastonbury 2022 headliners) portrait. [downloaded from]

Ai-Da was first featured here in a December 17, 2021 posting about her performance of poetry she had written in honour of the 700th anniversary of poet Dante Alighieri’s death.

Named in honour of Ada Lovelace, Ai-Da visited the 2022 Venice Biennale as Leah Henrickson and Simone Natale describe in their May 12, 2022 article for Fast Company (Note: Links have been removed),

Ai-Da sits behind a desk, paintbrush in hand. She looks up at the person posing for her, and then back down as she dabs another blob of paint onto the canvas. A lifelike portrait is taking shape. If you didn’t know a robot produced it, this portrait could pass as the work of a human artist.

Ai-Da is touted as the “first robot to paint like an artist,” and an exhibition of her work, called Leaping into the Metaverse, opened at the Venice Biennale.

Ai-Da produces portraits of sitting subjects using a robotic hand attached to her lifelike feminine figure. She’s also able to talk, giving detailed answers to questions about her artistic process and attitudes toward technology. She even gave a TEDx talk about “The Intersection of Art and AI” in Oxford a few years ago. While the words she speaks are programmed, Ai-Da’s creators have also been experimenting with having her write and perform her own poetry.

She has her own website.

If not Ai-Da, what about DALL-E 2? Aaron Hertzmann’s June 20, 2022 commentary, “Give this AI a few words of description and it produces a stunning image – but is it art?” investigates for Salon (Note: Links have been removed),

DALL-E 2 is a new neural network [AI] algorithm that creates a picture from a short phrase or sentence that you provide. The program, which was announced by the artificial intelligence research laboratory OpenAI in April 2022, hasn’t been released to the public. But a small and growing number of people – myself included – have been given access to experiment with it.

As a researcher studying the nexus of technology and art, I was keen to see how well the program worked. After hours of experimentation, it’s clear that DALL-E – while not without shortcomings – is leaps and bounds ahead of existing image generation technology. It raises immediate questions about how these technologies will change how art is made and consumed. It also raises questions about what it means to be creative when DALL-E 2 seems to automate so much of the creative process itself.

A July 4, 2022 article “DALL-E, Make Me Another Picasso, Please” by Laura Lane for The New Yorker has a rebuttal to Ada Lovelace’s contention that creativity is uniquely human (Note: A link has been removed),

“There was this belief that creativity is this deeply special, only-human thing,” Sam Altman, OpenAI’s C.E.O., explained the other day. Maybe not so true anymore, he said. Altman, who wore a gray sweater and had tousled brown hair, was videoconferencing from the company’s headquarters, in San Francisco. DALL-E is still in a testing phase. So far, OpenAI has granted access to a select group of people—researchers, artists, developers—who have used it to produce a wide array of images: photorealistic animals, bizarre mashups, punny collages. Asked by a user to generate “a plate of various alien fruits from another planet photograph,” DALL-E returned something kind of like rambutans. “The rest of mona lisa” is, according to DALL-E, mostly just one big cliff. Altman described DALL-E as “an extension of your own creativity.”

There are other AI artists. In my August 16, 2019 posting, I had this,

AI artists first hit my radar in August 2018 when Christie’s Auction House advertised an art auction of a ‘painting’ by an algorithm (artificial intelligence). There’s more in my August 31, 2018 posting but, briefly, a French art collective, Obvious, submitted a painting, “Portrait of Edmond de Belamy,” created by an artificial intelligence agent, to be sold for an estimated $7,000 – $10,000. They weren’t even close. According to Ian Bogost’s March 6, 2019 article for The Atlantic, the painting sold for $432,500 in October 2018.

That posting also included AI artist, AICAN. Both artist-AI agents (Obvious and AICAN) are based on GANs (generative adversarial networks) for learning and eventual output. Both artist-AI agents work independently or with human collaborators on art works that are available for purchase.

As might be expected, not everyone is excited about AI and visual art. Sonja Drimmer, Professor of Medieval Art, University of Massachusetts at Amherst, provides another perspective on AI, visual art, and, her specialty, art history in her November 1, 2021 essay for The Conversation (Note: Links have been removed),

Over the past year alone, I’ve come across articles highlighting how artificial intelligence recovered a “secret” painting of a “lost lover” of Italian painter Modigliani, “brought to life” a “hidden Picasso nude”, “resurrected” Austrian painter Gustav Klimt’s destroyed works and “restored” portions of Rembrandt’s 1642 painting “The Night Watch.” The list goes on.

As an art historian, I’ve become increasingly concerned about the coverage and circulation of these projects.

They have not, in actuality, revealed one secret or solved a single mystery.

What they have done is generate feel-good stories about AI.

Take the reports about the Modigliani and Picasso paintings.

These were projects executed by the same company, Oxia Palus, which was founded not by art historians but by doctoral students in machine learning.

In both cases, Oxia Palus relied upon traditional X-rays, X-ray fluorescence and infrared imaging that had already been carried out and published years prior – work that had revealed preliminary paintings beneath the visible layer on the artists’ canvases.

The company edited these X-rays and reconstituted them as new works of art by applying a technique called “neural style transfer.” This is a sophisticated-sounding term for a program that breaks works of art down into extremely small units, extrapolates a style from them and then promises to recreate images of other content in that same style.
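For the technically curious, the “extrapolates a style” step Drimmer describes is usually done by summarizing a network’s feature activations into channel-correlation (Gram) matrices, and then judging how closely another image matches that ‘style’ by comparing the matrices. Here is a minimal, illustrative numpy sketch of just that style-summary step; the random arrays stand in for the convolutional features a real system would extract with a pretrained network, so the shapes and data are hypothetical:

```python
import numpy as np

def gram_matrix(features):
    """Correlations between feature channels -- the 'style' summary.

    features: array of shape (channels, height, width), standing in for
    the activations of one convolutional layer for one image.
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)      # each row: one channel, flattened
    return flat @ flat.T / (c * h * w)     # (c, c) channel-correlation matrix

def style_loss(features_a, features_b):
    """Mean squared difference between the two Gram matrices."""
    g_a = gram_matrix(features_a)
    g_b = gram_matrix(features_b)
    return float(np.mean((g_a - g_b) ** 2))

# Toy 'feature maps' (hypothetical data, not real CNN activations):
rng = np.random.default_rng(0)
style_image_feats = rng.standard_normal((8, 16, 16))
candidate_feats = rng.standard_normal((8, 16, 16))

print(style_loss(style_image_feats, style_image_feats))   # 0.0 -- identical style
print(style_loss(style_image_feats, candidate_feats) > 0) # True -- styles differ
```

In a full style-transfer system, a loss like this (computed across several network layers) is minimized by gradient descent over the pixels of a generated image, which is how the program “promises to recreate images of other content in that same style.”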

As you can ‘see,’ my friend, the topic of AI and visual art is a juicy one. In fact, I have another example in my June 27, 2022 posting, which is titled “Art appraised by algorithm.” So, Grenville’s and Entis’ decision to focus on AI and its impact on visual culture is quite timely.

Visual culture: seeing into the future

The VAG Imitation Game webpage lists these categories of visual culture, “animation, architecture, art, fashion, graphic design, urban design and video games …,” as being represented in the show. Movies and visual art, not mentioned in the write-up, are represented, while theatre and the other performing arts are not mentioned or represented. That’s not a surprise.

In addition to an area of science/technology that’s not well understood even by experts, the curators took on the truly amorphous (and overwhelming) topic of visual culture. Given that even writing this commentary has been a challenge, I imagine pulling the show together was quite the task.

Grenville often grounds his shows in a history of the subject and, this time, it seems especially striking. You’re in a building that is effectively a 19th century construct and in galleries that reflect a 20th century ‘white cube’ aesthetic, while looking for clues into the 21st century future of visual culture employing technology that has its roots in the 19th century and, to some extent, began to flower in the mid-20th century.

Chung’s collaboration is one of the few ‘optimistic’ notes about the future and, as noted earlier, it bears a resemblance to Wiener’s 1949 ‘Moth’.

Overall, it seems we are being cautioned about the future. For example, Oxman’s work seems bleak (bees with no flowers to pollinate and living in an eternal spring). Adding in ‘Creepers’ and surveillance along with issues of bias and social injustice reflects hesitation and concern about what we will see, who sees it, and how it will be represented visually.

Learning about robots, automatons, artificial intelligence, and more

I wish the Vancouver Art Gallery (and Vancouver’s other art galleries) would invest a little more in audience education. A couple of tours during the week, by someone who may or may not know what they’re talking about, do not suffice. The extra material about Stephanie Dinkins and her work (“Conversations with Bina48,” 2014–present) came from the de Young Museum’s website. In my July 26, 2021 commentary on North Vancouver’s Polygon Gallery 2021 show “Interior Infinite,” I found background information for artist Zanele Muholi on the Tate Modern’s website. There is nothing on the VAG website that helps you to gain some perspective on the artists’ works.

It seems to me that if the VAG wants to be considered world class, it should conduct itself accordingly, and beefing up its website with background information about its current shows would be a good place to start.

Robots, automata, and artificial intelligence

Prior to 1921, robots were known exclusively as automatons. These days, the word ‘automaton’ (or ‘automata’ in the plural) seems to be used to describe purely mechanical representations of humans from over 100 years ago whereas the word ‘robot’ can be either ‘humanlike’ or purely machine, e.g. a mechanical arm that performs the same function over and over. I have a good February 24, 2017 essay on automatons by Miguel Barral for OpenMind BBVA*, which provides some insight into the matter,

The concept of robot is relatively recent. The idea was introduced in 1921 by the Czech writer Karel Capek in his work R.U.R to designate a machine that performs tasks in place of man. But their predecessors, the automatons (from the Greek automata, or “mechanical device that works by itself”), have been the object of desire and fascination since antiquity. Some of the greatest inventors in history, such as Leonardo Da Vinci, have contributed to our fascination with these fabulous creations:

The Al-Jazari automatons

The earliest examples of known automatons appeared in the Islamic world in the 12th and 13th centuries. In 1206, the Arab polymath Al-Jazari, whose creations were known for their sophistication, described some of his most notable automatons: an automatic wine dispenser, a soap and towels dispenser and an orchestra-automaton that operated by the force of water. This latter invention was meant to liven up parties and banquets with music while floating on a pond, lake or fountain.

As the water flowed, it started a rotating drum with pegs that, in turn, moved levers whose movement produced different sounds and movements. As the pegs responsible for the musical notes could be exchanged for different ones in order to interpret another melody, it is considered one of the first programmable machines in history.

If you’re curious about automata, my friend, I found this Sept. 26, 2016 ABC News Radio news item about singer Roger Daltrey and his wife Heather’s auction of their collection of 19th century French automata (there’s an embedded video showcasing these extraordinary works of art). For more about automata, robots, and androids, there’s an excellent May 4, 2022 article by James Vincent, ‘A visit to the human factory; How to build the world’s most realistic robot‘ for The Verge; Vincent’s article is about Engineered Arts, the UK-based company that built Ai-Da.

AI is often used interchangeably with ‘robot’ but they aren’t the same. Not all robots have AI integrated into their processes. At its simplest, AI is an algorithm or set of algorithms, which may ‘live’ in a CPU and be effectively invisible, or ‘live’ in or make use of some kind of machine and/or humanlike body. As the experts have noted, artificial intelligence is a slippery concept.

*OpenMind BBVA is a Spanish multinational financial services company, Banco Bilbao Vizcaya Argentaria (BBVA), which runs the non-profit project, OpenMind (About us page) to disseminate information on robotics and so much more.*

You can’t always get what you want

My friend,

I expect many of the show’s shortcomings (as perceived by me) are due to money and/or scheduling issues. For example, Ai-Da was at the Venice Biennale and, if there was a choice between the VAG and the Biennale, I know where I’d be.

Even with those caveats in mind, it is a bit surprising that there were no examples of wearable technology. For example, Toronto’s Tapestry Opera recently performed R.U.R. A Torrent of Light (based on the word ‘robot’ from Karel Čapek’s play, R.U.R., ‘Rossumovi Univerzální Roboti’), from my May 24, 2022 posting,

I have more about tickets prices, dates, and location later in this post but first, here’s more about the opera and the people who’ve created it from the Tapestry Opera’s ‘R.U.R. A Torrent of Light’ performance webpage,

“This stunning new opera combines dance, beautiful multimedia design, a chamber orchestra including 100 instruments creating a unique electronica-classical sound, and wearable technology [emphasis mine] created with OCAD University’s Social Body Lab, to create an immersive and unforgettable science-fiction experience.”

And, from later in my posting,

“Despite current stereotypes, opera was historically a launchpad for all kinds of applied design technologies. [emphasis mine] Having the opportunity to collaborate with OCAD U faculty is an invigorating way to reconnect to that tradition and foster connections between art, music and design, [emphasis mine]” comments the production’s Director Michael Hidetoshi Mori, who is also Tapestry Opera’s Artistic Director. 

That last quote brings me back to my comment about theatre and the performing arts not being part of the show. Of course, the curators couldn’t do it all, but a website with my hoped-for background and additional information could have helped to solve the problem.

The absence of the theatrical and performing arts in the VAG’s ‘Imitation Game’ is a bit surprising, as the Council of Canadian Academies (CCA), in its third assessment, “Competing in a Global Innovation Economy: The Current State of R&D in Canada,” released in 2018, noted this (from my April 12, 2018 posting),

Canada, relative to the world, specializes in subjects generally referred to as the humanities and social sciences (plus health and the environment), and does not specialize as much as others in areas traditionally referred to as the physical sciences and engineering. Specifically, Canada has comparatively high levels of research output in Psychology and Cognitive Sciences, Public Health and Health Services, Philosophy and Theology, Earth and Environmental Sciences, and Visual and Performing Arts. [emphasis mine] It accounts for more than 5% of world research in these fields. Conversely, Canada has lower research output than expected in Chemistry, Physics and Astronomy, Enabling and Strategic Technologies, Engineering, and Mathematics and Statistics. The comparatively low research output in core areas of the natural sciences and engineering is concerning, and could impair the flexibility of Canada’s research base, preventing research institutions and researchers from being able to pivot to tomorrow’s emerging research areas. [p. xix Print; p. 21 PDF]


My friend,

I was a little surprised that the show was so centered on work from the US given that Grenville has curated at least one show with significant input from artists based in Asia. Both Japan and Korea are very active with regard to artificial intelligence and it’s hard to believe that their artists haven’t kept pace. (I’m not as familiar with China and its AI efforts, other than in the field of facial recognition, but it’s hard to believe their artists aren’t experimenting.)

The Americans, of course, are very important developers in the field of AI but they are not alone, and it would have been nice to have seen something from Asia and/or Africa and/or one of the other Americas. In fact, anything which takes us out of the same old, same old. (Luba Elliott wrote an essay (2019/2020/2021?), “Artificial Intelligence Art from Africa and Black Communities Worldwide,” on Aya Data if you want to get a sense of some of the activity on the African continent. Elliott does seem to conflate Africa and Black Communities; for some clarity, you may want to check out the Wikipedia entry on Africanfuturism, which contrasts with this August 12, 2020 essay by Donald Maloba, “What is Afrofuturism? A Beginner’s Guide.” Maloba also conflates the two.)

As it turns out, Luba Elliott presented at the 2019 Montréal Digital Spring event, which brings me to Canada’s artificial intelligence and arts scene.

I promise I haven’t turned into a flag-waving zealot, my friend. It’s just odd there isn’t a bit more given that machine learning was pioneered at the University of Toronto. Here’s more about that (from the Wikipedia entry for Geoffrey Hinton),

Geoffrey Everest Hinton CC FRS FRSC[11] (born 6 December 1947) is a British-Canadian cognitive psychologist and computer scientist, most noted for his work on artificial neural networks.

Hinton received the 2018 Turing Award, together with Yoshua Bengio [Canadian scientist] and Yann LeCun, for their work on deep learning.[24] They are sometimes referred to as the “Godfathers of AI” and “Godfathers of Deep Learning”,[25][26] and have continued to give public talks together.[27][28]

Some of Hinton’s work was started in the US but since 1987, he has pursued his interests at the University of Toronto. He wasn’t proven right until 2012. Katrina Onstad’s February 29, 2018 article (Mr. Robot) for Toronto Life is a gripping read about Hinton and his work on neural networks. BTW, Yoshua Bengio (co-Godfather) is a Canadian scientist at the Université de Montréal and Yann LeCun (co-Godfather) is a French scientist at New York University.

Then, there’s another contribution: our government was the first in the world to develop a national artificial intelligence strategy. Adding those developments to the CCA ‘State of Science’ report findings about the visual and performing arts, is there another word besides ‘odd’ to describe the lack of Canadian voices?

You’re going to point out the installation by Ben Bogart (a member of Simon Fraser University’s Metacreation Lab for Creative AI and an instructor at the Emily Carr University of Art + Design (ECU)) but it’s based on the iconic US sci-fi film, 2001: A Space Odyssey. As for the other Canadian, Sougwen Chung, she left Canada pretty quickly to get her undergraduate degree in the US and has since moved to the UK. (You could describe hers as the quintessential success story, i.e., moving from Canada only to get noticed here after success elsewhere.)

Of course, there are the CDM student projects but the projects seem less like an exploration of visual culture than an exploration of technology and industry requirements, from the ‘Master of Digital Media Students Develop Revolutionary Installations for Vancouver Art Gallery AI Exhibition‘ webpage, Note: A link has been removed,

In 2019, Bruce Grenville, Senior Curator at Vancouver Art Gallery, approached [the] Centre for Digital Media to collaborate on several industry projects for the forthcoming exhibition. Four student teams tackled the project briefs over the course of the next two years and produced award-winning installations that are on display until October 23 [2022].

Basically, my friend, it would have been nice to see other voices or, at the least, an attempt at representing other voices and visual cultures informed by AI. As for Canadian contributions, maybe put something on the VAG website?

Playing well with others

It’s always a mystery to me why the Vancouver cultural scene seems to consist of a set of silos or closely guarded kingdoms. Reaching out to the public library and other institutions such as Science World might have cost time but could have enhanced the show.

For example, one of the branches of the New York Public Library ran a programme called, “We are AI” in March 2022 (see my March 23, 2022 posting about the five-week course, which was run as a learning circle). The course materials are available for free (We are AI webpage) and I imagine that adding a ‘visual culture module’ wouldn’t be that difficult.

There is one (rare) example of some Vancouver cultural institutions getting together to offer an art/science programme, and that was in 2017 when the Morris and Helen Belkin Gallery (at the University of British Columbia; UBC) hosted an exhibition of Santiago Ramon y Cajal’s work (see my Sept. 11, 2017 posting about the gallery show). Along with that show, there was an ancillary event held by the folks at Café Scientifique at Science World, featuring a panel of professionals from UBC’s Faculty of Medicine and Dept. of Psychology discussing Cajal’s work.

In fact, where were the science and technology communities for this show?

On a related note, the 2022 ACM SIGGRAPH conference (August 7 – 11, 2022) is being held in Vancouver. (ACM is the Association for Computing Machinery; SIGGRAPH is for Special Interest Group on Computer Graphics and Interactive Techniques.) SIGGRAPH has been holding conferences in Vancouver every few years since at least 2011.

At this year’s conference, they have at least two sessions that indicate interests similar to the VAG’s. First, there’s Immersive Visualization for Research, Science and Art, which includes AI and machine learning along with other related topics. There’s also Frontiers Talk: Art in the Age of AI: Can Computers Create Art?

This is both an international conference and an exhibition (of art) and the whole thing seems to have kicked off on July 25, 2022. If you’re interested, the programme can be found here and registration here.

The last time SIGGRAPH was here, the organizers seemed interested in outreach and they offered some free events.

In the end

It was good to see the show. The curators brought together some exciting material. As is always the case, there were some missed opportunities and a few blind spots. But all is not lost.

On July 27, 2022, the VAG held a virtual event with an artist,

Gwenyth Chao to learn more about what happened to the honeybees and hives in Oxman’s Synthetic Apiary project. As a transdisciplinary artist herself, Chao will also discuss the relationship between art, science, technology and design. She will then guide participants to create a space (of any scale, from insect to human) inspired by patterns found in nature.

Hopefully there will be more events inspired by specific ‘objects’. Meanwhile, on August 12, 2022, the VAG is hosting,

… in partnership with the Canadian Music Centre BC, New Music at the Gallery is a live concert series hosted by the Vancouver Art Gallery that features an array of musicians and composers who draw on contemporary art themes.

Highlighting a selection of twentieth- and twenty-first-century music compositions, this second concert, inspired by the exhibition The Imitation Game: Visual Culture in the Age of Artificial Intelligence, will spotlight The Iliac Suite (1957), the first piece ever written using only a computer, and Kaija Saariaho’s Terra Memoria (2006), which is in a large part dependent on a computer-generated musical process.

It would be lovely if they could include an Ada Lovelace Day event. This is an international celebration held on October 11, 2022.

Do go. Do enjoy, my friend.

Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (1 of 2): The Objects

To my imaginary AI friend

Dear friend,

I thought you might be amused by these Roomba-like* paintbots at the Vancouver Art Gallery’s (VAG) latest exhibition, “The Imitation Game: Visual Culture in the Age of Artificial Intelligence” (March 5, 2022 – October 23, 2022).

Sougwen Chung, Omnia per Omnia, 2018, video (excerpt), Courtesy of the Artist

*A Roomba is a robot vacuum cleaner produced and sold by iRobot.

As far as I know, this is the Vancouver Art Gallery’s first art/science or art/technology exhibit and it is an alternately fascinating, exciting, and frustrating take on artificial intelligence and its impact on the visual arts. Curated by Bruce Grenville, VAG Senior Curator, and Glenn Entis, Guest Curator, the show features 20 ‘objects’ designed to both introduce viewers to the ‘imitation game’ and to challenge them. From the VAG Imitation Game webpage,

The Imitation Game surveys the extraordinary uses (and abuses) of artificial intelligence (AI) in the production of modern and contemporary visual culture around the world. The exhibition follows a chronological narrative that first examines the development of artificial intelligence, from the 1950s to the present [emphasis mine], through a precise historical lens. Building on this foundation, it emphasizes the explosive growth of AI across disciplines, including animation, architecture, art, fashion, graphic design, urban design and video games, over the past decade. Revolving around the important roles of machine learning and computer vision in AI research and experimentation, The Imitation Game reveals the complex nature of this new tool and demonstrates its importance for cultural production.

And now …

As you’ve probably guessed, my friend, you’ll find a combination of both background information and commentary on the show.

I’ve initially focused on two people (a scientist and a mathematician) who were seminal thinkers about machines, intelligence, creativity, and humanity. I’ve also provided some information about the curators, which hopefully gives you some insight into the show.

As for the show itself, you’ll find a few of the ‘objects’ highlighted, with one of them investigated at more length. The curators devoted some of the show to ethical and social justice issues; accordingly, the Vancouver Art Gallery hosted the University of British Columbia’s “Speculative Futures: Artificial Intelligence Symposium” on April 7, 2022,

Presented in conjunction with the exhibition The Imitation Game: Visual Culture in the Age of Artificial Intelligence, the Speculative Futures Symposium examines artificial intelligence and the specific uses of technology in its multifarious dimensions. Across four different panel conversations, leading thinkers of today will explore the ethical implications of technology and discuss how they are working to address these issues in cultural production.

So, you’ll find more on these topics here too.

And for anyone else reading this (not you, my friend who is ‘strong’ AI and not similar to the ‘weak’ AI found in this show), there is a description of ‘weak’ and ‘strong’ AI on the webpage, Note: A link has been removed,

There are two types of AI: weak AI and strong AI.

Weak, sometimes called narrow, AI is less intelligent as it cannot work without human interaction and focuses on a more narrow, specific, or niched purpose. …

Strong AI on the other hand is in fact comparable to the fictitious AIs we see in media like the terminator. The theoretical Strong AI would be equivalent or greater to human intelligence.


My dear friend, I hope you will enjoy.

The Imitation Game and ‘mad, bad, and dangerous to know’

In some circles, it’s better known as ‘The Turing Test’; the Vancouver Art Gallery’s ‘Imitation Game’ hosts a copy of Alan Turing’s foundational paper for establishing whether artificial intelligence is possible (I thought this was pretty exciting).

Here’s more from The Turing Test essay by Graham Oppy and David Dowe for the Stanford Encyclopedia of Philosophy,

The phrase “The Turing Test” is most properly used to refer to a proposal made by Turing (1950) as a way of dealing with the question whether machines can think. According to Turing, the question whether machines can think is itself “too meaningless” to deserve discussion (442). However, if we consider the more precise—and somehow related—question whether a digital computer can do well in a certain kind of game that Turing describes (“The Imitation Game”), then—at least in Turing’s eyes—we do have a question that admits of precise discussion. Moreover, as we shall see, Turing himself thought that it would not be too long before we did have digital computers that could “do well” in the Imitation Game.

The phrase “The Turing Test” is sometimes used more generally to refer to some kinds of behavioural tests for the presence of mind, or thought, or intelligence in putatively minded entities. …

Next to the display holding Turing’s paper is another display with an excerpt from Turing explaining how he believed Ada Lovelace would have responded to the idea that machines could think, based on a copy of some of her writing (also on display). She proposed that creativity, not thinking, is what sets people apart from machines. (See the April 17, 2020 article “Thinking Machines? Has the Lovelace Test Been Passed?”)

It’s like a dialogue between two seminal thinkers who lived about 100 years apart; Lovelace, born in 1815 and dead in 1852, and Turing, born in 1912 and dead in 1954. Both have fascinating back stories (more about those later) and both played roles in how computers and artificial intelligence are viewed.

Adding some interest to this walk down memory lane is a third display, an illustration of the ‘Mechanical Turk‘, a chess playing machine that made the rounds in Europe from 1770 until it was destroyed in 1854. A hoax that fooled people for quite a while, it is a reminder that we’ve been interested in intelligent machines for centuries. (Friend, Turing and Lovelace and the Mechanical Turk are found in Pod 1.)

Back story: Turing and the apple

Turing is credited with being instrumental in breaking the German Enigma code during World War II and helping to end the war. I find it odd that he ended up at the University of Manchester in the post-war years; one would expect him to have been at Oxford or Cambridge. At any rate, he died in 1954 of cyanide poisoning, two years after he was arrested for being homosexual and convicted of indecency. Given the choice of incarceration or chemical castration, he chose the latter. There is, to this day, debate about whether or not it was suicide. Here’s how his death is described in his Wikipedia entry (Note: Links have been removed),

On 8 June 1954, at his house at 43 Adlington Road, Wilmslow,[150] Turing’s housekeeper found him dead. He had died the previous day at the age of 41. Cyanide poisoning was established as the cause of death.[151] When his body was discovered, an apple lay half-eaten beside his bed, and although the apple was not tested for cyanide,[152] it was speculated that this was the means by which Turing had consumed a fatal dose. An inquest determined that he had committed suicide. Andrew Hodges and another biographer, David Leavitt, have both speculated that Turing was re-enacting a scene from the Walt Disney film Snow White and the Seven Dwarfs (1937), his favourite fairy tale. Both men noted that (in Leavitt’s words) he took “an especially keen pleasure in the scene where the Wicked Queen immerses her apple in the poisonous brew”.[153] Turing’s remains were cremated at Woking Crematorium on 12 June 1954,[154] and his ashes were scattered in the gardens of the crematorium, just as his father’s had been.[155]

Philosopher Jack Copeland has questioned various aspects of the coroner’s historical verdict. He suggested an alternative explanation for the cause of Turing’s death: the accidental inhalation of cyanide fumes from an apparatus used to electroplate gold onto spoons. The potassium cyanide was used to dissolve the gold. Turing had such an apparatus set up in his tiny spare room. Copeland noted that the autopsy findings were more consistent with inhalation than with ingestion of the poison. Turing also habitually ate an apple before going to bed, and it was not unusual for the apple to be discarded half-eaten.[156] Furthermore, Turing had reportedly borne his legal setbacks and hormone treatment (which had been discontinued a year previously) “with good humour” and had shown no sign of despondency prior to his death. He even set down a list of tasks that he intended to complete upon returning to his office after the holiday weekend.[156] Turing’s mother believed that the ingestion was accidental, resulting from her son’s careless storage of laboratory chemicals.[157] Biographer Andrew Hodges theorised that Turing arranged the delivery of the equipment to deliberately allow his mother plausible deniability with regard to any suicide claims.[158]

The US Central Intelligence Agency (CIA) also has an entry for Alan Turing; dated April 10, 2015, it’s titled “The Enigma of Alan Turing.”

Back story: Ada Byron Lovelace, the 2nd generation of ‘mad, bad, and dangerous to know’

A mathematician and genius in her own right, Ada Lovelace’s father George Gordon Byron, better known as the poet Lord Byron, was notoriously described as ‘mad, bad, and dangerous to know’.

Lovelace too could have been ‘mad, bad, …’ but she is described less memorably as “… manipulative and aggressive, a drug addict, a gambler and an adulteress, …” as mentioned in my October 13, 2015 posting. It marked the 200th anniversary of her birth, which was celebrated with a British Broadcasting Corporation (BBC) documentary and an exhibit at the Science Museum in London, UK.

She belongs in the Vancouver Art Gallery’s show along with Alan Turing due to her prediction that computers could be made to create music. She also published the first computer program. Her feat is astonishing when you know only one working model (1/7th of the proposed final size) of a computer was ever produced. (The machine invented by Charles Babbage was known as a difference engine. You can find out more about the Difference engine on Wikipedia and about Babbage’s proposed second invention, the Analytical engine.)

(Byron had almost nothing to do with his daughter although his reputation seems to have dogged her. You can find out more about Lord Byron here.)

AI and visual culture at the VAG: the curators

As mentioned earlier, the VAG’s “The Imitation Game: Visual Culture in the Age of Artificial Intelligence” show runs from March 5, 2022 – October 23, 2022. Twice now, I have been to this weirdly exciting and frustrating show.

Bruce Grenville, VAG Chief/Senior Curator, seems to specialize in pulling together diverse materials to illustrate ‘big’ topics. His profile for Emily Carr University of Art + Design (where Grenville teaches) mentions these shows,

… He has organized many thematic group exhibitions including, MashUp: The Birth of Modern Culture [emphasis mine], a massive survey documenting the emergence of a mode of creativity that materialized in the late 1800s and has grown to become the dominant model of cultural production in the 21st century; KRAZY! The Delirious World [emphasis mine] of Anime + Manga + Video Games + Art, a timely and important survey of modern and contemporary visual culture from around the world; Home and Away: Crossing Cultures on the Pacific Rim [emphasis mine], a look at the work of six artists from Vancouver, Beijing, Ho Chi Minh City, Seoul and Los Angeles, who share a history of emigration and diaspora. …

Glenn Entis, Guest Curator and founding faculty member of Vancouver’s Centre for Digital Media (CDM) is Grenville’s co-curator, from Entis’ CDM profile,

“… an Academy Award-winning animation pioneer and games industry veteran. The former CEO of Dreamworks Interactive, Glenn worked with Steven Spielberg and Jeffrey Katzenberg on a number of video games …,”

Steve Newton in his March 4, 2022 preview does a good job of describing the show although I strongly disagree with the title of his article which proclaims “The Vancouver Art Gallery takes a deep dive into artificial intelligence with The Imitation Game.” I think it’s more of a shallow dive meant to cover more distance than depth,

… The exhibition kicks off with an interactive introduction inviting visitors to actively identify diverse areas of cultural production influenced by AI.

“That was actually one of the pieces that we produced in collaboration with the Centre for Digital Media,” Grenville notes, “so we worked with some graduate-student teams that had actually helped us to design that software. It was the beginning of COVID when we started to design this, so we actually wanted a no-touch interactive. So, really, the idea was to say, ‘Okay, this is the very entrance to the exhibition, and artificial intelligence, this is something I’ve heard about, but I’m not really sure how it’s utilized in ways. But maybe I know something about architecture; maybe I know something about video games; maybe I know something about the history of film.

“So you point to these 10 categories of visual culture [emphasis mine]–video games, architecture, fashion design, graphic design, industrial design, urban design–so you point to one of those, and you might point to ‘film’, and then when you point at it that opens up into five different examples of what’s in the show, so it could be 2001: A Space Odyssey, or Bladerunner, or World on a Wire.”

After the exhibition’s introduction—which Grenville equates to “opening the door to your curiosity” about artificial intelligence–visitors encounter one of its main categories, Objects of Wonder, which speaks to the history of AI and the critical advances the technology has made over the years.

“So there are 20 Objects of Wonder [emphasis mine],” Grenville says, “which go from 1949 to 2022, and they kind of plot out the history of artificial intelligence over that period of time, focusing on a specific object. Like [mathematician and philosopher] Norbert Wiener made this cybernetic creature, he called it a ‘Moth’, in 1949. So there’s a section that looks at this idea of kind of using animals–well, machine animals–and thinking about cybernetics, this idea of communication as feedback, early thinking around neuroscience and how neuroscience starts to imagine this idea of a thinking machine.

And there’s this from Newton’s March 4, 2022 preview,

“It’s interesting,” Grenville ponders, “artificial intelligence is virtually unregulated. [emphasis mine] You know, if you think about the regulatory bodies that govern TV or radio or all the types of telecommunications, there’s no equivalent for artificial intelligence, which really doesn’t make any sense. And so what happens is, sometimes with the best intentions [emphasis mine]—sometimes not with the best intentions—choices are made about how artificial intelligence develops. So one of the big ones is facial-recognition software [emphasis mine], and any body-detection software that’s being utilized.

In addition to it being the best overview of the show I’ve seen so far, this is the only one where you get a little insight into what the curators were thinking when they were developing it.

A deep dive into AI?

It was only while searching for a little information before the show that I realized I don’t have any definitions for artificial intelligence! What is AI? Sadly, there are no definitions of AI in the exhibit.

It seems even experts don’t have a good definition. Take a look at this,

The definition of AI is fluid [emphasis mine] and reflects a constantly shifting landscape marked by technological advancements and growing areas of application. Indeed, it has frequently been observed that once AI becomes capable of solving a particular problem or accomplishing a certain task, it is often no longer considered to be “real” intelligence [emphasis mine] (Haenlein & Kaplan, 2019). A firm definition was not applied for this report [emphasis mine], given the variety of implementations described above. However, for the purposes of deliberation, the Panel chose to interpret AI as a collection of statistical and software techniques, as well as the associated data and the social context in which they evolve — this allows for a broader and more inclusive interpretation of AI technologies and forms of agency. The Panel uses the term AI interchangeably to describe various implementations of machine-assisted design and discovery, including those based on machine learning, deep learning, and reinforcement learning, except for specific examples where the choice of implementation is salient. [p. 6 print version; p. 34 PDF version]

The above is from the Leaps and Boundaries report released May 10, 2022 by the Council of Canadian Academies’ Expert Panel on Artificial Intelligence for Science and Engineering.

Sometimes a show will take you in an unexpected direction. I feel a lot better ‘not knowing’. Still, I wish the curators had acknowledged somewhere in the show that artificial intelligence is a slippery concept. Especially when you add in robots and automatons. (more about them later)

21st century technology in a 19th/20th century building

Void stairs inside the building. Completed in 1906, the building was later designated as a National Historic Site in 1980

Just barely making it into the 20th century, the building where the Vancouver Art Gallery currently resides was for many years the provincial courthouse (1911 – 1978). In some ways, it’s a disconcerting setting for this show.

They’ve done their best to make the upstairs where the exhibit is displayed look like today’s galleries with their ‘white cube aesthetic’ and strong resemblance to the scientific laboratories seen in movies.

(For more about the dominance, since the 1930s, of the ‘white cube aesthetic’ in art galleries around the world, see my July 26, 2021 posting; scroll down about 50% of the way.)

It makes for an interesting tension, the contrast between the grand staircase, the cupola, and other architectural elements and the sterile, ‘laboratory’ environment of the modern art gallery.

20 Objects of Wonder and the flow of the show

It was flummoxing. Where are the 20 objects? Why does it feel like a maze in a laboratory? Loved the bees, but why? Eeeek Creepers! What is visual culture anyway? Where am I?

The objects of the show

It turns out that the curators have a more refined concept for ‘object’ than I do. There weren’t 20 material objects, there were 20 numbered ‘pods’ with perhaps a screen or a couple of screens or a screen and a material object or two illustrating the pod’s topic.

Looking up a definition for the word (accessed from a June 9, 2022 search) yielded this (the second one seems à propos),

object (ŏb′jĭkt, -jĕkt″)


1. Something perceptible by one or more of the senses, especially by vision or touch; a material thing.

2. A focus of attention, feeling, thought, or action.

3. A limiting factor that must be considered.

The American Heritage® Dictionary of the English Language, 5th Edition.

Each pod = a focus of attention.

The show’s flow is a maze. Am I a rat?

The pods are defined by a number and by temporary walls. So if you look up, you’ll see a number and a space partly enclosed by a temporary wall or two.

It’s a very choppy experience. For example, one minute you can be in pod 1 and, when you turn the corner, you’re in pod 4 or 5 or ? There are pods I’ve not seen, despite my two visits, because I kept losing my way. This led to an existential crisis on my second visit. “Had I missed the greater meaning of this show? Was there some sort of logic to how it was organized? Was there meaning to my life? Was I a rat being nudged around in a maze?” I didn’t know.

Thankfully, I have since recovered. But, I will return to my existential crisis later, with a special mention for “Creepers.”

The fascinating

My friend, you know I appreciated the history. In addition to Alan Turing, Ada Lovelace and the Mechanical Turk at the beginning of the show, they included a reference to Ovid (or Pūblius Ovidius Nāsō), a Roman poet who lived from 43 BCE – 17/18 CE, in one of the double-digit (17? or 10? or …) pods featuring a robot on screen. As to why Ovid might be included, this excerpt from a February 12, 2018 posting on the website provides a clue (Note: Links have been removed),

The University of King’s College [Halifax, Nova Scotia] presents Automatons! From Ovid to AI, a nine-lecture series examining the history, issues and relationships between humans, robots, and artificial intelligence [emphasis mine]. The series runs from January 10 to April 4 [2018], and features leading scholars, performers and critics from Canada, the US and Britain.

“Drawing from theatre, literature, art, science and philosophy, our 2018 King’s College Lecture Series features leading international authorities exploring our intimate relationships with machines,” says Dr. Gordon McOuat, professor in the King’s History of Science and Technology (HOST) and Contemporary Studies Programs.

“From the myths of Ovid [emphasis mine] and the automatons [emphasis mine] of the early modern period to the rise of robots, cyborgs, AI and artificial living things in the modern world, the 2018 King’s College Lecture Series examines the historical, cultural, scientific and philosophical place of automatons in our lives—and our future,” adds McOuat.

I loved the way the curators managed to integrate the historical roots for artificial intelligence and, by extension, the world of automatons, robots, cyborgs, and androids. Yes, starting the show with Alan Turing and Ada Lovelace could be expected but Norbert Wiener’s Moth (1949) acts as a sort of preview for Sougwen Chung’s “Omnia per Omnia, 2018” (GIF seen at the beginning of this post). Take a look for yourself (from the September 19, 2009 posting by cyberne1). Do you see the similarity or am I the only one?



This is the first time I’ve come across an AI/sculpture project. The VAG show features Scott Eaton’s sculptures on screens in a room devoted to his work.

Scott Eaton: Entangled II, 2019 4k video (still) Courtesy of the Artist

This looks like an image of a piece of ginger root, and it’s fascinating to watch the process as the AI agent ‘evolves’ Eaton’s drawings into onscreen sculptures. It would have enhanced the experience if at least one of Eaton’s ‘evolved’ and physically realized sculptures had been present in the room, but perhaps there were financial and/or logistical reasons for the absence.

Both Chung and Eaton are collaborating with an AI agent. In Chung’s case the AI is integrated into the paintbots with which she interacts and paints alongside and in Eaton’s case, it’s via a computer screen. In both cases, the work is mildly hypnotizing in a way that reminds me of lava lamps.

One last note about Chung and her work. She was one of the artists invited to present new work at an invite-only April 22, 2022 Embodied Futures workshop at the “What will life become?” event held by the Berggruen Institute and the University of Southern California (USC),

Embodied Futures invites participants to imagine novel forms of life, mind, and being through artistic and intellectual provocations on April 22 [2022].

Beginning at 1 p.m., together we will experience the launch of five artworks commissioned by the Berggruen Institute. We asked these artists: How does your work inflect how we think about “the human” in relation to alternative “embodiments” such as machines, AIs, plants, animals, the planet, and possible alien life forms in the cosmos? [emphases mine]  Later in the afternoon, we will take provocations generated by the morning’s panels and the art premieres in small breakout groups that will sketch futures worlds, and lively entities that might dwell there, in 2049.

This leads to (and my friend, while I too am taking a shallow dive, for this bit I’m going a little deeper):

Bees and architecture

Neri Oxman’s contribution (Golden Bee Cube, Synthetic Apiary II [2020]) is an exhibit of three honeycomb structures and a video of the bees in her synthetic apiary.

Neri Oxman and the MIT Mediated Matter Group, Golden Bee Cube, Synthetic Apiary II, 2020, beeswax, acrylic, gold particles, gold powder Courtesy of Neri Oxman and the MIT Mediated Matter Group

Neri Oxman (then a faculty member of the Mediated Matter Group at the Massachusetts Institute of Technology) described the basis for the first and all other iterations of her synthetic apiary in Patrick Lynch’s October 5, 2016 article for ‘ArchDaily; Broadcasting Architecture Worldwide’ (Note: Links have been removed),

Designer and architect Neri Oxman and the Mediated Matter group have announced their latest design project: the Synthetic Apiary. Aimed at combating the massive bee colony losses that have occurred in recent years, the Synthetic Apiary explores the possibility of constructing controlled, indoor environments that would allow honeybee populations to thrive year-round.

“It is time that the inclusion of apiaries—natural or synthetic—for this “keystone species” be considered a basic requirement of any sustainability program,” says Oxman.

In developing the Synthetic Apiary, Mediated Matter studied the habits and needs of honeybees, determining the precise amounts of light, humidity and temperature required to simulate a perpetual spring environment. [emphasis mine] They then engineered an undisturbed space where bees are provided with synthetic pollen and sugared water and could be evaluated regularly for health.

In the initial experiment, the honeybees’ natural cycle proved to adapt to the new environment, as the Queen was able to successfully lay eggs in the apiary. The bees showed the ability to function normally in the environment, suggesting that natural cultivation in artificial spaces may be possible across scales, “from organism- to building-scale.”

“At the core of this project is the creation of an entirely synthetic environment enabling controlled, large-scale investigations of hives,” explain the designers.

Mediated Matter chose to research into honeybees not just because of their recent loss of habitat, but also because of their ability to work together to create their own architecture, [emphasis mine] a topic the group has explored in their ongoing research on biologically augmented digital fabrication, including employing silkworms to create objects and environments at product, architectural, and possibly urban, scales.

“The Synthetic Apiary bridges the organism- and building-scale by exploring a “keystone species”: bees. Many insect communities present collective behavior known as “swarming,” prioritizing group over individual survival, while constantly working to achieve common goals. Often, groups of these eusocial organisms leverage collaborative behavior for relatively large-scale construction. For example, ants create extremely complex networks by tunneling, wasps generate intricate paper nests with materials sourced from local areas, and bees deposit wax to build intricate hive structures.”

This January 19, 2022 article by Crown Honey for its eponymous blog updates Oxman’s work (Note 1: All emphases are mine; Note 2: A link has been removed),

Synthetic Apiary II investigates co-fabrication between humans and honey bees through the use of designed environments in which Apis mellifera colonies construct comb. These designed environments serve as a means by which to convey information to the colony. The comb that the bees construct within these environments comprises their response to the input information, enabling a form of communication through which we can begin to understand the hive’s collective actions from their perspective.

Some environments are embedded with chemical cues created through a novel pheromone 3D-printing process, while others generate magnetic fields of varying strength and direction. Others still contain geometries of varying complexity or designs that alter their form over time.

When offered wax augmented with synthetic biomarkers, bees appear to readily incorporate it into their construction process, likely due to the high energy cost of producing fresh wax. This suggests that comb construction is a responsive and dynamic process involving complex adaptations to perturbations from environmental stimuli, not merely a set of predefined behaviors building toward specific constructed forms. Each environment therefore acts as a signal that can be sent to the colony to initiate a process of co-fabrication.

Characterization of constructed comb morphology generally involves visual observation and physical measurements of structural features—methods which are limited in scale of analysis and blind to internal architecture. In contrast, the wax structures built by the colonies in Synthetic Apiary II are analyzed through high-throughput X-ray computed tomography (CT) scans that enable a more holistic digital reconstruction of the hive’s structure.

Geometric analysis of these forms provides information about the hive’s design process, preferences, and limitations when tied to the inputs, and thereby yields insights into the invisible mediations between bees and their environment.

Developing computational tools to learn from bees can facilitate the very beginnings of a dialogue with them. Refined by evolution over hundreds of thousands of years, their comb-building behaviors and social organizations may reveal new forms and methods of formation that can be applied across our human endeavors in architecture, design, engineering, and culture.

Further, with a basic understanding and language established, methods of co-fabrication together with bees may be developed, enabling the use of new biocompatible materials and the creation of more efficient structural geometries that modern technology alone cannot achieve.

In this way, we also move our built environment toward a more synergistic embodiment, able to be more seamlessly integrated into natural environments through material and form, even providing habitats of benefit to both humans and nonhumans. It is essential to our mutual survival for us to not only protect but moreover to empower these critical pollinators – whose intrinsic behaviors and ecosystems we have altered through our industrial processes and practices of human-centric design – to thrive without human intervention once again.

In order to design our way out of the environmental crisis that we ourselves created, we must first learn to speak nature’s language. …

The three (natural, gold nanoparticle, and silver nanoparticle) honeycombs in the exhibit are among the few physical objects in the show (the others being the historical documents and the paintbots with their canvasses), and they’re almost a relief after the parade of screens. It’s the accompanying video that’s eerie. Everything is in white, as befits a science laboratory, in this synthetic apiary where bees are fed sugar water and fooled into a spring that is eternal.

Courtesy: Massachusetts Institute of Technology Copyright: Mediated Matter

(You may want to check out Lynch’s October 5, 2016 article or Crown Honey’s January 19, 2022 article as both have embedded images and the Lynch article includes a Synthetic Apiary video. The image above is a still from the video.)

As I asked a friend, where are the flowers? Ron Miksha, a bee ecologist working at the University of Calgary, details some of the problems with Oxman’s Synthetic Apiary this way in his October 7, 2016 posting on his Bad Beekeeping Blog,

In a practical sense, the synthetic apiary fails on many fronts: Bees will survive a few months on concoctions of sugar syrup and substitute pollen, but they need a natural variety of amino acids and minerals to actually thrive. They need propolis and floral pollen. They need a ceiling 100 metres high and a 2-kilometre hallway if drone and queen will mate, or they’ll die after the old queen dies. They need an artificial sun that travels across the sky, otherwise, the bees will be attracted to artificial lights and won’t return to their hive. They need flowery meadows, fresh water, open skies. [emphasis mine] They need a better holodeck.

Dorothy Woodend’s March 10, 2022 review of the VAG show for The Tyee poses other issues with the bees and the honeycombs,

When AI messes about with other species, there is something even more unsettling about the process. American-Israeli artist Neri Oxman’s Golden Bee Cube, Synthetic Apiary II, 2020 uses real bees who are proffered silver and gold [nanoparticles] to create their comb structures. While the resulting hives are indeed beautiful, rendered in shades of burnished metal, there is a quality of unease imbued in them. Is the piece akin to apiary torture chambers? I wonder how the bees feel about this collaboration and whether they’d like to renegotiate the deal.

There’s no question the honeycombs are fascinating and disturbing but I don’t understand how artificial intelligence was a key factor in either version of Oxman’s synthetic apiary. In the 2022 article by Crown Honey, there’s this: “Developing computational tools to learn from bees can facilitate the very beginnings of a dialogue with them [honeybees].” It’s probable that the computational tools being referenced include AI, and the Crown Honey article seems to suggest those computational tools are being used to analyze the bees’ behaviour after the fact.

Yes, I can imagine a future where ‘strong’ AI (such as you, my friend) is in ‘dialogue’ with the bees and making suggestions and running the experiments but it’s not clear that this is the case currently. The Oxman exhibit contribution would seem to be about the future and its possibilities whereas many of the other ‘objects’ concern the past and/or the present.

Friend, let’s take a break, shall we? Part 2 is coming up.

Spiky materials that can pop bacteria?

Bacteria interacting with four different topographies Courtesy: Imperial College London

A February 9, 2022 news item on describes some bioinspired research that could help cut down on the use of disinfectants,

Researchers have created intricately patterned materials that mimic antimicrobial, adhesive and drag reducing properties found in natural surfaces.

The team from Imperial College London found inspiration in the wavy and spiky surfaces found in insects, including on cicada and dragonfly wings, which ward off bacteria.

They hope the new materials could be used to create self-disinfecting surfaces and offer an alternative to chemically functionalized surfaces and cleaners, which can promote the growth of antibiotic-resistant bacteria.

A February 9, 2022 Imperial College London (ICL) press release by Caroline Brogan, which originated the news item, describes the work in more technical detail,

The tiny waves, which overlap at defined angles to create spikes and ripples, could also help to reduce drag on marine transport by mimicking shark skin, and to enhance the vibrancy of color without needing pigment, by mimicking insects.

Senior author Professor Joao Cabral, of Imperial’s Department of Chemical Engineering, said, “It’s inspiring to see in miniscule detail how the wings and skins of animals help them master their environments. Animals evolved wavy surfaces to kill bacteria, enhance color, and reduce drag while moving through water. We’re borrowing these natural tricks for the very same purposes, using a trick reminiscent of a Fourier wave superposition.”

Spiky structures

Researchers created the new materials by stretching and compressing a thin, soft, sustainable plastic resembling clingfilm to create three-dimensional nano- and microscale wavy patterns, compatible with sustainable and biodegradable polymers. 

The spiky structure was inspired by the way insects and fish have evolved to interact with their environments. The corrugated ripple effect is seen in the wings of cicadas and dragonflies, whose surfaces are made of tiny spikes which pop bacterial cells to keep the insects clean.  

The structure could also be applied to ships to reduce drag and boost efficiency – an application inspired by shark skin, which contains nanoscale horizontal ridges to reduce friction and drag.

Another application is in producing vibrant colours like those seen in the wings of morpho blue butterflies, whose cells are arranged to reflect and bend light into a brilliant blue without using pigment. Known as structural colour, other examples include the blue in peacock feathers, the shells of iridescent beetles, and blue human eyes.

Scaling up waves

To conduct the research, which is published in Physical Review Letters, the researchers studied specimens of cicadas and dragonflies from the Natural History Museum, and sedimentary deposits and rock formations documented by Trinity College Dublin.

They discovered that they could recreate these naturally occurring surface waves by stretching and then relaxing thin polymer skins in precise directions at the nanoscale.

While complex patterns can be fabricated by lithography and other methods, for instance in silicon microchip production, these are generally prohibitively expensive to use over large areas. This new technique, on the other hand, is ready to be scaled up relatively inexpensively if confirmed to be effective and robust. 

Potential applications include self-disinfecting surfaces in hospitals, schools, public transport, and food manufacturing. They could even help keep medical implants clean, which is important as these can host networks of bacterial matter known as biofilms that are notoriously difficult to kill. 

Naturally occurring wave patterns are also seen in the wrinkling of the human brain and fingertips as well as the ripples in sand beds. First author Dr Luca Pellegrino from the Department of Chemical Engineering, said: “The idea is compelling because it is simple: by mimicking the surface waves found in nature, we can create a palette of patterns with important applications. Through this work we can also learn more about the possible origins of these natural forms – a field called morphogenesis.” 

The next focus for the team is to test the effectiveness and robustness of the material in real-world settings, like on bus surfaces. The researchers hope it can contribute to solutions for surface cleanliness that are not reliant on chemical cleaners. To this end, they have been awarded a €5.4 million EU HORIZON grant with collaborators ranging from geneticists at KU Leuven to a bus manufacturer to develop sustainable and robust antimicrobial surfaces for high-traffic contexts.
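Cabral’s remark about “a trick reminiscent of a Fourier wave superposition” can be illustrated with a toy calculation: two sinusoidal wrinkle patterns, imposed sequentially at different orientations, add together to give a lattice of peaks where the crests coincide, i.e. a spiky surface. This sketch is purely illustrative; the grid, wavelength, and angles are my own choices, not values from the paper.

```python
import numpy as np

def wrinkle_field(x, y, wavelength, angle_rad, amplitude=1.0):
    """Height of a 1-D sinusoidal wrinkle pattern oriented at angle_rad."""
    k = 2 * np.pi / wavelength
    return amplitude * np.cos(k * (x * np.cos(angle_rad) + y * np.sin(angle_rad)))

x, y = np.meshgrid(np.linspace(0, 4, 200), np.linspace(0, 4, 200))

# Superpose two wrinkle generations at 90 degrees, mimicking
# sequential stretch-and-relax steps in orthogonal directions:
surface = wrinkle_field(x, y, 1.0, 0.0) + wrinkle_field(x, y, 1.0, np.pi / 2)

# Where both waves crest, the heights add into a lattice of 'spikes'.
print(surface.max())  # prints 2.0 (both crests coincide at the origin)
```

Changing the angle between the two generations away from 90 degrees, or using different wavelengths, yields the other ripple motifs the press release describes.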

Here’s a link (the press release also has a link) to and a citation for the paper,

Ripple Patterns Spontaneously Emerge through Sequential Wrinkling Interference in Polymer Bilayers by Luca Pellegrino, Annabelle Tan, and João T. Cabral. Phys. Rev. Lett. 128, 058001 (Vol. 128, Issue 5; published online 2 February 2022)

This paper is behind a paywall.

This work reminds me of Sharklet, a company that was going to produce materials that mimicked the structure of sharkskin. Apparently, sharks have nanostructures on their skin which prevent bacteria and more from finding a home there.

Can you make my nose more like a camel’s?

Camel Face Close Up

I love that image which I found on Alexey Sergeev’s Camel Close Up webpage on his eponymous website. It turns out the photographer is in the Department of Mathematics at Texas A&M University. Thank you Mr. Sergeev.

A January 19, 2022 news item on Nanowerk describes research inspired by a camel’s nose, Note: A link has been removed,

Camels have a renowned ability to survive on little water. They are also adept at finding something to drink in the vast desert, using noses that are exquisite moisture detectors.

In a new study in ACS [American Chemical Society] Nano (“A Camel Nose-Inspired Highly Durable Neuromorphic Humidity Sensor with Water Source Locating Capability”), researchers describe a humidity sensor inspired by the structure and properties of camels’ noses. In experiments, they found this device could reliably detect variations in humidity in settings that included industrial exhaust and the air surrounding human skin.

A January 19, 2022 ACS news release (also on EurekAlert), which originated the news item, describes the work in more detail,

Humans sometimes need to determine the presence of moisture in the air, but people aren’t quite as skilled as camels at sensing water with their noses. Instead, people must use devices to locate water in arid environments, or to identify leaks or analyze exhaust in industrial facilities. However, currently available sensors all have significant drawbacks. Some devices may be durable, for example, but have a low sensitivity to the presence of water. Meanwhile, sunlight can interfere with some highly sensitive detectors, making them difficult to use outdoors, for example. To devise a durable, intelligent sensor that can detect even low levels of airborne water molecules, Weiguo Huang, Jian Song, and their colleagues looked to camels’ noses. 

Narrow, scroll-like passages within a camel’s nose create a large surface area, which is lined with water-absorbing mucus. To mimic the high-surface-area structure within the nose, the team created a porous polymer network. On it, they placed moisture-attracting molecules called zwitterions to mimic mucus, so that the device’s capacitance changes as humidity varies. In experiments, the device was durable and could monitor fluctuations in humidity in hot industrial exhaust, find the location of a water source and sense moisture emanating from the human body. Not only did the sensor respond to changes in a person’s skin perspiration as they exercised, it also detected the presence of a human finger and could even follow its path in a V or L shape. This sensitivity suggests that the device could become the basis for a touchless interface through which someone could communicate with a computer, according to the researchers. What’s more, the sensor’s electrical response to moisture can be tuned or adjusted, much like the signals sent out by human neurons — potentially allowing it to learn via artificial intelligence, they say.
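For readers curious about how a capacitance change gets turned into a humidity reading, here is a minimal sketch. Everything in it is hypothetical (the baseline, the sensitivity, and the assumption of a linear response); the paper’s actual device is not modelled. The idea is simply that the capacitance rises as the zwitterion-coated polymer absorbs water, so relative humidity can be estimated from the shift above a dry-air baseline.

```python
# Hypothetical numbers throughout: a 100 pF dry baseline and a linear
# sensitivity of 2.5 pF per %RH are illustrative assumptions, not values
# from the paper.
def capacitance_to_rh(c_measured_pf, c_dry_pf=100.0, sens_pf_per_rh=2.5):
    """Estimate relative humidity (%) from a capacitance reading,
    assuming a linear response above the dry-air baseline."""
    rh = (c_measured_pf - c_dry_pf) / sens_pf_per_rh
    return max(0.0, min(100.0, rh))   # clamp to the physical 0-100 %RH range

print(capacitance_to_rh(200.0))  # a 100 pF shift reads as 40.0 %RH
```

Real capacitive humidity sensors are rarely perfectly linear, so a production version would replace the straight line with a calibration curve.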

The authors acknowledge funding from the Fujian Science and Technology Innovation Laboratory for Optoelectronic Information of China, Fujian Institute of Research on the Structure of Matter, Chinese Academy of Sciences, the Natural Science Foundation of Fujian Province, and the National Natural Science Foundation of China.

Here’s a link to and a citation for the paper,

A Camel Nose-Inspired Highly Durable Neuromorphic Humidity Sensor with Water Source Locating Capability by Caicong Li, Jie Liu, Hailong Peng, Yuan Sui, Jian Song, Yang Liu, Wei Huang, Xiaowei Chen, Jinghui Shen, Yao Ling, Chongyu Huang, Youwei Hong, and Weiguo Huang. ACS Nano 2022, 16, 1, 1511–1522. Publication date: December 15, 2021. Copyright © 2021 American Chemical Society

This paper is behind a paywall.

Xenobots (living robots) that can reproduce

Xenobots (living robots made from African clawed frog (Xenopus laevis) cells) can now self-replicate. First mentioned here in a June 21, 2021 posting, xenobots have captured the imagination of various media outlets including the Canadian Broadcasting Corporation’s (CBC) Quirks and Quarks radio programme and blog where Amanda Buckiewicz posted a December 3, 2021 article about the latest xenobot development (Note: Links have been removed),

In a new study, Bongard [Joshua Bongard, a computer scientist at the University of Vermont] and his colleagues from Tufts University and Harvard’s Wyss Institute for Biologically Inspired Engineering found that the xenobots would autonomously collect loose single cells in their environment, gathering hundreds of cells together until new xenobots had formed.

“This took a little bit for us to wrap our minds around,” he said. “There’s no programming here. Instead, we’re designing or shaping these xenobots, and what they do, the way they behave, is based on shape.”

“We take a couple of thousand of those frog cells and we squish them together into a ball and put that in the bottom of a petri dish,” Bongard told Quirks & Quarks host Bob McDonald. 

“If you were to look into the dish, you would see some very small, what look like specks of pepper, moving about in the bottom of the petri dish.”

The xenobots initially received no instruction from humans on how to replicate. But when researchers added extra cells to the dish containing xenobots, they observed that the xenobots would assemble them into piles.

“Cells early in development are sticky,” said Bongard. “If the pile is large enough and the cells stick together, the outer ones on the surface will grow very small hairs, which are called cilia. And eventually, after four days, those cilia will start to beat back and forth like flexible oars, and the pile will start moving.”

“And that’s a child xenobot.” 

A November 29, 2021 Wyss Institute news release by Joshua Brown describes the process a little differently,

To persist, life must reproduce. Over billions of years, organisms have evolved many ways of replicating, from budding plants to sexual animals to invading viruses.

Now scientists at the University of Vermont, Tufts University, and the Wyss Institute for Biologically Inspired Engineering at Harvard University have discovered an entirely new form of biological reproduction—and applied their discovery to create the first-ever, self-replicating living robots.

The same team that built the first living robots (“Xenobots,” assembled from frog cells—reported in 2020) has discovered that these computer-designed and hand-assembled organisms can swim out into their tiny dish, find single cells, gather hundreds of them together, and assemble “baby” Xenobots inside their Pac-Man-shaped “mouth”—that, a few days later, become new Xenobots that look and move just like themselves.

And then these new Xenobots can go out, find cells, and build copies of themselves. Again and again.

In a Xenopus laevis frog, these embryonic cells would develop into skin. “They would be sitting on the outside of a tadpole, keeping out pathogens and redistributing mucus,” says Michael Levin, Ph.D., a professor of biology and director of the Allen Discovery Center at Tufts University and co-leader of the new research. “But we’re putting them into a novel context. We’re giving them a chance to reimagine their multicellularity.” Levin is also an Associate Faculty member at the Wyss Institute.

And what they imagine is something far different than skin. “People have thought for quite a long time that we’ve worked out all the ways that life can reproduce or replicate. But this is something that’s never been observed before,” says co-author Douglas Blackiston, Ph.D., the senior scientist at Tufts University and the Wyss Institute who assembled the Xenobot “parents” and developed the biological portion of the new study.

“This is profound,” says Levin. “These cells have the genome of a frog, but, freed from becoming tadpoles, they use their collective intelligence, a plasticity, to do something astounding.” In earlier experiments, the scientists were amazed that Xenobots could be designed to achieve simple tasks. Now they are stunned that these biological objects—a computer-designed collection of cells—will spontaneously replicate. “We have the full, unaltered frog genome,” says Levin, “but it gave no hint that these cells can work together on this new task,” of gathering and then compressing separated cells into working self-copies.

“These are frog cells replicating in a way that is very different from how frogs do it. No animal or plant known to science replicates in this way,” says Sam Kriegman, Ph.D., the lead author on the new study, who completed his Ph.D. in Bongard’s lab at UVM and is now a post-doctoral researcher at Tufts’ Allen Center and Harvard University’s Wyss Institute for Biologically Inspired Engineering.

Both Buckiewicz’s December 3, 2021 article and Brown’s November 29, 2021 Wyss Institute news release are good reads with liberal use of embedded images. If you have time, start with Buckiewicz as she provides a good introduction and follow up with Brown who gives more detail and has an embedded video of a December 1, 2021 panel discussion with the scientists behind the xenobots.

Here’s a link to and a citation for the latest paper,

Kinematic self-replication in reconfigurable organisms by Sam Kriegman, Douglas Blackiston, Michael Levin, and Josh Bongard. PNAS [Proceedings of the National Academy of Sciences] December 7, 2021 118 (49) e2112672118.

This paper appears to be open access.

Organic neuromorphic electronics

A December 13, 2021 news item on ScienceDaily describes some research from Germany’s Max Planck Institute for Polymer Research,

The human brain works differently from a computer – while the brain works with biological cells and electrical impulses, a computer uses silicon-based transistors. Scientists have equipped a toy robot with a smart and adaptive electrical circuit made of soft organic materials, similar to biological matter. With this bio-inspired approach, they were able to teach the robot to navigate independently through a maze using visual signs for guidance.

A December 13, 2021 Max Planck Institute for Polymer Research press release (also on EurekAlert), which originated the news item, fills in a few details,

The processor is the brain of a computer – an often-quoted phrase. But processors work fundamentally differently than the human brain. Transistors perform logic operations by means of electronic signals. In contrast, the brain works with nerve cells, so-called neurons, which are connected via biological conductive paths, so-called synapses. At a higher level, this signaling is used by the brain to control the body and perceive the surrounding environment. The reaction of the body/brain system when certain stimuli are perceived – for example, via the eyes, ears or sense of touch – is triggered through a learning process. For example, children learn not to reach twice for a hot stove: one input stimulus leads to a learning process with a clear behavioral outcome.

Scientists working with Paschalis Gkoupidenis, group leader in Paul Blom’s department at the Max Planck Institute for Polymer Research, have now applied this basic principle of learning through experience in a simplified form and steered a robot through a maze using a so-called organic neuromorphic circuit. The work was an extensive collaboration between the Universities of Eindhoven [Eindhoven University of Technology; Netherlands], Stanford [University; California, US], Brescia [University; Italy], Oxford [UK] and KAUST [King Abdullah University of Science and Technology, Saudi Arabia].

“We wanted to use this simple setup to show how powerful such ‘organic neuromorphic devices’ can be in real-world conditions,” says Imke Krauhausen, a doctoral student in Gkoupidenis’ group and at TU Eindhoven (van de Burgt group), and first author of the scientific paper.

To achieve the navigation of the robot inside the maze, the researchers fed the smart adaptive circuit with sensory signals coming from the environment. The path through the maze towards the exit is indicated visually at each intersection. Initially, the robot often misinterprets the visual signs, makes the wrong “turning” decisions at the intersections and loses its way. When the robot makes these wrong decisions and follows dead-end paths, it is discouraged from repeating them by corrective stimuli. The corrective stimuli, for example when the robot hits a wall, are applied directly to the organic circuit via electrical signals induced by a touch sensor attached to the robot. With each subsequent execution of the experiment, the robot gradually learns to make the right “turning” decisions at the intersections, i.e. to avoid receiving corrective stimuli, and after a few trials it finds its way out of the maze. This learning process happens exclusively on the organic adaptive circuit.
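The learning loop described above can be sketched in software. To be clear, this is a toy emulation, not the organic circuit itself: the maze layout, the weight values and the punishment size are all invented for illustration. Each intersection gets one adaptive weight per possible turn, and a corrective stimulus (hitting a wall) lowers the weight of the wrong turn until the right one dominates.

```python
# Hypothetical three-intersection maze: the "correct" turn at each
# intersection stands in for the visual sign the real robot reads.
CORRECT = {0: "left", 1: "right", 2: "left"}

def run_trials(correct, punishment=0.5, trials=10):
    # Every (intersection, turn) pair starts equally favoured,
    # mimicking an untrained adaptive circuit.
    w = {(i, d): 1.0 for i in correct for d in ("left", "right")}
    history = []
    for _ in range(trials):
        wrong_turns = 0
        for i, good in correct.items():
            # Take the currently strongest turn at this intersection.
            choice = max(("left", "right"), key=lambda d: w[(i, d)])
            if choice != good:
                # Corrective stimulus: the wall hit weakens this pathway,
                # analogous to the electrical punishment signal.
                w[(i, choice)] -= punishment
                wrong_turns += 1
        history.append(wrong_turns)
    return history

errors = run_trials(CORRECT)
print(errors)  # wrong turns per run shrink to zero as the weights adapt
```

The essential point the sketch shares with the paper is that all the "memory" lives in the adaptive weights (there, the conductance states of the organic devices), not in any explicit program of the maze.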

“We were really glad to see that the robot can pass through the maze after some runs by learning on a simple organic circuit. We have shown here a first, very simple setup. In the distant future, however, we hope that organic neuromorphic devices could also be used for local and distributed computing/learning. This will open up entirely new possibilities for applications in real-world robotics, human-machine interfaces and point-of-care diagnostics. Novel platforms for rapid prototyping and education, at the intersection of materials science and robotics, are also expected to emerge,” Gkoupidenis says.

Here’s a link to and a citation for the paper,

Organic neuromorphic electronics for sensorimotor integration and learning in robotics by Imke Krauhausen, Dimitrios A. Koutsouras, Armantas Melianas, Scott T. Keene, Katharina Lieberth, Hadrien Ledanseur, Rajendar Sheelamanthula, Alexander Giovannitti, Fabrizio Torricelli, Iain Mcculloch, Paul W. M. Blom, Alberto Salleo, Yoeri van de Burgt and Paschalis Gkoupidenis. Science Advances • 10 Dec 2021 • Vol 7, Issue 50 • DOI: 10.1126/sciadv.abl5068

This paper is open access.

Neuromorphic (brainlike) computing inspired by sea slugs

The sea slug has taught neuroscientists the intelligence features that any creature in the animal kingdom needs to survive. Now, the sea slug is teaching artificial intelligence how to use those strategies. Pictured: Aplysia californica. (Image by NOAA Monterey Bay National Marine Sanctuary/Chad King.)

I don’t think I’ve ever seen a picture of a sea slug before. Its appearance reminds me of its terrestrial cousin.

As for some of the latest news on brainlike computing, a December 7, 2021 news item on Nanowerk makes an announcement from the Argonne National Laboratory (a US Department of Energy laboratory; Note: Links have been removed),

A team of scientists has discovered a new material that points the way toward more efficient artificial intelligence hardware for everything from self-driving cars to surgical robots.

For artificial intelligence (AI) to get any smarter, it needs first to be as intelligent as one of the simplest creatures in the animal kingdom: the sea slug.

A new study has found that a material can mimic the sea slug’s most essential intelligence features. The discovery is a step toward building hardware that could help make AI more efficient and reliable for technology ranging from self-driving cars and surgical robots to social media algorithms.

The study, published in the Proceedings of the National Academy of Sciences [PNAS] (“Neuromorphic learning with Mott insulator NiO”), was conducted by a team of researchers from Purdue University, Rutgers University, the University of Georgia and the U.S. Department of Energy’s (DOE) Argonne National Laboratory. The team used the resources of the Advanced Photon Source (APS), a DOE Office of Science user facility at Argonne.

A December 6, 2021 Argonne National Laboratory news release (also on EurekAlert) by Kayla Wiles and Andre Salles, which originated the news item, provides more detail,

“Through studying sea slugs, neuroscientists discovered the hallmarks of intelligence that are fundamental to any organism’s survival,” said Shriram Ramanathan, a Purdue professor of Materials Engineering. ​“We want to take advantage of that mature intelligence in animals to accelerate the development of AI.”

Two main signs of intelligence that neuroscientists have learned from sea slugs are habituation and sensitization. Habituation is getting used to a stimulus over time, such as tuning out noises when driving the same route to work every day. Sensitization is the opposite — it’s reacting strongly to a new stimulus, like avoiding bad food from a restaurant.

AI has a really hard time learning and storing new information without overwriting information it has already learned and stored, a problem that researchers studying brain-inspired computing call the ​“stability-plasticity dilemma.” Habituation would allow AI to ​“forget” unneeded information (achieving more stability) while sensitization could help with retaining new and important information (enabling plasticity).

In this study, the researchers found a way to demonstrate both habituation and sensitization in nickel oxide, a quantum material. Quantum materials are engineered to take advantage of features available only at nature’s smallest scales, and useful for information processing. If a quantum material could reliably mimic these forms of learning, then it may be possible to build AI directly into hardware. And if AI could operate both through hardware and software, it might be able to perform more complex tasks using less energy.

“We basically emulated experiments done on sea slugs in quantum materials toward understanding how these materials can be of interest for AI,” Ramanathan said.

Neuroscience studies have shown that the sea slug demonstrates habituation when it stops withdrawing its gill as much in response to tapping. But an electric shock to its tail causes its gill to withdraw much more dramatically, showing sensitization.

For nickel oxide, the equivalent of a ​“gill withdrawal” is an increased change in electrical resistance. The researchers found that repeatedly exposing the material to hydrogen gas causes nickel oxide’s change in electrical resistance to decrease over time, but introducing a new stimulus like ozone greatly increases the change in electrical resistance.
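The two behaviours just described can be captured in a toy numerical model. This is an assumed dynamic for illustration only, not the paper’s nickel oxide physics: the response to a repeated, familiar stimulus decays (habituation), while a novel stimulus both evokes a full response and boosts responsiveness across the board (sensitization).

```python
# decay and boost are made-up constants chosen only to show the shape
# of the behaviour; they do not come from the NiO measurements.
def respond(stimuli, decay=0.7, boost=1.5):
    gain = {}          # per-stimulus responsiveness, starts at 1.0
    responses = []
    for s in stimuli:
        if s not in gain:
            # Novel stimulus: sensitization raises every existing gain.
            gain = {k: v * boost for k, v in gain.items()}
            gain[s] = 1.0
        responses.append(gain[s])
        gain[s] *= decay   # habituation: this stimulus matters less next time
    return responses

# Repeated hydrogen exposures, then a novel ozone exposure,
# echoing the experiment's stimulus sequence.
r = respond(["H2", "H2", "H2", "O3", "H2"])
```

In the output, the response to hydrogen falls with each repetition, the novel ozone exposure produces a full-strength response, and the subsequent hydrogen response is larger than it was just before the ozone arrived, which is the qualitative pattern the researchers observed in the material’s resistance changes.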

Ramanathan and his colleagues used two experimental stations at the APS to test this theory, using X-ray absorption spectroscopy. A sample of nickel oxide was exposed to hydrogen and oxygen, and the ultrabright X-rays of the APS were used to see changes in the material at the atomic level over time.

“Nickel oxide is a relatively simple material,” said Argonne physicist Hua Zhou, a co-author on the paper who worked with the team at beamline 33-ID. ​“The goal was to use something easy to manufacture, and see if it would mimic this behavior. We looked at whether the material gained or lost a single electron after exposure to the gas.”

The research team also conducted scans at beamline 29-ID, which uses softer X-rays to probe different energy ranges. While the harder X-rays of 33-ID are more sensitive to the ​“core” electrons, those closer to the nucleus of the nickel oxide’s atoms, the softer X-rays can more readily observe the electrons on the outer shell. These are the electrons that define whether a material is conductive or resistive to electricity.

“We’re very sensitive to the change of resistivity in these samples,” said Argonne physicist Fanny Rodolakis, a co-author on the paper who led the work at beamline 29-ID. ​“We can directly probe how the electronic states of oxygen and nickel evolve under different treatments.”

Physicist Zhan Zhang and postdoctoral researcher Hui Cao, both of Argonne, contributed to the work, and are listed as co-authors on the paper. Zhang said the APS is well suited for research like this, due to its bright beam that can be tuned over different energy ranges.

For practical use of quantum materials as AI hardware, researchers will need to figure out how to apply habituation and sensitization in large-scale systems. They also would have to determine how a material could respond to stimuli while integrated into a computer chip.

This study is a starting place for guiding those next steps, the researchers said. Meanwhile, the APS is undergoing a massive upgrade that will not only increase the brightness of its beams by up to 500 times, but will allow for those beams to be focused much smaller than they are today. And this, Zhou said, will prove useful once this technology does find its way into electronic devices.

“If we want to test the properties of microelectronics,” he said, ​“the smaller beam that the upgraded APS will give us will be essential.”

In addition to the experiments performed at Purdue and Argonne, a team at Rutgers University performed detailed theory calculations to understand what was happening within nickel oxide at a microscopic level to mimic the sea slug’s intelligence features. The University of Georgia measured conductivity to further analyze the material’s behavior.

A version of this story was originally published by Purdue University.

About the Advanced Photon Source

The U. S. Department of Energy Office of Science’s Advanced Photon Source (APS) at Argonne National Laboratory is one of the world’s most productive X-ray light source facilities. The APS provides high-brightness X-ray beams to a diverse community of researchers in materials science, chemistry, condensed matter physics, the life and environmental sciences, and applied research. These X-rays are ideally suited for explorations of materials and biological structures; elemental distribution; chemical, magnetic, electronic states; and a wide range of technologically important engineering systems from batteries to fuel injector sprays, all of which are the foundations of our nation’s economic, technological, and physical well-being. Each year, more than 5,000 researchers use the APS to produce over 2,000 publications detailing impactful discoveries, and solve more vital biological protein structures than users of any other X-ray light source research facility. APS scientists and engineers innovate technology that is at the heart of advancing accelerator and light-source operations. This includes the insertion devices that produce extreme-brightness X-rays prized by researchers, lenses that focus the X-rays down to a few nanometers, instrumentation that maximizes the way the X-rays interact with samples being studied, and software that gathers and manages the massive quantity of data resulting from discovery research at the APS.

This research used resources of the Advanced Photon Source, a U.S. DOE Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory under Contract No. DE-AC02-06CH11357.

Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.

You can find the September 24, 2021 Purdue University story, Taking lessons from a sea slug, study points to better hardware for artificial intelligence here.

Here’s a link to and a citation for the paper,

Neuromorphic learning with Mott insulator NiO by Zhen Zhang, Sandip Mondal, Subhasish Mandal, Jason M. Allred, Neda Alsadat Aghamiri, Alireza Fali, Zhan Zhang, Hua Zhou, Hui Cao, Fanny Rodolakis, Jessica L. McChesney, Qi Wang, Yifei Sun, Yohannes Abate, Kaushik Roy, Karin M. Rabe, and Shriram Ramanathan. PNAS September 28, 2021 118 (39) e2017239118.

This paper is behind a paywall.