Tag Archives: Neri Oxman

Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (2 of 2): Meditations

Dear friend,

I thought it best to break this up a bit. There are a couple of ‘objects’ still to be discussed but this is mostly the commentary part of this letter to you. (Here’s a link for anyone who stumbled here but missed Part 1.)

Ethics, the natural world, social justice, eeek, and AI

Dorothy Woodend, in her March 10, 2022 review for The Tyee, suggests some ethical issues in her critique of the ‘bee/AI collaboration’ and she’s not the only one with concerns. UNESCO (United Nations Educational, Scientific and Cultural Organization) has produced global recommendations for ethical AI (see my March 18, 2022 posting). More recently, there’s “Racist and sexist robots have flawed AI,” a June 23, 2022 posting, where researchers prepared a conference presentation and paper about deeply flawed AI still being used in robots.

Ultimately, the focus is always on humans and Woodend has extended the ethical AI conversation to include insects and the natural world. In short, something less human-centric.

My friend, this reference to the de Young exhibit may seem off topic but I promise it isn’t, in more ways than one. The de Young Museum in San Francisco also held an AI and art show called “Uncanny Valley: Being Human in the Age of AI” (February 22, 2020 – June 27, 2021). From the exhibitions page,

In today’s AI-driven world, increasingly organized and shaped by algorithms that track, collect, and evaluate our data, the question of what it means to be human [emphasis mine] has shifted. Uncanny Valley is the first major exhibition to unpack this question through a lens of contemporary art and propose new ways of thinking about intelligence, nature, and artifice. [emphasis mine]

Courtesy: de Young Museum [downloaded from https://deyoung.famsf.org/exhibitions/uncanny-valley]

As you can see, it hinted (perhaps?) at an attempt to see beyond human-centric AI. (BTW, I featured this ‘Uncanny Valley’ show in my February 25, 2020 posting where I mentioned Stephanie Dinkins [featured below] and other artists.)

Social justice

While the VAG show doesn’t see much past humans and AI, it does touch on social justice. In particular there’s Pod 15 featuring the Algorithmic Justice League (AJL). The group “combine[s] art and research to illuminate the social implications and harms of AI” as per their website’s homepage.

In Pod 9, Stephanie Dinkins’ video work with a robot (Bina48), which was also part of the de Young Museum ‘Uncanny Valley’ show, addresses some of the same issues.

Still of Stephanie Dinkins, “Conversations with Bina48,” 2014–present. Courtesy of the artist [downloaded from https://deyoung.famsf.org/stephanie-dinkins-conversations-bina48-0]

From the de Young Museum’s Stephanie Dinkins “Conversations with Bina48” April 23, 2020 article by Janna Keegan (Dinkins submitted the same work you see at the VAG show); Note: Links have been removed,

Transdisciplinary artist and educator Stephanie Dinkins is concerned with fostering AI literacy. The central thesis of her social practice is that AI, the internet, and other data-based technologies disproportionately impact people of color, LGBTQ+ people, women, and disabled and economically disadvantaged communities—groups rarely given a voice in tech’s creation. Dinkins strives to forge a more equitable techno-future by generating AI that includes the voices of multiple constituencies …

The artist’s ongoing Conversations with Bina48 takes the form of a series of interactions with the social robot Bina48 (Breakthrough Intelligence via Neural Architecture, 48 exaflops per second). The machine is the brainchild of Martine Rothblatt, an entrepreneur in the field of biopharmaceuticals who, with her wife, Bina, cofounded the Terasem Movement, an organization that seeks to extend human life through cybernetic means. In 2007 Martine commissioned Hanson Robotics to create a robot whose appearance and consciousness simulate Bina’s. The robot was released in 2010, and Dinkins began her work with it in 2014.

Part psychoanalytical discourse, part Turing test, Conversations with Bina48 also participates in a larger dialogue regarding bias and representation in technology. Although Bina Rothblatt is a Black woman, Bina48 was not programmed with an understanding of its Black female identity or with knowledge of Black history. Dinkins’s work situates this omission amid the larger tech industry’s lack of diversity, drawing attention to the problems that arise when a roughly homogenous population creates technologies deployed globally. When this occurs, writes art critic Tess Thackara, “the unconscious biases of white developers proliferate on the internet, mapping our social structures and behaviors onto code and repeating imbalances and injustices that exist in the real world.” One of the most appalling and public of these instances occurred when a Google Photos image-recognition algorithm mislabeled the faces of Black people as “gorillas.”

Eeek

As you go through the ‘imitation game’ you will find a pod with a screen showing your movements through the rooms in real time. The installation is called “Creepers” (2021-22). The student team from Vancouver’s Centre for Digital Media (CDM) describes their project this way, from the CDM’s AI-driven Installation Piece for the Vancouver Art Gallery webpage,

Project Description

Kaleidoscope [team name] is designing an installation piece that harnesses AI to collect and visualize exhibit visitor behaviours, and interactions with art, in an impactful and thought-provoking way.

There’s no warning that you’re being tracked, and you can see they’ve used facial recognition software to follow your movements through the show. The pod’s signage claims the data is deleted once you’ve left.

‘Creepers’ is an interesting approach to the ethics of AI. The name suggests that even the student designers were aware it was problematic.

For the curious, there’s a description of the other VAG ‘imitation game’ installations provided by CDM students on the ‘Master of Digital Media Students Develop Revolutionary Installations for Vancouver Art Gallery AI Exhibition‘ webpage.

In recovery from an existential crisis (meditations)

There’s something greatly ambitious about “The Imitation Game: Visual Culture in the Age of Artificial Intelligence” and walking up the VAG’s grand staircase affirms that ambition. Bravo to the two curators, Grenville and Entis, for an exhibition that presents a survey (or overview) of artificial intelligence and its use in and impact on creative visual culture.

I’ve already enthused over the history (specifically Turing, Lovelace, Ovid), admitted to being mesmerized by Scott Eaton’s sculpture/AI videos, and confessed to a fascination (and mild repulsion) regarding Oxman’s honeycombs.

It’s hard to remember all of the ‘objects’ as the curators have offered a jumble of work, almost all of them on screens. Already noted, there’s Norbert Wiener’s The Moth (1949) and there are also a number of other computer-based artworks from the 1960s and 1970s. Plus, you’ll find works utilizing a GAN (generative adversarial network), an AI agent that is explained in the exhibit.
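For anyone who leaves the exhibit still wondering what a GAN actually does: it pits two models against each other, a generator that fabricates samples and a discriminator that tries to tell fakes from real data. Here is a minimal one-dimensional sketch in plain Python, with invented numbers throughout; it shows only the shape of the adversarial training loop, not any artist’s actual system.

```python
import math
import random

random.seed(42)

def sigmoid(t):
    # Clamp to avoid math.exp overflow on extreme values.
    return 1.0 / (1.0 + math.exp(-max(-60.0, min(60.0, t))))

# "Real" data: samples from a Gaussian centred on 4.0 (an arbitrary choice).
def real_sample():
    return random.gauss(4.0, 0.5)

# Generator: x = mu + sigma * z, latent noise z ~ N(0, 1).
mu, sigma = 0.0, 1.0
# Discriminator: D(x) = sigmoid(w * x + b), probability that x is "real".
w, b = 0.1, 0.0

lr = 0.05
for _ in range(2000):
    # Discriminator step: push D(real) up and D(fake) down.
    xr = real_sample()
    xf = mu + sigma * random.gauss(0.0, 1.0)
    sr = sigmoid(w * xr + b)
    sf = sigmoid(w * xf + b)
    # Gradients of -log D(real) - log(1 - D(fake)) w.r.t. w and b.
    gw = -(1 - sr) * xr + sf * xf
    gb = -(1 - sr) + sf
    w -= lr * gw
    b -= lr * gb

    # Generator step: push D(fake) up, chaining through x = mu + sigma * z.
    z = random.gauss(0.0, 1.0)
    sf = sigmoid(w * (mu + sigma * z) + b)
    gx = -(1 - sf) * w          # gradient of -log D(fake) w.r.t. x
    mu -= lr * gx
    sigma -= lr * gx * z

# After training, the generator's mean should drift toward the real mean (4.0).
```

The full-scale versions that produce images replace these two tiny functions with deep neural networks, but the tug-of-war is the same.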

It’s worth going more than once to the show as there is so much to experience.

Why did they do that?

Dear friend, I’ve already commented on the poor flow through the show. It’s hard to tell if the curators intended the experience to be disorienting, but it is, to the point of chaos, especially when the exhibition is crowded.

I’ve seen Grenville’s shows before. In particular there was “MashUp: The Birth of Modern Culture, a massive survey documenting the emergence of a mode of creativity that materialized in the late 1800s and has grown to become the dominant model of cultural production in the 21st century” and there was “KRAZY! The Delirious World of Anime + Manga + Video Games + Art.” As you can see from the description, he pulls together disparate works and ideas into a show for you to ‘make sense’ of them.

One of the differences between those shows and “The Imitation Game: …” is that most of us have some familiarity, whether we like it or not, with modern art/culture and anime/manga/etc. and can try to ‘make sense’ of it.

By contrast, artificial intelligence (which even experts have difficulty defining) occupies an entirely different set of categories, all of them associated with science/technology. This makes for a different kind of show, as the curators cannot rely on the audience’s understanding of basics. It’s effectively an art/sci or art/tech show and, I believe, the first of its kind at the Vancouver Art Gallery. Unfortunately, the curators don’t seem to have changed their approach to accommodate that difference.

AI is also at the centre of a current panic over job loss, loss of personal agency, automated racism and sexism, etc. which makes the experience of viewing the show a little tense. In this context, their decision to commission and use ‘Creepers’ seems odd.

Where were Ai-Da and Dall-E-2 and the others?

Oh friend, I was hoping for a robot. Those Roomba paintbots didn’t do much for me. All they did was lie there on the floor.

To be blunt, I wanted some fun and perhaps a bit of wonder and maybe a little vitality. I wasn’t necessarily expecting Ai-Da, an artistic robot, but something three-dimensional and fun in this very flat, screen-oriented show would have been nice.

Ai-Da was at the Glastonbury Festival in the UK from June 23-26, 2022. Here’s Ai-Da and her Billie Eilish (one of the Glastonbury 2022 headliners) portrait. [downloaded from https://www.ai-darobot.com/exhibition]

Ai-Da was first featured here in a December 17, 2021 posting about performing poetry that she had written in honour of the 700th anniversary of poet Dante Alighieri’s death.

Named in honour of Ada Lovelace, Ai-Da visited the 2022 Venice Biennale as Leah Henrickson and Simone Natale describe in their May 12, 2022 article for Fast Company (Note: Links have been removed),

Ai-Da sits behind a desk, paintbrush in hand. She looks up at the person posing for her, and then back down as she dabs another blob of paint onto the canvas. A lifelike portrait is taking shape. If you didn’t know a robot produced it, this portrait could pass as the work of a human artist.

Ai-Da is touted as the “first robot to paint like an artist,” and an exhibition of her work, called Leaping into the Metaverse, opened at the Venice Biennale.

Ai-Da produces portraits of sitting subjects using a robotic hand attached to her lifelike feminine figure. She’s also able to talk, giving detailed answers to questions about her artistic process and attitudes toward technology. She even gave a TEDx talk about “The Intersection of Art and AI” in Oxford a few years ago. While the words she speaks are programmed, Ai-Da’s creators have also been experimenting with having her write and perform her own poetry.

She has her own website.

If not Ai-Da, what about Dall-E-2? Aaron Hertzmann’s June 20, 2022 commentary, “Give this AI a few words of description and it produces a stunning image – but is it art?” investigates for Salon (Note: Links have been removed),

DALL-E 2 is a new neural network [AI] algorithm that creates a picture from a short phrase or sentence that you provide. The program, which was announced by the artificial intelligence research laboratory OpenAI in April 2022, hasn’t been released to the public. But a small and growing number of people – myself included – have been given access to experiment with it.

As a researcher studying the nexus of technology and art, I was keen to see how well the program worked. After hours of experimentation, it’s clear that DALL-E – while not without shortcomings – is leaps and bounds ahead of existing image generation technology. It raises immediate questions about how these technologies will change how art is made and consumed. It also raises questions about what it means to be creative when DALL-E 2 seems to automate so much of the creative process itself.

A July 4, 2022 article “DALL-E, Make Me Another Picasso, Please” by Laura Lane for The New Yorker has a rebuttal to Ada Lovelace’s contention that creativity is uniquely human (Note: A link has been removed),

“There was this belief that creativity is this deeply special, only-human thing,” Sam Altman, OpenAI’s C.E.O., explained the other day. Maybe not so true anymore, he said. Altman, who wore a gray sweater and had tousled brown hair, was videoconferencing from the company’s headquarters, in San Francisco. DALL-E is still in a testing phase. So far, OpenAI has granted access to a select group of people—researchers, artists, developers—who have used it to produce a wide array of images: photorealistic animals, bizarre mashups, punny collages. Asked by a user to generate “a plate of various alien fruits from another planet photograph,” DALL-E returned something kind of like rambutans. “The rest of mona lisa” is, according to DALL-E, mostly just one big cliff. Altman described DALL-E as “an extension of your own creativity.”

There are other AI artists; in my August 16, 2019 posting, I had this,

AI artists first hit my radar in August 2018 when Christie’s Auction House advertised an art auction of a ‘painting’ by an algorithm (artificial intelligence). There’s more in my August 31, 2018 posting but, briefly, a French art collective, Obvious, submitted a painting, “Portrait of Edmond de Belamy,” that was created by an artificial intelligence agent to be sold for an estimated $7,000 – $10,000. They weren’t even close. According to Ian Bogost’s March 6, 2019 article for The Atlantic, the painting sold for $432,500 in October 2018.

That posting also included AI artist, AICAN. Both artist-AI agents (Obvious and AICAN) are based on GANs (generative adversarial networks) for learning and eventual output. Both artist-AI agents work independently or with human collaborators on art works that are available for purchase.

As might be expected not everyone is excited about AI and visual art. Sonja Drimmer, Professor of Medieval Art, University of Massachusetts at Amherst, provides another perspective on AI, visual art, and, her specialty, art history in her November 1, 2021 essay for The Conversation (Note: Links have been removed),

Over the past year alone, I’ve come across articles highlighting how artificial intelligence recovered a “secret” painting of a “lost lover” of Italian painter Modigliani, “brought to life” a “hidden Picasso nude”, “resurrected” Austrian painter Gustav Klimt’s destroyed works and “restored” portions of Rembrandt’s 1642 painting “The Night Watch.” The list goes on.

As an art historian, I’ve become increasingly concerned about the coverage and circulation of these projects.

They have not, in actuality, revealed one secret or solved a single mystery.

What they have done is generate feel-good stories about AI.

Take the reports about the Modigliani and Picasso paintings.

These were projects executed by the same company, Oxia Palus, which was founded not by art historians but by doctoral students in machine learning.

In both cases, Oxia Palus relied upon traditional X-rays, X-ray fluorescence and infrared imaging that had already been carried out and published years prior – work that had revealed preliminary paintings beneath the visible layer on the artists’ canvases.

The company edited these X-rays and reconstituted them as new works of art by applying a technique called “neural style transfer.” This is a sophisticated-sounding term for a program that breaks works of art down into extremely small units, extrapolates a style from them and then promises to recreate images of other content in that same style.
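To make that quoted description a little more concrete: in the usual formulation of neural style transfer (Gatys et al.), ‘style’ is captured as correlations between a network’s feature maps via a Gram matrix, while ‘content’ is the feature maps themselves, and the program balances the two losses. Here is a toy Python sketch of just that arithmetic; the tiny ‘feature maps’ are invented numbers standing in for what a pretrained network would actually produce.

```python
# Toy "feature maps": each row is a channel, each column a spatial position.
def gram(features):
    """Gram matrix G[i][j] = dot(channel_i, channel_j): the channel
    correlations that stand in for 'style' in this framing."""
    return [[sum(a * b for a, b in zip(fi, fj)) for fj in features]
            for fi in features]

def style_loss(x_feats, style_feats):
    gx, gs = gram(x_feats), gram(style_feats)
    return sum((gx[i][j] - gs[i][j]) ** 2
               for i in range(len(gx)) for j in range(len(gx)))

def content_loss(x_feats, content_feats):
    return sum((a - b) ** 2
               for xc, cc in zip(x_feats, content_feats)
               for a, b in zip(xc, cc))

# Invented values; a real system extracts these from a pretrained CNN.
content = [[1.0, 2.0], [3.0, 4.0]]
style   = [[0.0, 1.0], [1.0, 0.0]]
x       = [[1.0, 2.0], [3.0, 4.0]]   # start the search from the content image

total = content_loss(x, content) + 0.1 * style_loss(x, style)
```

A full implementation would then adjust `x` by gradient descent to shrink `total`, which is how the “other content in that same style” image emerges.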

As you can ‘see’ my friend, the topic of AI and visual art is a juicy one. In fact, I have another example in my June 27, 2022 posting, which is titled, “Art appraised by algorithm.” So, Grenville’s and Entis’ decision to focus on AI and its impact on visual culture is quite timely.

Visual culture: seeing into the future

The VAG Imitation Game webpage lists these categories of visual culture as being represented in the show: “animation, architecture, art, fashion, graphic design, urban design and video games …” Movies and visual art, though not mentioned in the write-up, are represented; theatre and the other performing arts are neither mentioned nor represented. That’s not a surprise.

In addition to an area of science/technology that’s not well understood even by experts, the curators took on the truly amorphous (and overwhelming) topic of visual culture. Given that even writing this commentary has been a challenge, I imagine pulling the show together was quite the task.

Grenville often grounds his shows in a history of the subject and, this time, it seems especially striking. You’re in a building that is effectively a 19th century construct and in galleries that reflect a 20th century ‘white cube’ aesthetic, while looking for clues into the 21st century future of visual culture employing technology that has its roots in the 19th century and, to some extent, began to flower in the mid-20th century.

Chung’s collaboration is one of the only ‘optimistic’ notes about the future and, as noted earlier, it bears a resemblance to Wiener’s 1949 ‘Moth’.

Overall, it seems we are being cautioned about the future. For example, Oxman’s work seems bleak (bees with no flowers to pollinate and living in an eternal spring). Adding in ‘Creepers’ and surveillance along with issues of bias and social injustice reflects hesitation and concern about what we will see, who sees it, and how it will be represented visually.

Learning about robots, automatons, artificial intelligence, and more

I wish the Vancouver Art Gallery (and Vancouver’s other art galleries) would invest a little more in audience education. A couple of tours during the week, by someone who may or may not know what they’re talking about, do not suffice. The extra material about Stephanie Dinkins and her work (“Conversations with Bina48,” 2014–present) came from the de Young Museum’s website. In my July 26, 2021 commentary on North Vancouver’s Polygon Gallery 2021 show “Interior Infinite,” I found background information for artist Zanele Muholi on the Tate Modern’s website. There is nothing on the VAG website that helps you gain some perspective on the artists’ works.

It seems to me that if the VAG wants to be considered world class, it should conduct itself accordingly and beefing up its website with background information about their current shows would be a good place to start.

Robots, automata, and artificial intelligence

Prior to 1921, robots were known exclusively as automatons. These days, the word ‘automaton’ (or ‘automata’ in the plural) seems to be used to describe purely mechanical representations of humans from over 100 years ago whereas the word ‘robot’ can be either ‘humanlike’ or purely machine, e.g. a mechanical arm that performs the same function over and over. I have a good February 24, 2017 essay on automatons by Miguel Barral for OpenMind BBVA*, which provides some insight into the matter,

The concept of robot is relatively recent. The idea was introduced in 1921 by the Czech writer Karel Capek in his work R.U.R to designate a machine that performs tasks in place of man. But their predecessors, the automatons (from the Greek automata, or “mechanical device that works by itself”), have been the object of desire and fascination since antiquity. Some of the greatest inventors in history, such as Leonardo Da Vinci, have contributed to our fascination with these fabulous creations:

The Al-Jazari automatons

The earliest examples of known automatons appeared in the Islamic world in the 12th and 13th centuries. In 1206, the Arab polymath Al-Jazari, whose creations were known for their sophistication, described some of his most notable automatons: an automatic wine dispenser, a soap and towels dispenser and an orchestra-automaton that operated by the force of water. This latter invention was meant to liven up parties and banquets with music while floating on a pond, lake or fountain.

As the water flowed, it started a rotating drum with pegs that, in turn, moved levers whose movement produced different sounds and movements. As the pegs responsible for the musical notes could be exchanged for different ones in order to interpret another melody, it is considered one of the first programmable machines in history.
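The passage above describes what is essentially a step sequencer: the drum’s peg layout is the program, and exchanging pegs swaps the tune. A few lines of Python (with invented lever names, purely illustrative) capture the idea:

```python
# The drum is a list of steps; each step names the levers its pegs strike.
# Swapping in a different drum (peg layout) plays a different melody --
# the sense in which Al-Jazari's automaton counts as "programmable".
def play(drum, steps):
    """Yield the set of levers struck at each step as the drum rotates."""
    for t in range(steps):
        yield drum[t % len(drum)]

melody_a = [{"flute"}, set(), {"drum"}, {"flute", "drum"}]  # invented names
melody_b = [{"drum"}, {"drum"}, set(), {"flute"}]

first_cycle = list(play(melody_a, 4))
```

The machine’s mechanism never changes; only the data (the peg layout) does, which is the same separation of program from hardware we take for granted today.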

If you’re curious about automata, my friend, I found this Sept. 26, 2016 ABC News Radio item about singer Roger Daltrey and his wife Heather’s auction of their collection of 19th century French automata (there’s an embedded video showcasing these extraordinary works of art). For more about automata, robots, and androids, there’s an excellent May 4, 2022 article by James Vincent, ‘A visit to the human factory; How to build the world’s most realistic robot‘ for The Verge; Vincent’s article is about Engineered Arts, the UK-based company that built Ai-Da.

AI is often used interchangeably with ‘robot’ but they aren’t the same; not all robots have AI integrated into their processes. At its simplest, AI is an algorithm or set of algorithms, which may ‘live’ in a CPU and be effectively invisible, or ‘live’ in or make use of some kind of machine and/or humanlike body. As the experts have noted, artificial intelligence is a slippery concept.

*OpenMind is a non-profit project run by the Spanish multinational financial services company Banco Bilbao Vizcaya Argentaria (BBVA) to disseminate information on robotics and so much more (see its About us page).*

You can’t always get what you want

My friend,

I expect many of the show’s shortcomings (as perceived by me) are due to money and/or scheduling issues. For example, Ai-Da was at the Venice Biennale and if there was a choice between the VAG and Biennale, I know where I’d be.

Even with those caveats in mind, it is a bit surprising that there were no examples of wearable technology. For example, Toronto’s Tapestry Opera recently performed R.U.R. A Torrent of Light (based on the word ‘robot’ from Karel Čapek’s play, R.U.R., ‘Rossumovi Univerzální Roboti’), from my May 24, 2022 posting,

I have more about tickets prices, dates, and location later in this post but first, here’s more about the opera and the people who’ve created it from the Tapestry Opera’s ‘R.U.R. A Torrent of Light’ performance webpage,

“This stunning new opera combines dance, beautiful multimedia design, a chamber orchestra including 100 instruments creating a unique electronica-classical sound, and wearable technology [emphasis mine] created with OCAD University’s Social Body Lab, to create an immersive and unforgettable science-fiction experience.”

And, from later in my posting,

“Despite current stereotypes, opera was historically a launchpad for all kinds of applied design technologies. [emphasis mine] Having the opportunity to collaborate with OCAD U faculty is an invigorating way to reconnect to that tradition and foster connections between art, music and design, [emphasis mine]” comments the production’s Director Michael Hidetoshi Mori, who is also Tapestry Opera’s Artistic Director. 

That last quote brings me back to my comment about theatre and performing arts not being part of the show. Of course, the curators couldn’t do it all but a website with my hoped-for background and additional information could have helped to solve the problem.

The absence of the theatrical and performing arts in the VAG’s ‘Imitation Game’ is a bit surprising as the Council of Canadian Academies (CCA) in their third assessment, “Competing in a Global Innovation Economy: The Current State of R&D in Canada” released in 2018 noted this (from my April 12, 2018 posting),

Canada, relative to the world, specializes in subjects generally referred to as the humanities and social sciences (plus health and the environment), and does not specialize as much as others in areas traditionally referred to as the physical sciences and engineering. Specifically, Canada has comparatively high levels of research output in Psychology and Cognitive Sciences, Public Health and Health Services, Philosophy and Theology, Earth and Environmental Sciences, and Visual and Performing Arts. [emphasis mine] It accounts for more than 5% of world research in these fields. Conversely, Canada has lower research output than expected in Chemistry, Physics and Astronomy, Enabling and Strategic Technologies, Engineering, and Mathematics and Statistics. The comparatively low research output in core areas of the natural sciences and engineering is concerning, and could impair the flexibility of Canada’s research base, preventing research institutions and researchers from being able to pivot to tomorrow’s emerging research areas. [p. xix Print; p. 21 PDF]

US-centric

My friend,

I was a little surprised that the show was so centered on work from the US given that Grenville has curated at least one show with significant input from artists based in Asia. Both Japan and Korea are very active with regard to artificial intelligence and it’s hard to believe that their artists haven’t kept pace. (I’m not as familiar with China and its AI efforts, other than in the field of facial recognition, but it’s hard to believe their artists aren’t experimenting.)

The Americans, of course, are very important developers in the field of AI but they are not alone and it would have been nice to have seen something from Asia and/or Africa and/or something from one of the other Americas. In fact, anything which takes us out of the same old, same old. (Luba Elliott wrote this (2019/2020/2021?) essay, “Artificial Intelligence Art from Africa and Black Communities Worldwide” on Aya Data if you want to get a sense of some of the activity on the African continent. Elliott does seem to conflate Africa and Black Communities, for some clarity you may want to check out the Wikipedia entry on Africanfuturism, which contrasts with this August 12, 2020 essay by Donald Maloba, “What is Afrofuturism? A Beginner’s Guide.” Maloba also conflates the two.)

As it turns out, Luba Elliott presented at the 2019 Montréal Digital Spring event, which brings me to Canada’s artificial intelligence and arts scene.

I promise I haven’t turned into a flag waving zealot, my friend. It’s just odd there isn’t a bit more given that machine learning was pioneered at the University of Toronto. Here’s more about that (from the Wikipedia entry for Geoffrey Hinton),

Geoffrey Everest Hinton CC FRS FRSC (born 6 December 1947) is a British-Canadian cognitive psychologist and computer scientist, most noted for his work on artificial neural networks.

Hinton received the 2018 Turing Award, together with Yoshua Bengio [Canadian scientist] and Yann LeCun, for their work on deep learning. They are sometimes referred to as the “Godfathers of AI” and “Godfathers of Deep Learning,” and have continued to give public talks together.

Some of Hinton’s work was started in the US but since 1987, he has pursued his interests at the University of Toronto. He wasn’t proven right until 2012. Katrina Onstad’s February 29, 2018 article (Mr. Robot) for Toronto Life is a gripping read about Hinton and his work on neural networks. BTW, Yoshua Bengio (co-Godfather) is a Canadian scientist at the Université de Montréal and Yann LeCun (co-Godfather) is a French scientist at New York University.

Then there’s another contribution: our government was the first in the world to develop a national artificial intelligence strategy. Adding those developments to the CCA ‘State of Science’ report findings about visual arts and performing arts, is there another word besides ‘odd’ to describe the lack of Canadian voices?

You’re going to point out the installation by Ben Bogart (a member of Simon Fraser University’s Metacreation Lab for Creative AI and instructor at the Emily Carr University of Art + Design (ECU)) but it’s based on the iconic US sci-fi film, 2001: A Space Odyssey. As for the other Canadian, Sougwen Chung, she left Canada pretty quickly to get her undergraduate degree in the US and has since moved to the UK. (You could describe hers as the quintessential success story, i.e., moving from Canada only to get noticed here after success elsewhere.)

Of course, there are the CDM student projects but the projects seem less like an exploration of visual culture than an exploration of technology and industry requirements, from the ‘Master of Digital Media Students Develop Revolutionary Installations for Vancouver Art Gallery AI Exhibition‘ webpage, Note: A link has been removed,

In 2019, Bruce Grenville, Senior Curator at Vancouver Art Gallery, approached [the] Centre for Digital Media to collaborate on several industry projects for the forthcoming exhibition. Four student teams tackled the project briefs over the course of the next two years and produced award-winning installations that are on display until October 23 [2022].

Basically, my friend, it would have been nice to see other voices or, at the least, an attempt at representing other voices and visual cultures informed by AI. As for Canadian contributions, maybe put something on the VAG website?

Playing well with others

It’s always a mystery to me why the Vancouver cultural scene seems to be composed of a set of silos or closely guarded kingdoms. Reaching out to the public library and other institutions such as Science World might have cost time but could have enhanced the show.

For example, one of the branches of the New York Public Library ran a programme called, “We are AI” in March 2022 (see my March 23, 2022 posting about the five-week course, which was run as a learning circle). The course materials are available for free (We are AI webpage) and I imagine that adding a ‘visual culture module’ wouldn’t be that difficult.

There is one (rare) example of some Vancouver cultural institutions getting together to offer an art/science programme: in 2017, the Morris and Helen Belkin Gallery (at the University of British Columbia; UBC) hosted an exhibition of Santiago Ramon y Cajal’s work (see my Sept. 11, 2017 posting about the gallery show). Alongside that show, the folks at Café Scientifique held an ancillary event at Science World featuring a panel of professionals from UBC’s Faculty of Medicine and Dept. of Psychology discussing Cajal’s work.

In fact, where were the science and technology communities for this show?

On a related note, the 2022 ACM SIGGRAPH conference (August 7 – 11, 2022) is being held in Vancouver. (ACM is the Association for Computing Machinery; SIGGRAPH is for Special Interest Group on Computer Graphics and Interactive Techniques.) SIGGRAPH has been holding conferences in Vancouver every few years since at least 2011.

At this year’s conference, they have at least two sessions that indicate interests similar to the VAG’s. First, there’s Immersive Visualization for Research, Science and Art which includes AI and machine learning along with other related topics. There’s also, Frontiers Talk: Art in the Age of AI: Can Computers Create Art?

This is both an international conference and an exhibition (of art) and the whole thing seems to have kicked off on July 25, 2022. If you’re interested, the programme can be found here and registration here.

Last time SIGGRAPH was here the organizers seemed interested in outreach and they offered some free events.

In the end

It was good to see the show. The curators brought together some exciting material. As is always the case, there were some missed opportunities and a few blind spots. But all is not lost.

On July 27, 2022, the VAG held a virtual event with an artist,

Gwenyth Chao to learn more about what happened to the honeybees and hives in Oxman’s Synthetic Apiary project. As a transdisciplinary artist herself, Chao will also discuss the relationship between art, science, technology and design. She will then guide participants to create a space (of any scale, from insect to human) inspired by patterns found in nature.

Hopefully there will be more events inspired by specific ‘objects’. Meanwhile, on August 12, 2022, the VAG is hosting,

… in partnership with the Canadian Music Centre BC, New Music at the Gallery is a live concert series hosted by the Vancouver Art Gallery that features an array of musicians and composers who draw on contemporary art themes.

Highlighting a selection of twentieth- and twenty-first-century music compositions, this second concert, inspired by the exhibition The Imitation Game: Visual Culture in the Age of Artificial Intelligence, will spotlight The Iliac Suite (1957), the first piece ever written using only a computer, and Kaija Saariaho’s Terra Memoria (2006), which is in a large part dependent on a computer-generated musical process.

It would be lovely if they could include an Ada Lovelace Day event; this international celebration falls on October 11, 2022.

Do go. Do enjoy, my friend.

Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (1 of 2): The Objects

To my imaginary AI friend

Dear friend,

I thought you might be amused by these Roomba-like* paintbots at the Vancouver Art Gallery’s (VAG) latest exhibition, “The Imitation Game: Visual Culture in the Age of Artificial Intelligence” (March 5, 2022 – October 23, 2022).

Sougwen Chung, Omnia per Omnia, 2018, video (excerpt), Courtesy of the Artist

*A Roomba is a robot vacuum cleaner produced and sold by iRobot.

As far as I know, this is the Vancouver Art Gallery’s first art/science or art/technology exhibit and it is an alternately fascinating, exciting, and frustrating take on artificial intelligence and its impact on the visual arts. Curated by Bruce Grenville, VAG Senior Curator, and Glenn Entis, Guest Curator, the show features 20 ‘objects’ designed to both introduce viewers to the ‘imitation game’ and to challenge them. From the VAG Imitation Game webpage,

The Imitation Game surveys the extraordinary uses (and abuses) of artificial intelligence (AI) in the production of modern and contemporary visual culture around the world. The exhibition follows a chronological narrative that first examines the development of artificial intelligence, from the 1950s to the present [emphasis mine], through a precise historical lens. Building on this foundation, it emphasizes the explosive growth of AI across disciplines, including animation, architecture, art, fashion, graphic design, urban design and video games, over the past decade. Revolving around the important roles of machine learning and computer vision in AI research and experimentation, The Imitation Game reveals the complex nature of this new tool and demonstrates its importance for cultural production.

And now …

As you’ve probably guessed, my friend, you’ll find a combination of both background information and commentary on the show.

I’ve initially focused on two people (a scientist and a mathematician) who were seminal thinkers about machines, intelligence, creativity, and humanity. I’ve also provided some information about the curators, which hopefully gives you some insight into the show.

As for the show itself, you’ll find a few of the ‘objects’ highlighted, with one of them investigated at more length. The curators devoted some of the show to ethical and social justice issues; accordingly, the Vancouver Art Gallery hosted the University of British Columbia’s “Speculative Futures: Artificial Intelligence Symposium” on April 7, 2022,

Presented in conjunction with the exhibition The Imitation Game: Visual Culture in the Age of Artificial Intelligence, the Speculative Futures Symposium examines artificial intelligence and the specific uses of technology in its multifarious dimensions. Across four different panel conversations, leading thinkers of today will explore the ethical implications of technology and discuss how they are working to address these issues in cultural production.

So, you’ll find more on these topics here too.

And for anyone else reading this (not you, my friend who is ‘strong’ AI and not similar to the ‘weak’ AI found in this show), there is a description of ‘weak’ and ‘strong’ AI on the avtsim.com/weak-ai-strong-ai webpage, Note: A link has been removed,

There are two types of AI: weak AI and strong AI.

Weak, sometimes called narrow, AI is less intelligent as it cannot work without human interaction and focuses on a more narrow, specific, or niched purpose. …

Strong AI on the other hand is in fact comparable to the fictitious AIs we see in media like the terminator. The theoretical Strong AI would be equivalent or greater to human intelligence.

….

My dear friend, I hope you will enjoy.

The Imitation Game and ‘mad, bad, and dangerous to know’

In some circles, it’s better known as ‘The Turing Test’; the Vancouver Art Gallery’s ‘Imitation Game’ displays a copy of Alan Turing’s foundational paper on the question of whether artificial intelligence is possible (I thought this was pretty exciting).

Here’s more from The Turing Test essay by Graham Oppy and David Dowe for the Stanford Encyclopedia of Philosophy,

The phrase “The Turing Test” is most properly used to refer to a proposal made by Turing (1950) as a way of dealing with the question whether machines can think. According to Turing, the question whether machines can think is itself “too meaningless” to deserve discussion (442). However, if we consider the more precise—and somehow related—question whether a digital computer can do well in a certain kind of game that Turing describes (“The Imitation Game”), then—at least in Turing’s eyes—we do have a question that admits of precise discussion. Moreover, as we shall see, Turing himself thought that it would not be too long before we did have digital computers that could “do well” in the Imitation Game.

The phrase “The Turing Test” is sometimes used more generally to refer to some kinds of behavioural tests for the presence of mind, or thought, or intelligence in putatively minded entities. …
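Turing’s move was to swap the vague question “can machines think?” for a concrete, testable game, and that game has a simple protocol shape: an interrogator questions two hidden players and must name the machine. Here’s a minimal toy sketch in Python; my friend, be assured that the player functions are hypothetical stand-ins of my own invention, not anything from Turing’s paper or the show,

```python
import random

def imitation_game(machine_reply, human_reply, interrogator_guess, rounds=3):
    """Toy version of Turing's imitation game: the interrogator exchanges
    messages with two hidden players, then names the one it believes is
    the machine. Returns True if the guess is correct."""
    machine_label = random.choice(["A", "B"])       # hide the machine at random
    human_label = "B" if machine_label == "A" else "A"
    players = {machine_label: machine_reply, human_label: human_reply}
    transcript = []
    for i in range(rounds):
        question = f"Question {i}"                  # stand-in for real questioning
        transcript.append({label: fn(question) for label, fn in players.items()})
    return interrogator_guess(transcript) == machine_label

# Trivial players: against a judge who guesses blindly, the machine
# should escape detection about half the time.
fooled_count = sum(
    not imitation_game(
        machine_reply=lambda q: q,                  # hypothetical machine player
        human_reply=lambda q: "I'm only human",     # hypothetical human player
        interrogator_guess=lambda transcript: "A",  # judge always guesses "A"
    )
    for _ in range(1000)
)
print(f"machine escaped detection in {fooled_count}/1000 games")
```

In Turing’s terms, the machine “does well” when the interrogator can’t pick it out much more often than chance.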

Next to the display holding Turing’s paper is another display with an excerpt from Turing explaining how he believed Ada Lovelace would have responded to the idea that machines could think, based on a copy of some of her writing (also on display). She proposed that creativity, not thinking, is what sets people apart from machines. (See the April 17, 2020 article “Thinking Machines? Has the Lovelace Test Been Passed?” on mindmatters.ai.)

It’s like a dialogue between two seminal thinkers who lived about 100 years apart: Lovelace (1815–1852) and Turing (1912–1954). Both have fascinating back stories (more about those later) and both played roles in how computers and artificial intelligence are viewed.

Adding some interest to this walk down memory lane is a 3rd display, an illustration of the ‘Mechanical Turk’, a chess-playing machine that made the rounds in Europe from 1770 until it was destroyed in 1854. A hoax that fooled people for quite a while, it is a reminder that we’ve been interested in intelligent machines for centuries. (Friend, Turing and Lovelace and the Mechanical Turk are found in Pod 1.)

Back story: Turing and the apple

Turing is credited with being instrumental in breaking the German Enigma code during World War II and helping to end the war. I find it odd that he ended up at the University of Manchester in the post-war years. One would expect him to have been at Oxford or Cambridge. At any rate, he died in 1954 of cyanide poisoning, two years after he was arrested for being homosexual and convicted of gross indecency. Given the choice of incarceration or chemical castration, he chose the latter. There is, to this day, debate about whether or not his death was suicide. Here’s how it is described in this Wikipedia entry (Note: Links have been removed),

On 8 June 1954, at his house at 43 Adlington Road, Wilmslow,[150] Turing’s housekeeper found him dead. He had died the previous day at the age of 41. Cyanide poisoning was established as the cause of death.[151] When his body was discovered, an apple lay half-eaten beside his bed, and although the apple was not tested for cyanide,[152] it was speculated that this was the means by which Turing had consumed a fatal dose. An inquest determined that he had committed suicide. Andrew Hodges and another biographer, David Leavitt, have both speculated that Turing was re-enacting a scene from the Walt Disney film Snow White and the Seven Dwarfs (1937), his favourite fairy tale. Both men noted that (in Leavitt’s words) he took “an especially keen pleasure in the scene where the Wicked Queen immerses her apple in the poisonous brew”.[153] Turing’s remains were cremated at Woking Crematorium on 12 June 1954,[154] and his ashes were scattered in the gardens of the crematorium, just as his father’s had been.[155]

Philosopher Jack Copeland has questioned various aspects of the coroner’s historical verdict. He suggested an alternative explanation for the cause of Turing’s death: the accidental inhalation of cyanide fumes from an apparatus used to electroplate gold onto spoons. The potassium cyanide was used to dissolve the gold. Turing had such an apparatus set up in his tiny spare room. Copeland noted that the autopsy findings were more consistent with inhalation than with ingestion of the poison. Turing also habitually ate an apple before going to bed, and it was not unusual for the apple to be discarded half-eaten.[156] Furthermore, Turing had reportedly borne his legal setbacks and hormone treatment (which had been discontinued a year previously) “with good humour” and had shown no sign of despondency prior to his death. He even set down a list of tasks that he intended to complete upon returning to his office after the holiday weekend.[156] Turing’s mother believed that the ingestion was accidental, resulting from her son’s careless storage of laboratory chemicals.[157] Biographer Andrew Hodges theorised that Turing arranged the delivery of the equipment to deliberately allow his mother plausible deniability with regard to any suicide claims.[158]

The US Central Intelligence Agency (CIA) also has an entry for Alan Turing, dated April 10, 2015, titled The Enigma of Alan Turing.

Back story: Ada Byron Lovelace, the 2nd generation of ‘mad, bad, and dangerous to know’

A mathematician and genius in her own right, Ada Lovelace’s father George Gordon Byron, better known as the poet Lord Byron, was notoriously described as ‘mad, bad, and dangerous to know’.

Lovelace too could have been ‘mad, bad, …’ but she is described less memorably as “… manipulative and aggressive, a drug addict, a gambler and an adulteress, …” as mentioned in my October 13, 2015 posting. That posting marked the 200th anniversary of her birth, which was celebrated with a British Broadcasting Corporation (BBC) documentary and an exhibit at the Science Museum in London, UK.

She belongs in the Vancouver Art Gallery’s show along with Alan Turing due to her prediction that computers could be made to create music. She also published the first computer program. Her feat is astonishing when you know only one working model (1/7th of the proposed final size) of a computer was ever produced. (The machine invented by Charles Babbage was known as a difference engine. You can find out more about the Difference engine on Wikipedia and about Babbage’s proposed second invention, the Analytical engine.)

(Byron had almost nothing to do with his daughter although his reputation seems to have dogged her. You can find out more about Lord Byron here.)

AI and visual culture at the VAG: the curators

As mentioned earlier, the VAG’s “The Imitation Game: Visual Culture in the Age of Artificial Intelligence” show runs from March 5, 2022 – October 23, 2022. Twice now, I have been to this weirdly exciting and frustrating show.

Bruce Grenville, VAG Chief/Senior Curator, seems to specialize in pulling together diverse materials to illustrate ‘big’ topics. His profile for Emily Carr University of Art + Design (where Grenville teaches) mentions these shows,

… He has organized many thematic group exhibitions including, MashUp: The Birth of Modern Culture [emphasis mine], a massive survey documenting the emergence of a mode of creativity that materialized in the late 1800s and has grown to become the dominant model of cultural production in the 21st century; KRAZY! The Delirious World [emphasis mine] of Anime + Manga + Video Games + Art, a timely and important survey of modern and contemporary visual culture from around the world; Home and Away: Crossing Cultures on the Pacific Rim [emphasis mine] a look at the work of six artists from Vancouver, Beijing, Ho Chi Minh City, Seoul and Los Angeles, who share a history of emigration and diaspora. …

Glenn Entis, Guest Curator and founding faculty member of Vancouver’s Centre for Digital Media (CDM) is Grenville’s co-curator, from Entis’ CDM profile,

“… an Academy Award-winning animation pioneer and games industry veteran. The former CEO of Dreamworks Interactive, Glenn worked with Steven Spielberg and Jeffrey Katzenberg on a number of video games …,”

Steve Newton in his March 4, 2022 preview does a good job of describing the show although I strongly disagree with the title of his article which proclaims “The Vancouver Art Gallery takes a deep dive into artificial intelligence with The Imitation Game.” I think it’s more of a shallow dive meant to cover more distance than depth,

… The exhibition kicks off with an interactive introduction inviting visitors to actively identify diverse areas of cultural production influenced by AI.

“That was actually one of the pieces that we produced in collaboration with the Centre for Digital Media,” Grenville notes, “so we worked with some graduate-student teams that had actually helped us to design that software. It was the beginning of COVID when we started to design this, so we actually wanted a no-touch interactive. So, really, the idea was to say, ‘Okay, this is the very entrance to the exhibition, and artificial intelligence, this is something I’ve heard about, but I’m not really sure how it’s utilized in ways. But maybe I know something about architecture; maybe I know something about video games; maybe I know something about the history of film.

“So you point to these 10 categories of visual culture [emphasis mine]–video games, architecture, fashion design, graphic design, industrial design, urban design–so you point to one of those, and you might point to ‘film’, and then when you point at it that opens up into five different examples of what’s in the show, so it could be 2001: A Space Odyssey, or Bladerunner, or World on a Wire.”

After the exhibition’s introduction—which Grenville equates to “opening the door to your curiosity” about artificial intelligence–visitors encounter one of its main categories, Objects of Wonder, which speaks to the history of AI and the critical advances the technology has made over the years.

“So there are 20 Objects of Wonder [emphasis mine],” Grenville says, “which go from 1949 to 2022, and they kind of plot out the history of artificial intelligence over that period of time, focusing on a specific object. Like [mathematician and philosopher] Norbert Wiener made this cybernetic creature, he called it a ‘Moth’, in 1949. So there’s a section that looks at this idea of kind of using animals–well, machine animals–and thinking about cybernetics, this idea of communication as feedback, early thinking around neuroscience and how neuroscience starts to imagine this idea of a thinking machine.

And there’s this from Newton’s March 4, 2022 preview,

“It’s interesting,” Grenville ponders, “artificial intelligence is virtually unregulated. [emphasis mine] You know, if you think about the regulatory bodies that govern TV or radio or all the types of telecommunications, there’s no equivalent for artificial intelligence, which really doesn’t make any sense. And so what happens is, sometimes with the best intentions [emphasis mine]—sometimes not with the best intentions—choices are made about how artificial intelligence develops. So one of the big ones is facial-recognition software [emphasis mine], and any body-detection software that’s being utilized.

In addition to being the best overview of the show I’ve seen so far, Newton’s preview is the only one where you get a little insight into what the curators were thinking when they were developing it.

A deep dive into AI?

It was only while searching for a little information before seeing the show that I realized I didn’t have a definition for artificial intelligence! What is AI? Sadly, there are no definitions of AI in the exhibit.

It seems even experts don’t have a good definition. Take a look at this,

The definition of AI is fluid [emphasis mine] and reflects a constantly shifting landscape marked by technological advancements and growing areas of application. Indeed, it has frequently been observed that once AI becomes capable of solving a particular problem or accomplishing a certain task, it is often no longer considered to be “real” intelligence [emphasis mine] (Haenlein & Kaplan, 2019). A firm definition was not applied for this report [emphasis mine], given the variety of implementations described above. However, for the purposes of deliberation, the Panel chose to interpret AI as a collection of statistical and software techniques, as well as the associated data and the social context in which they evolve — this allows for a broader and more inclusive interpretation of AI technologies and forms of agency. The Panel uses the term AI interchangeably to describe various implementations of machine-assisted design and discovery, including those based on machine learning, deep learning, and reinforcement learning, except for specific examples where the choice of implementation is salient. [p. 6 print version; p. 34 PDF version]

The above is from the Leaps and Boundaries report released May 10, 2022 by the Council of Canadian Academies’ Expert Panel on Artificial Intelligence for Science and Engineering.

Sometimes a show will take you in an unexpected direction. I feel a lot better ‘not knowing’. Still, I wish the curators had acknowledged somewhere in the show that artificial intelligence is a slippery concept. Especially when you add in robots and automatons. (more about them later)

21st century technology in a 19th/20th century building

Void stairs inside the building. Completed in 1906, the building was later designated as a National Historic Site in 1980 [downloaded from https://en.wikipedia.org/wiki/Vancouver_Art_Gallery#cite_note-canen-7]

Just barely making it into the 20th century, the building where the Vancouver Art Gallery currently resides was for many years the provincial courthouse (1911 – 1978). In some ways, it’s a disconcerting setting for this show.

They’ve done their best to make the upstairs where the exhibit is displayed look like today’s galleries with their ‘white cube aesthetic’ and strong resemblance to the scientific laboratories seen in movies.

(For more about the dominance, since the 1930s, of the ‘white cube aesthetic’ in art galleries around the world, see my July 26, 2021 posting; scroll down about 50% of the way.)

It makes for an interesting tension, the contrast between the grand staircase, the cupola, and other architectural elements and the sterile, ‘laboratory’ environment of the modern art gallery.

20 Objects of Wonder and the flow of the show

It was flummoxing. Where are the 20 objects? Why does it feel like a maze in a laboratory? Loved the bees, but why? Eeeek Creepers! What is visual culture anyway? Where am I?

The objects of the show

It turns out that the curators have a more refined concept for ‘object’ than I do. There weren’t 20 material objects, there were 20 numbered ‘pods’ with perhaps a screen or a couple of screens or a screen and a material object or two illustrating the pod’s topic.

Looking up a definition for the word (accessed from a June 9, 2022 duckduckgo.com search) yielded this (the second one seems à propos),

object (ŏb′jĭkt, -jĕkt″)

noun

1. Something perceptible by one or more of the senses, especially by vision or touch; a material thing.

2. A focus of attention, feeling, thought, or action.

3. A limiting factor that must be considered.

The American Heritage® Dictionary of the English Language, 5th Edition.

Each pod = a focus of attention.

The show’s flow is a maze. Am I a rat?

The pods are defined by a number and by temporary walls. So if you look up, you’ll see a number and a space partly enclosed by a temporary wall or two.

It’s a very choppy experience. For example, one minute you can be in pod 1 and, when you turn the corner, you’re in pod 4 or 5 or ? There are pods I’ve not seen, despite my two visits, because I kept losing my way. This led to an existential crisis on my second visit. “Had I missed the greater meaning of this show? Was there some sort of logic to how it was organized? Was there meaning to my life? Was I a rat being nudged around in a maze?” I didn’t know.

Thankfully, I have since recovered. But, I will return to my existential crisis later, with a special mention for “Creepers.”

The fascinating

My friend, you know I appreciated the history: in addition to Alan Turing, Ada Lovelace and the Mechanical Turk at the beginning of the show, they included a reference to Ovid (or Pūblius Ovidius Nāsō), a Roman poet who lived from 43 BCE – 17/18 CE, in one of the double-digit pods (17? or 10? or …) featuring a robot on screen. As to why Ovid might be included, this excerpt from a February 12, 2018 posting on the cosmolocal.org website provides a clue (Note: Links have been removed),

The University of King’s College [Halifax, Nova Scotia] presents Automatons! From Ovid to AI, a nine-lecture series examining the history, issues and relationships between humans, robots, and artificial intelligence [emphasis mine]. The series runs from January 10 to April 4 [2018], and features leading scholars, performers and critics from Canada, the US and Britain.

“Drawing from theatre, literature, art, science and philosophy, our 2018 King’s College Lecture Series features leading international authorities exploring our intimate relationships with machines,” says Dr. Gordon McOuat, professor in the King’s History of Science and Technology (HOST) and Contemporary Studies Programs.

“From the myths of Ovid [emphasis mine] and the automatons [emphasis mine] of the early modern period to the rise of robots, cyborgs, AI and artificial living things in the modern world, the 2018 King’s College Lecture Series examines the historical, cultural, scientific and philosophical place of automatons in our lives—and our future,” adds McOuat.

I loved the way the curators managed to integrate the historical roots for artificial intelligence and, by extension, the world of automatons, robots, cyborgs, and androids. Yes, starting the show with Alan Turing and Ada Lovelace could be expected, but Norbert Wiener’s Moth (1949) acts as a sort of preview for Sougwen Chung’s “Omnia per Omnia, 2018” (GIF seen at the beginning of this post). Take a look for yourself (from the cyberneticzoo.com September 19, 2009 posting by cyberne1). Do you see the similarity, or am I the only one?

[sourced from Google images, Source: Life; downloaded from https://cyberneticzoo.com/cyberneticanimals/1949-wieners-moth-wiener-wiesner-singleton/]

Sculpture

This is the first time I’ve come across an AI/sculpture project. The VAG show features Scott Eaton’s sculptures on screens in a room devoted to his work.

Scott Eaton: Entangled II, 2019 4k video (still) Courtesy of the Artist [downloaded from https://www.vanartgallery.bc.ca/exhibitions/the-imitation-game]

This looks like an image of a piece of ginger root, and it’s fascinating to watch the process as the AI agent ‘evolves’ Eaton’s drawings into onscreen sculptures. It would have enhanced the experience if at least one of Eaton’s ‘evolved’ and physically realized sculptures had been present in the room, but perhaps there were financial and/or logistical reasons for the absence.

Both Chung and Eaton are collaborating with an AI agent. In Chung’s case the AI is integrated into the paintbots with which she interacts and paints alongside and in Eaton’s case, it’s via a computer screen. In both cases, the work is mildly hypnotizing in a way that reminds me of lava lamps.

One last note about Chung and her work. She was one of the artists invited to present new work at an invite-only April 22, 2022 Embodied Futures workshop at the “What will life become?” event held by the Berggruen Institute and the University of Southern California (USC),

Embodied Futures invites participants to imagine novel forms of life, mind, and being through artistic and intellectual provocations on April 22 [2022].

Beginning at 1 p.m., together we will experience the launch of five artworks commissioned by the Berggruen Institute. We asked these artists: How does your work inflect how we think about “the human” in relation to alternative “embodiments” such as machines, AIs, plants, animals, the planet, and possible alien life forms in the cosmos? [emphases mine]  Later in the afternoon, we will take provocations generated by the morning’s panels and the art premieres in small breakout groups that will sketch futures worlds, and lively entities that might dwell there, in 2049.

This leads to (and my friend, while I too am taking a shallow dive, for this bit I’m going a little deeper):

Bees and architecture

Neri Oxman’s contribution (Golden Bee Cube, Synthetic Apiary II [2020]) is an exhibit featuring three honeycomb structures and a video of the bees in her synthetic apiary.

Neri Oxman and the MIT Mediated Matter Group, Golden Bee Cube, Synthetic Apiary II, 2020, beeswax, acrylic, gold particles, gold powder Courtesy of Neri Oxman and the MIT Mediated Matter Group

Neri Oxman (then a faculty member of the Mediated Matter Group at the Massachusetts Institute of Technology) described the basis for the first and all other iterations of her synthetic apiary in Patrick Lynch’s October 5, 2016 article for ‘ArchDaily; Broadcasting Architecture Worldwide’, Note: Links have been removed,

Designer and architect Neri Oxman and the Mediated Matter group have announced their latest design project: the Synthetic Apiary. Aimed at combating the massive bee colony losses that have occurred in recent years, the Synthetic Apiary explores the possibility of constructing controlled, indoor environments that would allow honeybee populations to thrive year-round.

“It is time that the inclusion of apiaries—natural or synthetic—for this “keystone species” be considered a basic requirement of any sustainability program,” says Oxman.

In developing the Synthetic Apiary, Mediated Matter studied the habits and needs of honeybees, determining the precise amounts of light, humidity and temperature required to simulate a perpetual spring environment. [emphasis mine] They then engineered an undisturbed space where bees are provided with synthetic pollen and sugared water and could be evaluated regularly for health.

In the initial experiment, the honeybees’ natural cycle proved to adapt to the new environment, as the Queen was able to successfully lay eggs in the apiary. The bees showed the ability to function normally in the environment, suggesting that natural cultivation in artificial spaces may be possible across scales, “from organism- to building-scale.”

“At the core of this project is the creation of an entirely synthetic environment enabling controlled, large-scale investigations of hives,” explain the designers.

Mediated Matter chose to research into honeybees not just because of their recent loss of habitat, but also because of their ability to work together to create their own architecture, [emphasis mine] a topic the group has explored in their ongoing research on biologically augmented digital fabrication, including employing silkworms to create objects and environments at product, architectural, and possibly urban, scales.

“The Synthetic Apiary bridges the organism- and building-scale by exploring a “keystone species”: bees. Many insect communities present collective behavior known as “swarming,” prioritizing group over individual survival, while constantly working to achieve common goals. Often, groups of these eusocial organisms leverage collaborative behavior for relatively large-scale construction. For example, ants create extremely complex networks by tunneling, wasps generate intricate paper nests with materials sourced from local areas, and bees deposit wax to build intricate hive structures.”

This January 19, 2022 article by Crown Honey for its eponymous blog updates Oxman’s work (Note 1: All emphases are mine; Note 2: A link has been removed),

Synthetic Apiary II investigates co-fabrication between humans and honey bees through the use of designed environments in which Apis mellifera colonies construct comb. These designed environments serve as a means by which to convey information to the colony. The comb that the bees construct within these environments comprises their response to the input information, enabling a form of communication through which we can begin to understand the hive’s collective actions from their perspective.

Some environments are embedded with chemical cues created through a novel pheromone 3D-printing process, while others generate magnetic fields of varying strength and direction. Others still contain geometries of varying complexity or designs that alter their form over time.

When offered wax augmented with synthetic biomarkers, bees appear to readily incorporate it into their construction process, likely due to the high energy cost of producing fresh wax. This suggests that comb construction is a responsive and dynamic process involving complex adaptations to perturbations from environmental stimuli, not merely a set of predefined behaviors building toward specific constructed forms. Each environment therefore acts as a signal that can be sent to the colony to initiate a process of co-fabrication.

Characterization of constructed comb morphology generally involves visual observation and physical measurements of structural features—methods which are limited in scale of analysis and blind to internal architecture. In contrast, the wax structures built by the colonies in Synthetic Apiary II are analyzed through high-throughput X-ray computed tomography (CT) scans that enable a more holistic digital reconstruction of the hive’s structure.

Geometric analysis of these forms provides information about the hive’s design process, preferences, and limitations when tied to the inputs, and thereby yields insights into the invisible mediations between bees and their environment.

Developing computational tools to learn from bees can facilitate the very beginnings of a dialogue with them. Refined by evolution over hundreds of thousands of years, their comb-building behaviors and social organizations may reveal new forms and methods of formation that can be applied across our human endeavors in architecture, design, engineering, and culture.

Further, with a basic understanding and language established, methods of co-fabrication together with bees may be developed, enabling the use of new biocompatible materials and the creation of more efficient structural geometries that modern technology alone cannot achieve.

In this way, we also move our built environment toward a more synergistic embodiment, able to be more seamlessly integrated into natural environments through material and form, even providing habitats of benefit to both humans and nonhumans. It is essential to our mutual survival for us to not only protect but moreover to empower these critical pollinators – whose intrinsic behaviors and ecosystems we have altered through our industrial processes and practices of human-centric design – to thrive without human intervention once again.

In order to design our way out of the environmental crisis that we ourselves created, we must first learn to speak nature’s language. …

The three (natural, gold nanoparticle, and silver nanoparticle) honeycombs in the exhibit are among the few physical objects in the show (the others being the historical documents and the paintbots with their canvasses), and they're almost a relief after the parade of screens. It's the accompanying video that's eerie. Everything is in white, as befits a science laboratory, in this synthetic apiary where bees are fed sugar water and fooled into a spring that is eternal.

Courtesy: Massachusetts Institute of Technology Copyright: Mediated Matter [downloaded from https://www.media.mit.edu/projects/synthetic-apiary/overview/]

(You may want to check out Lynch’s October 5, 2016 article or Crown Honey’s January 19, 2022 article as both have embedded images and the Lynch article includes a Synthetic Apiary video. The image above is a still from the video.)

As I asked a friend, where are the flowers? Ron Miksha, a bee ecologist working at the University of Calgary, details some of the problems with Oxman’s Synthetic Apiary this way in his October 7, 2016 posting on his Bad Beekeeping Blog,

In a practical sense, the synthetic apiary fails on many fronts: Bees will survive a few months on concoctions of sugar syrup and substitute pollen, but they need a natural variety of amino acids and minerals to actually thrive. They need propolis and floral pollen. They need a ceiling 100 metres high and a 2-kilometre hallway if drone and queen will mate, or they’ll die after the old queen dies. They need an artificial sun that travels across the sky, otherwise, the bees will be attracted to artificial lights and won’t return to their hive. They need flowery meadows, fresh water, open skies. [emphasis mine] They need a better holodeck.

Dorothy Woodend’s March 10, 2022 review of the VAG show for The Tyee poses other issues with the bees and the honeycombs,

When AI messes about with other species, there is something even more unsettling about the process. American-Israeli artist Neri Oxman’s Golden Bee Cube, Synthetic Apiary II, 2020 uses real bees who are proffered silver and gold [nanoparticles] to create their comb structures. While the resulting hives are indeed beautiful, rendered in shades of burnished metal, there is a quality of unease imbued in them. Is the piece akin to apiary torture chambers? I wonder how the bees feel about this collaboration and whether they’d like to renegotiate the deal.

There’s no question the honeycombs are fascinating and disturbing, but I don’t understand how artificial intelligence was a key factor in either version of Oxman’s synthetic apiary. In the 2022 article by Crown Honey, there’s this: “Developing computational tools to learn from bees can facilitate the very beginnings of a dialogue with them [honeybees].” It’s probable that the computational tools being referenced include AI, and the Crown Honey article seems to suggest those tools are being used to analyze the bees’ behaviour after the fact.

Yes, I can imagine a future where ‘strong’ AI (such as you, my friend) is in ‘dialogue’ with the bees and making suggestions and running the experiments but it’s not clear that this is the case currently. The Oxman exhibit contribution would seem to be about the future and its possibilities whereas many of the other ‘objects’ concern the past and/or the present.

Friend, let’s take a break, shall we? Part 2 is coming up.

A 3D printed eye cornea and a 3D printed copy of your brain (also: a Brad Pitt connection)

Sometimes it’s hard to keep up with 3D tissue printing news. I have two news bits, one concerning eyes and another concerning brains.

3D printed human corneas

A May 29, 2018 news item on ScienceDaily trumpets the news,

The first human corneas have been 3D printed by scientists at Newcastle University, UK.

It means the technique could be used in the future to ensure an unlimited supply of corneas.

As the outermost layer of the human eye, the cornea has an important role in focusing vision.

Yet there is a significant shortage of corneas available to transplant, with 10 million people worldwide requiring surgery to prevent corneal blindness as a result of diseases such as trachoma, an infectious eye disorder.

In addition, almost 5 million people suffer total blindness due to corneal scarring caused by burns, lacerations, abrasion or disease.

The proof-of-concept research, published today [May 29, 2018] in Experimental Eye Research, reports how stem cells (human corneal stromal cells) from a healthy donor cornea were mixed together with alginate and collagen to create a solution that could be printed, a ‘bio-ink’.

Here are the proud researchers with their cornea,

Caption: Dr. Steve Swioklo and Professor Che Connon with a dyed cornea. Credit: Newcastle University, UK

A May 30, 2018 Newcastle University press release (also on EurekAlert but published on May 29, 2018), which originated the news item, adds more details,

Using a simple low-cost 3D bio-printer, the bio-ink was successfully extruded in concentric circles to form the shape of a human cornea. It took less than 10 minutes to print.

The stem cells were then shown to culture – or grow.

Che Connon, Professor of Tissue Engineering at Newcastle University, who led the work, said: “Many teams across the world have been chasing the ideal bio-ink to make this process feasible.

“Our unique gel – a combination of alginate and collagen – keeps the stem cells alive whilst producing a material which is stiff enough to hold its shape but soft enough to be squeezed out the nozzle of a 3D printer.

“This builds upon our previous work in which we kept cells alive for weeks at room temperature within a similar hydrogel. Now we have a ready to use bio-ink containing stem cells allowing users to start printing tissues without having to worry about growing the cells separately.”

The scientists, including first author and PhD student Ms Abigail Isaacson from the Institute of Genetic Medicine, Newcastle University, also demonstrated that they could build a cornea to match a patient’s unique specifications.

The dimensions of the printed tissue were originally taken from an actual cornea. By scanning a patient’s eye, they could use the data to rapidly print a cornea which matched the size and shape.

Professor Connon added: “Our 3D printed corneas will now have to undergo further testing and it will be several years before we could be in the position where we are using them for transplants.

“However, what we have shown is that it is feasible to print corneas using coordinates taken from a patient eye and that this approach has potential to combat the world-wide shortage.”

Here’s a link to and a citation for the paper,

3D bioprinting of a corneal stroma equivalent by Abigail Isaacson, Stephen Swioklo, Che J. Connon. Experimental Eye Research Volume 173, August 2018, Pages 188–193. DOI: 10.1016/j.exer.2018.05.010 (Epub ahead of print May 14, 2018)

This paper is behind a paywall.

A 3D printed copy of your brain

I love the title for this May 30, 2018 Wyss Institute for Biologically Inspired Engineering news release: Creating piece of mind by Lindsay Brownell (also on EurekAlert),

What if you could hold a physical model of your own brain in your hands, accurate down to its every unique fold? That’s just a normal part of life for Steven Keating, Ph.D., who had a baseball-sized tumor removed from his brain at age 26 while he was a graduate student in the MIT Media Lab’s Mediated Matter group. Curious to see what his brain actually looked like before the tumor was removed, and with the goal of better understanding his diagnosis and treatment options, Keating collected his medical data and began 3D printing his MRI [magnetic resonance imaging] and CT [computed tomography] scans, but was frustrated that existing methods were prohibitively time-intensive, cumbersome, and failed to accurately reveal important features of interest. Keating reached out to some of his group’s collaborators, including members of the Wyss Institute at Harvard University, who were exploring a new method for 3D printing biological samples.

“It never occurred to us to use this approach for human anatomy until Steve came to us and said, ‘Guys, here’s my data, what can we do?” says Ahmed Hosny, who was a Research Fellow at the Wyss Institute at the time and is now a machine learning engineer at the Dana-Farber Cancer Institute. The result of that impromptu collaboration – which grew to involve James Weaver, Ph.D., Senior Research Scientist at the Wyss Institute; Neri Oxman, [emphasis mine] Ph.D., Director of the MIT Media Lab’s Mediated Matter group and Associate Professor of Media Arts and Sciences; and a team of researchers and physicians at several other academic and medical centers in the US and Germany – is a new technique that allows images from MRI, CT, and other medical scans to be easily and quickly converted into physical models with unprecedented detail. The research is reported in 3D Printing and Additive Manufacturing.

“I nearly jumped out of my chair when I saw what this technology is able to do,” says Beth Ripley, M.D. Ph.D., an Assistant Professor of Radiology at the University of Washington and clinical radiologist at the Seattle VA, and co-author of the paper. “It creates exquisitely detailed 3D-printed medical models with a fraction of the manual labor currently required, making 3D printing more accessible to the medical field as a tool for research and diagnosis.”

Imaging technologies like MRI and CT scans produce high-resolution images as a series of “slices” that reveal the details of structures inside the human body, making them an invaluable resource for evaluating and diagnosing medical conditions. Most 3D printers build physical models in a layer-by-layer process, so feeding them layers of medical images to create a solid structure is an obvious synergy between the two technologies.

However, there is a problem: MRI and CT scans produce images with so much detail that the object(s) of interest need to be isolated from surrounding tissue and converted into surface meshes in order to be printed. This is achieved via either a very time-intensive process called “segmentation” where a radiologist manually traces the desired object on every single image slice (sometimes hundreds of images for a single sample), or an automatic “thresholding” process in which a computer program quickly converts areas that contain grayscale pixels into either solid black or solid white pixels, based on a shade of gray that is chosen to be the threshold between black and white. However, medical imaging data sets often contain objects that are irregularly shaped and lack clear, well-defined borders; as a result, auto-thresholding (or even manual segmentation) often over- or under-exaggerates the size of a feature of interest and washes out critical detail.
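For anyone curious what “auto-thresholding” looks like in practice, here’s a toy sketch (my own illustration in Python, not the researchers’ code): every pixel is compared against a single gray cutoff, which is exactly why the soft, irregular boundaries in medical scans get washed out,

```python
import numpy as np

def auto_threshold(slice_2d, level=0.5):
    """Naive global thresholding of one grayscale image slice.

    Every pixel at or above `level` becomes solid white (1) and
    everything else solid black (0). Fast, but a single cutoff erases
    the subtle gradations a scan actually contains.
    """
    return (slice_2d >= level).astype(np.uint8)

# Two nearly identical shades of gray (0.48 and 0.52) straddle the
# cutoff and end up as opposite extremes -- the detail loss the
# press release describes.
slice_2d = np.array([[0.48, 0.52],
                     [0.10, 0.90]])
print(auto_threshold(slice_2d))  # → [[0 1]
                                 #    [0 1]]
```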

The new method described by the paper’s authors gives medical professionals the best of both worlds, offering a fast and highly accurate method for converting complex images into a format that can be easily 3D printed. The key lies in printing with dithered bitmaps, a digital file format in which each pixel of a grayscale image is converted into a series of black and white pixels, and the density of the black pixels is what defines the different shades of gray rather than the pixels themselves varying in color.

Similar to the way images in black-and-white newsprint use varying sizes of black ink dots to convey shading, the more black pixels that are present in a given area, the darker it appears. By simplifying all pixels from various shades of gray into a mixture of black or white pixels, dithered bitmaps allow a 3D printer to print complex medical images using two different materials that preserve all the subtle variations of the original data with much greater accuracy and speed.
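To make the dithering idea concrete, here’s a small, self-contained sketch of Floyd–Steinberg error-diffusion dithering (a classic algorithm and my own illustration; the paper’s actual bitmap pipeline is more sophisticated). Each pixel is snapped to black or white, and the rounding error is pushed onto its neighbours, so the local density of black pixels preserves the original shade of gray,

```python
import numpy as np

def floyd_steinberg_dither(gray):
    """Convert a grayscale image (floats in [0, 1]) to a 1-bit bitmap.

    Error diffusion spreads each pixel's quantization error to the
    pixels to its right and below, so the density of black/white
    pixels encodes the original intensity -- the core idea behind
    dithered-bitmap printing.
    """
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

# A smooth gradient dithers into a mix of 0s and 1s whose local
# density tracks the original intensity.
gradient = np.tile(np.linspace(0, 1, 64), (16, 1))
bitmap = floyd_steinberg_dither(gradient)
print(bitmap.mean())  # roughly the gradient's mean intensity (~0.5)
```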

The team of researchers used bitmap-based 3D printing to create models of Keating’s brain and tumor that faithfully preserved all of the gradations of detail present in the raw MRI data down to a resolution that is on par with what the human eye can distinguish from about 9-10 inches away. Using this same approach, they were also able to print a variable stiffness model of a human heart valve using different materials for the valve tissue versus the mineral plaques that had formed within the valve, resulting in a model that exhibited mechanical property gradients and provided new insights into the actual effects of the plaques on valve function.

“Our approach not only allows for high levels of detail to be preserved and printed into medical models, but it also saves a tremendous amount of time and money,” says Weaver, who is the corresponding author of the paper. “Manually segmenting a CT scan of a healthy human foot, with all its internal bone structure, bone marrow, tendons, muscles, soft tissue, and skin, for example, can take more than 30 hours, even by a trained professional – we were able to do it in less than an hour.”

The researchers hope that their method will help make 3D printing a more viable tool for routine exams and diagnoses, patient education, and understanding the human body. “Right now, it’s just too expensive for hospitals to employ a team of specialists to go in and hand-segment image data sets for 3D printing, except in extremely high-risk or high-profile cases. We’re hoping to change that,” says Hosny.

In order for that to happen, some entrenched elements of the medical field need to change as well. Most patients’ data are compressed to save space on hospital servers, so it’s often difficult to get the raw MRI or CT scan files needed for high-resolution 3D printing. Additionally, the team’s research was facilitated through a joint collaboration with leading 3D printer manufacturer Stratasys, which allowed access to their 3D printer’s intrinsic bitmap printing capabilities. New software packages also still need to be developed to better leverage these capabilities and make them more accessible to medical professionals.

Despite these hurdles, the researchers are confident that their achievements present a significant value to the medical community. “I imagine that sometime within the next 5 years, the day could come when any patient that goes into a doctor’s office for a routine or non-routine CT or MRI scan will be able to get a 3D-printed model of their patient-specific data within a few days,” says Weaver.

Keating, who has become a passionate advocate of efforts to enable patients to access their own medical data, still 3D prints his MRI scans to see how his skull is healing post-surgery and check on his brain to make sure his tumor isn’t coming back. “The ability to understand what’s happening inside of you, to actually hold it in your hands and see the effects of treatment, is incredibly empowering,” he says.

“Curiosity is one of the biggest drivers of innovation and change for the greater good, especially when it involves exploring questions across disciplines and institutions. The Wyss Institute is proud to be a space where this kind of cross-field innovation can flourish,” says Wyss Institute Founding Director Donald Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School (HMS) and the Vascular Biology Program at Boston Children’s Hospital, as well as Professor of Bioengineering at Harvard’s John A. Paulson School of Engineering and Applied Sciences (SEAS).

Here’s an image illustrating the work,

Caption: This 3D-printed model of Steven Keating’s skull and brain clearly shows his brain tumor and other fine details thanks to the new data processing method pioneered by the study’s authors. Credit: Wyss Institute at Harvard University

Here’s a link to and a citation for the paper,

From Improved Diagnostics to Presurgical Planning: High-Resolution Functionally Graded Multimaterial 3D Printing of Biomedical Tomographic Data Sets by Ahmed Hosny, Steven J. Keating, Joshua D. Dilley, Beth Ripley, Tatiana Kelil, Steve Pieper, Dominik Kolb, Christoph Bader, Anne-Marie Pobloth, Molly Griffin, Reza Nezafat, Georg Duda, Ennio A. Chiocca, James R. Stone, James S. Michaelson, Mason N. Dean, Neri Oxman, and James C. Weaver. 3D Printing and Additive Manufacturing http://doi.org/10.1089/3dp.2017.0140 Online Ahead of Print: May 29, 2018

This paper appears to be open access.

A tangential Brad Pitt connection

It’s a bit of Hollywood gossip. There was some speculation in April 2018 that Brad Pitt was dating Dr. Neri Oxman highlighted in the Wyss Institute news release. Here’s a sample of an April 13, 2018 posting on Laineygossip (Note: A link has been removed),

It took him a long time to date, but he is now,” the insider tells PEOPLE. “He likes women who challenge him in every way, especially in the intellect department. Brad has seen how happy and different Amal has made his friend (George Clooney). It has given him something to think about.”

While a Pitt source has maintained he and Oxman are “just friends,” they’ve met up a few times since the fall and the insider notes Pitt has been flying frequently to the East Coast. He dropped by one of Oxman’s classes last fall and was spotted at MIT again a few weeks ago.

Pitt and Oxman got to know each other through an architecture project at MIT, where she works as a professor of media arts and sciences at the school’s Media Lab. Pitt has always been interested in architecture and founded the Make It Right Foundation, which builds affordable and environmentally friendly homes in New Orleans for people in need.

“One of the things Brad has said all along is that he wants to do more architecture and design work,” another source says. “He loves this, has found the furniture design and New Orleans developing work fulfilling, and knows he has a talent for it.”

It’s only been a week since Page Six first broke the news that Brad and Dr Oxman have been spending time together.

I’m fascinated by Oxman’s (and her colleagues’) furniture. Rose Brook writes about one particular Oxman piece in her March 27, 2014 posting for TCT magazine (Note: Links have been removed),

MIT Professor and 3D printing forerunner Neri Oxman has unveiled her striking acoustic chaise longue, which was made using Stratasys 3D printing technology.

Oxman collaborated with Professor W Craig Carter and Composer and fellow MIT Professor Tod Machover to explore material properties and their spatial arrangement to form the acoustic piece.

Christened Gemini, the two-part chaise was produced using a Stratasys Objet500 Connex3 multi-colour, multi-material 3D printer as well as traditional furniture-making techniques and it will be on display at the Vocal Vibrations exhibition at Le Laboratoire in Paris from March 28th 2014.

An Architect, Designer and Professor of Media, Arts and Science at MIT, Oxman’s creation aims to convey the relationship of twins in the womb through material properties and their arrangement. It was made using both subtractive and additive manufacturing and is part of Oxman’s ongoing exploration of what Stratasys’ ground-breaking multi-colour, multi-material 3D printer can do.

Brook goes on to explain how the chaise was made and the inspiration that led to it. Finally, it’s interesting to note that Oxman was working with Stratasys in 2014 and that this 2018 brain project is being developed in a joint collaboration with Stratasys.

That’s it for 3D printing today.

Synthetic Aesthetics: a book and an event (UK’s Victoria & Albert Museum) about synthetic biology and design

Sadly, I found out about the event after it took place (April 25, 2014), but I’m including it here as I think it serves as a primer on putting together an imaginative art/science (art/sci) event; as well, synthetic biology is a topic I’ve covered here many times.

First, the book. Happily, it’s not too late to publicize it and, after all, that was at least one of the goals for the event. Here’s more about the book, from the UK’s Engineering and Physical Sciences Research Council April 25, 2014 news release (also on EurekAlert),

The emerging field of synthetic biology crosses the boundary between science and design, in order to design and manufacture biologically based parts, devices and systems that do not exist in the natural world, as well as the redesign of existing, natural biological systems.

This new technology has the potential to create new organisms for a variety of applications from materials to machines. What role can artists and designers play in our biological future?

This Friday [April 25, 2014], the Victoria & Albert Museum’s Friday Late turns the V&A into a living laboratory, bringing science and design together for one night of events, workshops and installations.

It will also feature the official launch of a new EPSRC-funded book ‘Synthetic Aesthetics: Investigating Synthetic Biology’s Designs on Nature’.

The book, by Alexandra Daisy Ginsberg, Jane Calvert, Pablo Schyfter, Alistair Elfick and Drew Endy, emerged from a research project ‘Sandpit: Synthetic aesthetics: connecting synthetic biology and creative design’ which was funded by the UK’s Engineering and Physical Sciences Research Council (EPSRC) and the National Science Foundation in the US.

Kedar Pandya, EPSRC’s Head of Engineering, said: “This event and the Synthetic Aesthetics book will act as a catalyst to spark informed debates and future research into how we develop and apply synthetic biology. Engineers and scientists are not divorced from the rest of society; ethical, moral and artistic questions need to be considered as we explore new science and technologies.”

The EPSRC project aimed to:

  • bring together scientists and engineers working in synthetic biology with artists and designers working in the creative industries, to develop long-lasting relationships which could help to improve their work
  • ensure aesthetic concerns and questions are reflected in the lifecycle of research projects and implementation of products, and enable inclusive and responsive technology development
  • produce new social scientific research that analyses and reflects on these interactions
  • initiate a new and expanded curriculum across both engineering and design disciplines to lead to new forms of engineering and new schools of art
  • improve synthetic biological projects, products and thus the world
  • engage and enable the full diversity of civilization’s creative resources to work with the synthetic biology community as full partners in creating and stewarding a beautifully integrated natural and engineered living world

Weirdly, the news release offered no link to the book. Here’s the Synthetic Aesthetics: Investigating Synthetic Biology’s Designs on Nature page on the MIT Press website,

In this book, synthetic biologists, artists, designers, and social scientists investigate synthetic biology and design. After chapters that introduce the science and set the terms of the discussion, the book follows six boundary-crossing collaborations between artists and designers and synthetic biologists from around the world, helping us understand what it might mean to ‘design nature.’ These collaborations have resulted in biological computers that calculate form; speculative packaging that builds its own contents; algae that feeds on circuit boards; and a sampling of human cheeses. They raise intriguing questions about the scientific process, the delegation of creativity, our relationship to designed matter, and the importance of critical engagement. Should these projects be considered art, design, synthetic biology, or something else altogether?

Synthetic biology is driven by its potential; some of these projects are fictions, beyond the current capabilities of the technology. Yet even as fictions, they help illuminate, question, and even shape the future of the field.

About the Authors

Alexandra Daisy Ginsberg is a London-based artist, designer, and writer.

Jane Calvert is a social scientist based in Science, Technology and Innovation Studies at the University of Edinburgh.

Pablo Schyfter is a social scientist based in Science, Technology and Innovation Studies at the University of Edinburgh.

Alistair Elfick is Codirector of the SynthSys Centre at the University of Edinburgh.

Drew Endy is a bioengineer at Stanford University and President of the BioBricks Foundation.

Now for the event description from the Victoria and Albert Museum’s Friday Late series, the April 25, 2014 event Synthetic Aesthetics webpage,

Synthetic Aesthetics

Friday 25 April, 18.30-22.00

Can we design life itself? The emerging field of synthetic biology crosses the boundary between science and design to manipulate the stuff of life. These new designers use life as a programmable material, creating new organisms with radical applications from materials to machines. Friday Late turns the V&A into a living laboratory, bringing science and design together for one night of events, workshops and installations, each exploring our biological future.

The evening will feature the book launch of Synthetic Aesthetics: Investigating Synthetic Biology’s Designs on Nature (MIT Press). The book marks an important point in the development of the emerging discipline of synthetic biology, sitting at the intersection between design and science. The book is a result of research funded by the UK’s Engineering and Physical Sciences Research Council and the National Science Foundation in the US.

All events are free and places are designated on a first come, first served basis, unless stated otherwise. Filming and photography will be taking place at this event.

Please note, if the Museum reaches capacity we will allow access on a one-in-one-out basis.

#FridayLate

ALL EVENING (18.30 – 21.30)

Live Lab

Spotlight Space, Grand Entrance
A functioning synthetic biology lab in the grand entrance places this experimental field front and centre within the historic home of the V&A. Conducting experiments and answering questions from visitors, the lab will be run by synthetic biologists from Imperial College London’s EPSRC National Centre for Synthetic Biology & Innovation and SynbiCITE UK Innovation and Knowledge Centre for Synthetic Biology.

No Straight Line, No True Circle

Medieval & Renaissance, Room 50a
Young artists from the Royal College of Art’s Visual Communication course explore synthetic biology through projections on the walls of the galleries. Each one takes its inspiration from the sculptures around it in a series of site-specific installations.

Xylinum Cones

Lunchroom (access via staircase L, follow signs)
What would it mean for our daily lives if we could grow our objects? Xylinum Cones presents an experimental production line that uses bacteria to grow geometric forms. Meet designers Jannis Huelsen and Stefan Schwabe and learn how they are developing a renewable cellulose composite for future industrial uses.

Selfmade

Poynter Room, Café
This film tells the story of how biologist Christina Agapakis and smell provocateur Sissel Tolaas produce human cheese. Using swabs from hands, feet, noses and armpits as starter cultures, they produce unique smelling fresh cheeses as unusual portraits of our biological lives.

Grow Your Own Ink

Lunchroom (access via staircase L, follow signs)
A workshop led by scientist Thomas Landrain and designer Marie-Sarah Adenis showing how to ‘grow your own ink’. Try out some of the steps, from the culturing of bacteria to the extraction and purification of biological pigments. Discover the marvellous properties of this one-of-a-kind ink.

Bio Logic

Architecture Landing, Room 127 (access via staircase P, follow signs)
Take a trip into the Petri dish, where microchips meet microbes, cells become computers and all is not quite as it seems. Bio Computation, a short film by David Benjamin and Hy-Fi by The Living demonstrate revolutionary design using new composite building materials at the intersection of synthetic biology, architecture, and computation.

Zero Park

Bottom of NAL staircase (staircase L)
Where is the line between the natural and the artificial? Somewhere in the midst of Zero Park. Sascha Pohflepp’s installation leads you through a synthetic landscape, which poses questions about human agency in natural ecosystems.

Faber Futures: The Rhizosphere Pigment Lab

Tapestries, Room 94 (access via staircase L)
Bacteria are no longer the bane, but the birth of tapestries! Natsai Audrey Chieza creates a gallery of futurist scarves for which bacteria are the sole agent of colour transformation. In collaboration with John Ward, professor of Structural Molecular Biology, University College London.

Living Things

Fashion, Room 40
Breathing, living, ‘second skins’ change their shape and appearance as you approach. Silicon-like smart-fabrics show movement and moving patterns. The Cyborg project – led by Carlos Olguin, with Autodesk Research – explores possibilities of new software to create materials with their own ‘life’.

The Opera of Prehistoric Creatures

Raphael Gallery, Room 48a
‘Lucy’, the extinct hominid Australopithecus afarensis, performs an opera just for you. Marguerite Humeau recreates her vocal tract and cords to bring you the lost voice of this prehistoric creature.

Electro Magnetic Signals from Bacterial DNA

Cast Courts, Room 46a
Can we imagine what it sounds like inside the molecular structure of a DNA helix? This composition is inspired by theoretical speculation on bacteria’s ability to transmit EMF signals, played amongst the V&A’s cast collection.

Living Among Living Things

The Edwin and Susan Davies Galleries, Room 87 (access via staircase L, follow signs)
Will Carey explores how living things will replace the products and foods we use today: from packaging that produces its own drink to skincare products secreted from bespoke microbial cultures. This series of images show exotic commodities that could be normal to future generations.

Neo-Nature

Lunchroom (access via staircase L, follow signs)
Join this workshop to create your own synthetic corals and contribute to the V&A’s very own coral reef. Michail Vanis invites you to bring seemingly impossible scenarios to life and discuss their scientific and ethical implications.

Synthetic Aesthetics on Film

The Lydia and Manfred Gorvey Lecture Theatre (access via staircase L, follow signs)
18.30 – 19.00 & 20.00 – 21.45
DNA replication, Bjork, swallowable perfume… these eight films demonstrate a myriad of cultural crossovers; synthetic biology at its aesthetic finest.
Dunne & Raby – Future Foragers (2009)
Tobias Revell – New Mumbai (2012)
Lucy McRae – Swallowable Parfum (2013)
UCSD – Biopixels (2011)
Zeitguised – Comme des Organismes (2014)
Drew Berry for Bjork – Hollow (2011)
Alexandra Daisy Ginsberg and James King – E. chromi (2009)
Neri Oxman – Silk Pavilion (2013)

FROM 19.00

Synthetic Aesthetics Authors’ Panel Discussion and Book Signing

The Lydia and Manfred Gorvey Lecture Theatre (access via staircase L, follow signs)
19.00 – 20.00 (followed by book signing)
The authors of Synthetic Aesthetics pry open the circuitry of a new biology, exposing the motherboard of nature. A presentation by designer Alexandra Daisy Ginsberg will be followed by a panel discussion with members of the team behind Synthetic Aesthetics Drew Endy, Jane Calvert, Pablo Schyfter and Alistair Elfick. Chaired by The Economist’s Oliver Morton.

Blueprints for the Unknown

Learning Centre: Seminar Room 3 (access via staircase L, follow signs)
19.00, 19.30, 20.00 & 20.30
What happens when science leaves the lab? Recent advances in synthetic biology mean scientists will be the architects of life, creating blueprints for living systems and organisms. Blueprints for the Unknown investigates what might happen as engineering biology meets the complex world we live in. Speakers include Koby Barhad, David Benqué, Raphael Kim and Superflux.
Blueprints for the Unknown is a project by Design Interactions Research at the Royal College of Art as part of the Studiolab research project.

DNA Extraction

Learning Centre: Art Studio (access via staircase L, follow signs)
19.00, 20.00 & 21.00
Extract your own DNA in the V&A’s popup Wetlab and chat with synthetic biologists from Imperial College London. Synthetic biology designs life at the scale of DNA, and tonight you can take the raw materials of life home with you. With thanks to Imperial College London’s EPSRC National Centre for Synthetic Biology & Innovation and SynbiCITE UK Innovation and Knowledge Centre for Synthetic Biology.

Music of the Spheres

John Madejski Garden
19.30 & 20.30 (20 minutes)
Your computer’s hard drive is nothing compared to nature’s awesome capacity to record information. Artist Charlotte Jarvis explores how DNA can be used to record things apart from genetics – such as music – in the centuries to come. With scientist Nick Goldman and composer Mira Calix, Music of the Spheres encodes music into the structure of DNA suspended in soap solution. An immersive, surprising performance introduced by Jarvis, Calix and Goldman as they release musical bubbles in the garden. This is a work in progress.

FROM 20.00

Synbio Tarot Cards

Medieval & Renaissance, Room 50b
20.00 – 20.45
Synbio tarot card readings reveal possible outcomes, both desirable and disastrous, to which science might lead us. Exploring the social, economic and political implications of synthetic biology in the cards, from dream world to dystopia.

Synthetic Aesthetics Book Contributors Talks

National Art Library (access via staircase L)
20.30 – 21.30
The new book Synthetic Aesthetics: Investigating Synthetic Biology’s Designs on Nature marks a development in the emerging discipline of synthetic biology. For the book launch, designers, artists and scientists explain how their work bridges the gap between design and science. Drop in and hear Christina Agapakis, Sascha Pohflepp, David Benjamin and Will Carey over the course of the evening with social scientists Jane Calvert and Pablo Schyfter.
(Please note coats and bags are not permitted in the Library. Please leave these items in the cloakroom on the ground floor.)

This event had a specially designed programme cover,

Souvenir programme wrap designed by London-based graphic design consultancy Kellenberger–White. kellenberger-white.com

Having observed how deeply concerned scientists still are about the GMO (genetically modified organisms, sometimes called ‘Frankenfoods’) panic that occurred in the early 2000s (I think), I suspect that efforts like this are meant, at least in part, to allay fears. In any event, the powers-that-be have taken a very engaging approach to their synthetic biology efforts. As for whether the event lived up to expectations, I have not been able to find any reviews or commentaries about it.