Featuring more than 100 artworks, manuscripts, sound recordings and books, many on display for the first time, Animals: Art, Science and Sound explores how animals have been documented across the world over the last 2,000 years
Season of events includes musicians Cosmo Sheldrake and Cerys Matthews, wildlife photographer Hamza Yassin and ornithologist Mya-Rose Craig, also known as Birdgirl, and more
Complemented by two free displays featuring newly acquired material from animal rights activist Kim Stallwood and award-winning photographer Levon Biss
Animals: Art, Science and Sound (21 April – 28 August 2023) at the British Library reveals how the intersection of science, art and sound has been instrumental in our understanding of the natural world and continues to evolve today.
From an ancient Greek papyrus detailing the mating habits of dogs to the earliest photographs of Antarctic animals and a recording of the last Kauaʻi ʻōʻō songbird, this is the first major exhibition to explore the different ways in which animals have been written about, visualised and recorded.
Journeying through darkness, water, land and air, visitors will encounter striking artworks, handwritten manuscripts, sound recordings and printed publications that speak to contemporary debates around discovery, knowledge, conservation, climate change and extinction. Each zone also includes a bespoke, atmospheric soundscape created using recordings from the Library’s sound archive.
Featuring over 120 exhibits, highlights include:
Earliest known illustrated Arabic scientific work documenting the characteristics of animals alongside their medical uses (c. 1225)
Earliest use of the word ‘shark’ in printed English (1569) on public display for the first time
One of the earliest works on the microscopic world, Micrographia (1665) by Robert Hooke, alongside three insect portraits by photographer Levon Biss (2021), recently acquired by the British Library, which use a combination of microscopy and photography to magnify specimens collected by Charles Darwin in 1836 and Alfred Russel Wallace circa 1859
Leonardo da Vinci’s notes (1500-08) on the impact of wind on a bird in flight, on public display for the first time
One of the rarest ichthyology publications ever produced, The Fresh-Water Fishes of Great Britain (1828-38), with hand-painted illustrations by Sarah Bowdich
First commercially published recording of an animal from 1910 titled Actual Bird Record Made by a Captive Nightingale (No. I) by The Gramophone Company Limited
One of the earliest examples of musical notation being used to represent the songs and calls of birds from 1650 by Athanasius Kircher
One of the earliest portable bat detectors, the Holgate Mk VI, used by amateur naturalist John Hooper during the 1960s-70s to capture some of the first sound recordings of British bats
Cam Sharp Jones, Visual Arts Curator at the British Library, said: ‘Animals have fascinated people for as long as human records exist and the desire to study and understand other animals has taken many forms, including textual and artistic works. This exhibition is a great opportunity to showcase some of the earliest textual descriptions of animals ever produced, as well as some of the most beautiful, unique and strange records of animals that are cared for by the British Library.’
Cheryl Tipp, Curator of Wildlife and Environmental Sound at the British Library, said: ‘Sound recording has allowed us to uncover aspects of animal lives that just would not have been possible using textual or visual methods alone. It has been used to reclassify species, locate previously unknown populations and allowed us to eavesdrop on worlds that would otherwise be inaudible to our ears. It is such an emotive medium and I hope visitors will be inspired to explore the Library’s collections, as well as tune in to the sounds of the natural world in their everyday lives.’
[Note: All of the events have taken place.] There is a season of in-person and online events inspired by the exhibition, such as a Late at the Library with musician, composer and producer Cosmo Sheldrake, hosted by musician, author and broadcaster Cerys Matthews, and Animal Magic: A Night of Wild Enchantment, where five speakers, including wildlife cameraman, ornithologist and Strictly Come Dancing winner Hamza Yassin and birder, environmentalist and diversity activist Mya-Rose Craig, each have 15 minutes to tell a story. There is a family event on Earth Day (22 April) where Art Fund’s The Wild Escape, an epic-scale digital landscape featuring children’s images of animals, will be unveiled. A selection of these works is included in an outdoor exhibition around Kings Cross.
A richly illustrated publication by the British Library with interactive QR technology allowing readers to listen to sound recordings, and a free trail for families and groups, also accompany the exhibition.
The exhibition is made possible with support from Getty through The Paper Project initiative and PONANT. With thanks to The American Trust for the British Library and The B.H. Breslauer Fund of the American Trust for the British Library. Audio soundscapes created by Greg Green with support from the Unlocking our Sound Heritage project, made possible by the National Lottery Heritage Fund. Scientific advice provided by ZSL (the Zoological Society of London).
Animals: Art, Science and Sound is complemented by two free displays at the British Library. Animal Rights: From the Margins to the Mainstream (7 May – 9 July 2023) in the Treasures Gallery draws on published, handwritten and ephemeral works from the Library’s collection relating to animal welfare. It features newly acquired material collected by animal rights activist Kim Stallwood, who will be in conversation at the Library about the history of animal welfare legislation. Microsculpture (12 May – 20 November 2023) showcases nine portraits by photographer Levon Biss that capture the microscopic form and evolutionary adaptations of insects in striking large-format, high-resolution detail.
Animals: Art, Science and Sound draws on the British Library’s role as home to the UK’s national sound archive, one of the largest collections of sound recordings in the world. With over 6.5 million items of speech, music and wildlife, this includes audio from the advent of recording to the present day, and over 70,000 recordings are freely available online at sounds.bl.uk and in the British Library’s Sound Gallery in St Pancras.
Opening on 2 June, Digital Storytelling features publications that use new technologies to reimagine reading experiences
Visitors will discover a range of digital stories, on display together for the first time, including the four-time BAFTA-nominated 80 Days, an interactive adaptation of Jules Verne’s Around the World in Eighty Days, and the exclusive public preview of Windrush Tales, the world’s first interactive narrative game based on the experiences of Caribbean immigrants in post-war Britain
Also on display will be interactive media providing insights into the lived stories behind historical events, from the 2011 Egyptian uprising in A Dictionary of the Revolution to a moving account of the loss of a relative in the Manchester Arena Bombing in c ya laterrrr.
The British Library has announced it will be opening a new exhibition, Digital Storytelling (2 June – 15 October 2023), that explores how evolving online technologies have changed how writers write and readers read.
The narratives featured in the exhibition will prompt visitors to consider what new possibilities emerge when they are invited as readers to become a part of the story themselves. Visitors will get to discover how technology can be used to enhance their reading experience, from Zombies, Run!, the widely popular audio fiction fitness app, to Breathe, a ghost story that “follows the reader around”, reacting to users’ real-time location data.
On display for the first time is a playable preview of Windrush Tales, the world’s first interactive narrative game based on the lived experiences of Caribbean immigrants in post-World War II Britain. The game is still in development; the preview is its first public launch and is made exclusively available for the exhibition by 3-Fold Presents. The exhibition also premieres a new edition of This is a Picture of Wind, with a new sequence of poems inspired by Derek Jarman’s writing about his garden. This is a Picture of Wind was originally written in response to severe storms in the South West of England in 2014. [I found the attribution a little puzzling; hopefully, I haven’t added to the confusion. Note 1: This is a Picture … is a web-based project from J.R. Carpenter; see more in this January 22, 2018 posting on the IOTA Institute website; Note 2: As for Derek Jarman, there’s this “… if modern gardening has a patron saint, it must be the English artist, filmmaker, and LGBT rights activist Derek Jarman (January 31, 1942–February 19, 1994)”; as for writing about his garden, “The record of this healing creative adventure became Jarman’s Modern Nature (public library)— part memoir and part memorial, …” both Jarman excerpts are from Maria Popova’s April 4, 2021 posting on the marginalian; Note 3: There are accounts of the 2014 storms mentioned in the IOTA posting but sources are not specified]
Items on display will also explore how writers and artists can provide an empathetic look into the lived realities behind the news. Digital Storytelling illustrates this through A Dictionary of the Revolution, which charts the evolution of political language in Egypt during the uprising in 2011. Another work, c ya laterrrr, is an intimate autobiographical hypertext account of the loss of author Dan Hett’s brother in the 2017 Manchester Arena terrorist attack.
Visitors will also get to experience the wide-ranging possibilities of historical immersion and alternate story-worlds through these emerging formats. The exhibition will feature Astrologaster, an award-winning interactive comedy based on the archival casebooks of Elizabethan medical astrologer Simon Forman, and Clockwork Watch, a transmedia collaborative story set in a steampunk Victorian England.
Giulia Carla Rossi, Curator for Digital Publications in Contemporary British Published Collections and co-curator of the exhibition, says:
“In 2023 we’re celebrating the 50th anniversary of the British Library. Over the last half a century, digital technologies have transformed how we communicate, research and consume media – and this shift is reflected in the growth of digital stories in the Library’s ever-growing collection. In recognition of this evolution in communication, we are thrilled to present Digital Storytelling, the first exhibition of its kind at the British Library. Working closely with artists and creators, the exhibition draws on the Library’s expertise in collecting and preserving innovative online publications and reflects the rapidly evolving concept of interactive writing. At the core of all the items on display are rich narratives that are dynamic, responsive, personalised and evoke for readers the experience of getting lost in a truly immersive story.”
A season of in-person events inspired by the exhibition will feature writers, creators and academics:
Late at the Library: Digital Steampunk. Immerse yourself in the Clockwork Watch story world, party with Professor Elemental and explore 19th century London in Minecraft, Friday 13th October 2023.
As you can see, two of the Digital Storytelling events have yet to take place.
This exhibit too has a fee.
You can find the British Library website here. (Click on Visit for the address and other details.) Some exhibits are free and others require a fee. I cannot find information about an all-access pass, so it looks like you’ll have to pay individual fees for the exhibits that require them. Members get free access to all exhibits.
I thought it best to break this up a bit. There are a couple of ‘objects’ still to be discussed but this is mostly the commentary part of this letter to you. (Here’s a link for anyone who stumbled here but missed Part 1.)
Ethics, the natural world, social justice, eeek, and AI
Dorothy Woodend, in her March 10, 2022 review for The Tyee, suggests some ethical issues in her critique of the ‘bee/AI collaboration’ and she’s not the only one with concerns. UNESCO (United Nations Educational, Scientific and Cultural Organization) has produced global recommendations for ethical AI (see my March 18, 2022 posting). More recently, there’s “Racist and sexist robots have flawed AI,” a June 23, 2022 posting, where researchers prepared a conference presentation and paper about deeply flawed AI still being used in robots.
Ultimately, the focus is always on humans and Woodend has extended the ethical AI conversation to include insects and the natural world. In short, something less human-centric.
My friend, this reference to the de Young exhibit may seem off topic but I promise it isn’t, in more ways than one. The de Young Museum in San Francisco also held an AI and art show, “Uncanny Valley: Being Human in the Age of AI” (February 22, 2020 – June 27, 2021). From the exhibitions page,
In today’s AI-driven world, increasingly organized and shaped by algorithms that track, collect, and evaluate our data, the question of what it means to be human [emphasis mine] has shifted. Uncanny Valley is the first major exhibition to unpack this question through a lens of contemporary art and propose new ways of thinking about intelligence, nature, and artifice. [emphasis mine]
As you can see, it hinted (perhaps?) at an attempt to see beyond human-centric AI. (BTW, I featured this ‘Uncanny Valley’ show in my February 25, 2020 posting where I mentioned Stephanie Dinkins [featured below] and other artists.)
While the VAG show doesn’t see much past humans and AI, it does touch on social justice. In particular there’s Pod 15 featuring the Algorithmic Justice League (AJL). The group “combine[s] art and research to illuminate the social implications and harms of AI” as per their website’s homepage.
In Pod 9, Stephanie Dinkins’ video work with a robot (Bina48), which was also part of the de Young Museum ‘Uncanny Valley’ show, addresses some of the same issues.
Transdisciplinary artist and educator Stephanie Dinkins is concerned with fostering AI literacy. The central thesis of her social practice is that AI, the internet, and other data-based technologies disproportionately impact people of color, LGBTQ+ people, women, and disabled and economically disadvantaged communities—groups rarely given a voice in tech’s creation. Dinkins strives to forge a more equitable techno-future by generating AI that includes the voices of multiple constituencies …
The artist’s ongoing Conversations with Bina48 takes the form of a series of interactions with the social robot Bina48 (Breakthrough Intelligence via Neural Architecture, 48 exaflops per second). The machine is the brainchild of Martine Rothblatt, an entrepreneur in the field of biopharmaceuticals who, with her wife, Bina, cofounded the Terasem Movement, an organization that seeks to extend human life through cybernetic means. In 2007 Martine commissioned Hanson Robotics to create a robot whose appearance and consciousness simulate Bina’s. The robot was released in 2010, and Dinkins began her work with it in 2014.
Part psychoanalytical discourse, part Turing test, Conversations with Bina48 also participates in a larger dialogue regarding bias and representation in technology. Although Bina Rothblatt is a Black woman, Bina48 was not programmed with an understanding of its Black female identity or with knowledge of Black history. Dinkins’s work situates this omission amid the larger tech industry’s lack of diversity, drawing attention to the problems that arise when a roughly homogenous population creates technologies deployed globally. When this occurs, writes art critic Tess Thackara, “the unconscious biases of white developers proliferate on the internet, mapping our social structures and behaviors onto code and repeating imbalances and injustices that exist in the real world.” One of the most appalling and public of these instances occurred when a Google Photos image-recognition algorithm mislabeled the faces of Black people as “gorillas.”
You will find, as you go through the ‘imitation game’, a pod with a screen showing your movements through the rooms in real time. The installation is called “Creepers” (2021-22). The student team from Vancouver’s Centre for Digital Media (CDM) describes their project this way, from the CDM’s AI-driven Installation Piece for the Vancouver Art Gallery webpage,
Kaleidoscope [team name] is designing an installation piece that harnesses AI to collect and visualize exhibit visitor behaviours, and interactions with art, in an impactful and thought-provoking way.
There’s no warning that you’re being tracked and you can see they’ve used facial recognition software to track your movements through the show. It’s claimed on the pod’s signage that they are deleting the data once you’ve left.
‘Creepers’ is an interesting approach to the ethics of AI. The name suggests that even the student designers were aware it was problematic.
In recovery from an existential crisis (meditations)
There’s something greatly ambitious about “The Imitation Game: Visual Culture in the Age of Artificial Intelligence” and walking up the VAG’s grand staircase affirms that ambition. Bravo to the two curators, Grenville and Entis, for an exhibition that presents a survey (or overview) of artificial intelligence, and its use in and impact on creative visual culture.
I’ve already enthused over the history (specifically Turing, Lovelace, Ovid), admitted to being mesmerized by Scott Eaton’s sculpture/AI videos, and confessed to a fascination (and mild repulsion) regarding Oxman’s honeycombs.
It’s hard to remember all of the ‘objects’ as the curators have offered a jumble of work, almost all of them on screens. As noted earlier, there’s Norbert Wiener’s The Moth (1949) and there are also a number of other computer-based artworks from the 1960s and 1970s. Plus, you’ll find works utilizing a GAN (generative adversarial network), an AI agent that is explained in the exhibit.
It’s worth going more than once to the show as there is so much to experience.
Why did they do that?
Dear friend, I’ve already commented on the poor flow through the show and it’s hard to tell if the curators intended the experience to be disorienting, but it verges on chaos, especially when the exhibition is crowded.
I’ve seen Grenville’s shows before. In particular there was “MashUp: The Birth of Modern Culture, a massive survey documenting the emergence of a mode of creativity that materialized in the late 1800s and has grown to become the dominant model of cultural production in the 21st century” and there was “KRAZY! The Delirious World of Anime + Manga + Video Games + Art.” As you can see from the description, he pulls together disparate works and ideas into a show for you to ‘make sense’ of them.
One of the differences between those shows and “The Imitation Game: …” is that most of us have some familiarity, whether we like it or not, with modern art/culture and anime/manga/etc. and can try to ‘make sense’ of it.
By contrast, artificial intelligence (which even experts have difficulty defining) occupies an entirely different set of categories, all of them associated with science/technology. This makes for a different kind of show, so the curators cannot rely on the audience’s understanding of basics. It’s effectively an art/sci or art/tech show and, I believe, the first of its kind at the Vancouver Art Gallery. Unfortunately, the curators don’t seem to have changed their approach to accommodate that difference.
AI is also at the centre of a current panic over job loss, loss of personal agency, automated racism and sexism, etc., which makes the experience of viewing the show a little tense. In this context, their decision to commission and use ‘Creepers’ seems odd.
Where were Ai-Da and Dall-E-2 and the others?
Oh friend, I was hoping for a robot. Those Roomba paintbots didn’t do much for me. All they did was lie there on the floor.
To be blunt, I wanted some fun and perhaps a bit of wonder and maybe a little vitality. I wasn’t necessarily expecting Ai-Da, an artistic robot, but something three-dimensional and fun in this very flat, screen-oriented show would have been nice.
Ai-Da was first featured here in a December 17, 2021 posting about performing poetry that she had written in honour of the 700th anniversary of poet Dante Alighieri’s death.
Named in honour of Ada Lovelace, Ai-Da visited the 2022 Venice Biennale as Leah Henrickson and Simone Natale describe in their May 12, 2022 article for Fast Company (Note: Links have been removed),
Ai-Da sits behind a desk, paintbrush in hand. She looks up at the person posing for her, and then back down as she dabs another blob of paint onto the canvas. A lifelike portrait is taking shape. If you didn’t know a robot produced it, this portrait could pass as the work of a human artist.
Ai-Da is touted as the “first robot to paint like an artist,” and an exhibition of her work, called Leaping into the Metaverse, opened at the Venice Biennale.
Ai-Da produces portraits of sitting subjects using a robotic hand attached to her lifelike feminine figure. She’s also able to talk, giving detailed answers to questions about her artistic process and attitudes toward technology. She even gave a TEDx talk about “The Intersection of Art and AI” in Oxford a few years ago. While the words she speaks are programmed, Ai-Da’s creators have also been experimenting with having her write and perform her own poetry.
DALL-E 2 is a new neural network [AI] algorithm that creates a picture from a short phrase or sentence that you provide. The program, which was announced by the artificial intelligence research laboratory OpenAI in April 2022, hasn’t been released to the public. But a small and growing number of people – myself included – have been given access to experiment with it.
As a researcher studying the nexus of technology and art, I was keen to see how well the program worked. After hours of experimentation, it’s clear that DALL-E – while not without shortcomings – is leaps and bounds ahead of existing image generation technology. It raises immediate questions about how these technologies will change how art is made and consumed. It also raises questions about what it means to be creative when DALL-E 2 seems to automate so much of the creative process itself.
A July 4, 2022 article “DALL-E, Make Me Another Picasso, Please” by Laura Lane for The New Yorker has a rebuttal to Ada Lovelace’s contention that creativity is uniquely human (Note: A link has been removed),
“There was this belief that creativity is this deeply special, only-human thing,” Sam Altman, OpenAI’s C.E.O., explained the other day. Maybe not so true anymore, he said. Altman, who wore a gray sweater and had tousled brown hair, was videoconferencing from the company’s headquarters, in San Francisco. DALL-E is still in a testing phase. So far, OpenAI has granted access to a select group of people—researchers, artists, developers—who have used it to produce a wide array of images: photorealistic animals, bizarre mashups, punny collages. Asked by a user to generate “a plate of various alien fruits from another planet photograph,” DALL-E returned something kind of like rambutans. “The rest of mona lisa” is, according to DALL-E, mostly just one big cliff. Altman described DALL-E as “an extension of your own creativity.”
AI artists first hit my radar in August 2018 when Christie’s Auction House advertised an art auction of a ‘painting’ by an algorithm (artificial intelligence). There’s more in my August 31, 2018 posting but, briefly, a French art collective, Obvious, submitted a painting, “Portrait of Edmond de Belamy,” created by an artificial intelligence agent, to be sold for an estimated $7,000 – $10,000. They weren’t even close. According to Ian Bogost’s March 6, 2019 article for The Atlantic, the painting sold for $432,500 in October 2018.
That posting also included AI artist, AICAN. Both artist-AI agents (Obvious and AICAN) are based on GANs (generative adversarial networks) for learning and eventual output. Both artist-AI agents work independently or with human collaborators on art works that are available for purchase.
As might be expected, not everyone is excited about AI and visual art. Sonja Drimmer, Professor of Medieval Art, University of Massachusetts at Amherst, provides another perspective on AI, visual art, and, her specialty, art history in her November 1, 2021 essay for The Conversation (Note: Links have been removed),
Over the past year alone, I’ve come across articles highlighting how artificial intelligence recovered a “secret” painting of a “lost lover” of Italian painter Modigliani, “brought to life” a “hidden Picasso nude”, “resurrected” Austrian painter Gustav Klimt’s destroyed works and “restored” portions of Rembrandt’s 1642 painting “The Night Watch.” The list goes on.
As an art historian, I’ve become increasingly concerned about the coverage and circulation of these projects.
They have not, in actuality, revealed one secret or solved a single mystery.
What they have done is generate feel-good stories about AI.
Take the reports about the Modigliani and Picasso paintings.
These were projects executed by the same company, Oxia Palus, which was founded not by art historians but by doctoral students in machine learning.
In both cases, Oxia Palus relied upon traditional X-rays, X-ray fluorescence and infrared imaging that had already been carried out and published years prior – work that had revealed preliminary paintings beneath the visible layer on the artists’ canvases.
The company edited these X-rays and reconstituted them as new works of art by applying a technique called “neural style transfer.” This is a sophisticated-sounding term for a program that breaks works of art down into extremely small units, extrapolates a style from them and then promises to recreate images of other content in that same style.
As you can ‘see’ my friend, the topic of AI and visual art is a juicy one. In fact, I have another example in my June 27, 2022 posting, which is titled, “Art appraised by algorithm.” So, Grenville’s and Entis’ decision to focus on AI and its impact on visual culture is quite timely.
Visual culture: seeing into the future
The VAG Imitation Game webpage lists these categories of visual culture “animation, architecture, art, fashion, graphic design, urban design and video games …” as being represented in the show. Movies and visual art, though not mentioned in the write-up, are represented, while theatre and the other performing arts are neither mentioned nor represented. That’s not a surprise.
In addition to an area of science/technology that’s not well understood even by experts, the curators took on the truly amorphous (and overwhelming) topic of visual culture. Given that even writing this commentary has been a challenge, I imagine pulling the show together was quite the task.
Grenville often grounds his shows in a history of the subject and, this time, it seems especially striking. You’re in a building that is effectively a 19th century construct and in galleries that reflect a 20th century ‘white cube’ aesthetic, while looking for clues into the 21st century future of visual culture employing technology that has its roots in the 19th century and, to some extent, began to flower in the mid-20th century.
Chung’s collaboration is one of the few ‘optimistic’ notes about the future and, as noted earlier, it bears a resemblance to Wiener’s 1949 ‘Moth’.
Overall, it seems we are being cautioned about the future. For example, Oxman’s work seems bleak (bees with no flowers to pollinate and living in an eternal spring). Adding in ‘Creepers’ and surveillance along with issues of bias and social injustice reflects hesitation and concern about what we will see, who sees it, and how it will be represented visually.
Learning about robots, automatons, artificial intelligence, and more
I wish the Vancouver Art Gallery (and Vancouver’s other art galleries) would invest a little more in audience education. A couple of tours during the week, by someone who may or may not know what they’re talking about, do not suffice. The extra material about Stephanie Dinkins and her work (“Conversations with Bina48,” 2014–present) came from the de Young Museum’s website. In my July 26, 2021 commentary on North Vancouver’s Polygon Gallery 2021 show “Interior Infinite,” I found background information for artist Zanele Muholi on the Tate Modern’s website. There is nothing on the VAG website that helps you to gain some perspective on the artists’ works.
It seems to me that if the VAG wants to be considered world class, it should conduct itself accordingly and beefing up its website with background information about their current shows would be a good place to start.
Robots, automata, and artificial intelligence
Prior to 1921, robots were known exclusively as automatons. These days, the word ‘automaton’ (or ‘automata’ in the plural) seems to be used to describe purely mechanical representations of humans from over 100 years ago, whereas the word ‘robot’ can be either ‘humanlike’ or purely machine, e.g. a mechanical arm that performs the same function over and over. There is a good February 24, 2017 essay on automatons by Miguel Barral for OpenMind BBVA*, which provides some insight into the matter,
The concept of robot is relatively recent. The idea was introduced in 1921 by the Czech writer Karel Capek in his work R.U.R to designate a machine that performs tasks in place of man. But their predecessors, the automatons (from the Greek automata, or “mechanical device that works by itself”), have been the object of desire and fascination since antiquity. Some of the greatest inventors in history, such as Leonardo Da Vinci, have contributed to our fascination with these fabulous creations:
The Al-Jazari automatons
The earliest examples of known automatons appeared in the Islamic world in the 12th and 13th centuries. In 1206, the Arab polymath Al-Jazari, whose creations were known for their sophistication, described some of his most notable automatons: an automatic wine dispenser, a soap and towels dispenser and an orchestra-automaton that operated by the force of water. This latter invention was meant to liven up parties and banquets with music while floating on a pond, lake or fountain.
As the water flowed, it started a rotating drum with pegs that, in turn, moved levers whose movement produced different sounds and movements. As the pegs responsible for the musical notes could be exchanged for different ones in order to interpret another melody, it is considered one of the first programmable machines in history.
AI is often used interchangeably with ‘robot’ but they aren’t the same. Not all robots have AI integrated into their processes. At its simplest, AI is an algorithm or set of algorithms, which may ‘live’ in a CPU and be effectively invisible, or ‘live’ in or make use of some kind of machine and/or humanlike body. As the experts have noted, artificial intelligence is a slippery concept.
I expect many of the show’s shortcomings (as perceived by me) are due to money and/or scheduling issues. For example, Ai-Da was at the Venice Biennale and if there was a choice between the VAG and the Biennale, I know where I’d be.
Even with those caveats in mind, it is a bit surprising that there were no examples of wearable technology. For example, Toronto’s Tapestry Opera recently performed R.U.R. A Torrent of Light (based on the word ‘robot’ from Karel Čapek’s play, R.U.R., ‘Rossumovi Univerzální Roboti’), from my May 24, 2022 posting,
“This stunning new opera combines dance, beautiful multimedia design, a chamber orchestra including 100 instruments creating a unique electronica-classical sound, and wearable technology [emphasis mine] created with OCAD University’s Social Body Lab, to create an immersive and unforgettable science-fiction experience.”
And, from later in my posting,
“Despite current stereotypes, opera was historically a launchpad for all kinds of applied design technologies. [emphasis mine] Having the opportunity to collaborate with OCAD U faculty is an invigorating way to reconnect to that tradition and foster connections between art, music and design, [emphasis mine]” comments the production’s Director Michael Hidetoshi Mori, who is also Tapestry Opera’s Artistic Director.
That last quote brings me back to my comment about theatre and the performing arts not being part of the show. Of course, the curators couldn’t do it all, but a website with my hoped-for background and additional information could have helped to solve the problem.
The absence of the theatrical and performing arts in the VAG’s ‘Imitation Game’ is a bit surprising as the Council of Canadian Academies (CCA) in their third assessment, “Competing in a Global Innovation Economy: The Current State of R&D in Canada” released in 2018 noted this (from my April 12, 2018 posting),
Canada, relative to the world, specializes in subjects generally referred to as the humanities and social sciences (plus health and the environment), and does not specialize as much as others in areas traditionally referred to as the physical sciences and engineering. Specifically, Canada has comparatively high levels of research output in Psychology and Cognitive Sciences, Public Health and Health Services, Philosophy and Theology, Earth and Environmental Sciences, and Visual and Performing Arts. [emphasis mine] It accounts for more than 5% of world research in these fields. Conversely, Canada has lower research output than expected in Chemistry, Physics and Astronomy, Enabling and Strategic Technologies, Engineering, and Mathematics and Statistics. The comparatively low research output in core areas of the natural sciences and engineering is concerning, and could impair the flexibility of Canada’s research base, preventing research institutions and researchers from being able to pivot to tomorrow’s emerging research areas. [p. xix Print; p. 21 PDF]
I was a little surprised that the show was so centered on work from the US, given that Grenville has curated at least one show with significant input from artists based in Asia. Both Japan and Korea are very active with regard to artificial intelligence and it’s hard to believe that their artists haven’t kept pace. (I’m not as familiar with China and its AI efforts, other than in the field of facial recognition, but it’s hard to believe their artists aren’t experimenting.)
The Americans, of course, are very important developers in the field of AI, but they are not alone, and it would have been nice to have seen something from Asia and/or Africa and/or one of the other Americas; in fact, anything that takes us out of the same old, same old. (Luba Elliott wrote this (2019/2020/2021?) essay, “Artificial Intelligence Art from Africa and Black Communities Worldwide” on Aya Data, if you want to get a sense of some of the activity on the African continent. Elliott does seem to conflate Africa and Black communities; for some clarity you may want to check out the Wikipedia entry on Africanfuturism, which contrasts with this August 12, 2020 essay by Donald Maloba, “What is Afrofuturism? A Beginner’s Guide.” Maloba also conflates the two.)
I promise I haven’t turned into a flag-waving zealot, my friend. It’s just odd there isn’t a bit more, given that machine learning was pioneered at the University of Toronto. Here’s more about that (from the Wikipedia entry for Geoffrey Hinton),
Geoffrey Everest Hinton CC FRS FRSC (born 6 December 1947) is a British-Canadian cognitive psychologist and computer scientist, most noted for his work on artificial neural networks.
Hinton received the 2018 Turing Award, together with Yoshua Bengio [Canadian scientist] and Yann LeCun, for their work on deep learning. They are sometimes referred to as the “Godfathers of AI” and “Godfathers of Deep Learning”, and have continued to give public talks together.
Some of Hinton’s work was started in the US but, since 1987, he has pursued his interests at the University of Toronto. His approach wasn’t vindicated until 2012. Katrina Onstad’s February 29, 2018 article (Mr. Robot) for Toronto Life is a gripping read about Hinton and his work on neural networks. BTW, Yoshua Bengio (co-Godfather) is a Canadian scientist at the Université de Montréal and Yann LeCun (co-Godfather) is a French scientist at New York University.
Then there’s another contribution: our government was the first in the world to develop a national artificial intelligence strategy. Adding those developments to the CCA ‘State of Science’ report findings about the visual and performing arts, is there another word besides ‘odd’ to describe the lack of Canadian voices?
You’re going to point out the installation by Ben Bogart (a member of Simon Fraser University’s Metacreation Lab for Creative AI and an instructor at the Emily Carr University of Art + Design (ECU)) but it’s based on the iconic US sci-fi film, 2001: A Space Odyssey. As for the other Canadian, Sougwen Chung, she left Canada pretty quickly to get her undergraduate degree in the US and has since moved to the UK. (You could describe hers as the quintessential Canadian success story, i.e., moving away from Canada only to get noticed at home after success elsewhere.)
In 2019, Bruce Grenville, Senior Curator at Vancouver Art Gallery, approached [the] Centre for Digital Media to collaborate on several industry projects for the forthcoming exhibition. Four student teams tackled the project briefs over the course of the next two years and produced award-winning installations that are on display until October 23.
Basically, my friend, it would have been nice to see other voices or, at the least, an attempt at representing other voices and visual cultures informed by AI. As for Canadian contributions, maybe put something on the VAG website?
Playing well with others
It’s always a mystery to me why the Vancouver cultural scene seems composed of a set of silos or closely guarded kingdoms. Reaching out to the public library and other institutions, such as Science World, might have cost time but could have enhanced the show.
For example, one of the branches of the New York Public Library ran a programme called, “We are AI” in March 2022 (see my March 23, 2022 posting about the five-week course, which was run as a learning circle). The course materials are available for free (We are AI webpage) and I imagine that adding a ‘visual culture module’ wouldn’t be that difficult.
There is one (rare) example of some Vancouver cultural institutions getting together to offer an art/science programme: in 2017, the Morris and Helen Belkin Gallery (at the University of British Columbia; UBC) hosted an exhibition of Santiago Ramón y Cajal’s work (see my Sept. 11, 2017 posting about the gallery show). Alongside that show was an ancillary event held by the folks at Café Scientifique at Science World, featuring a panel of professionals from UBC’s Faculty of Medicine and Dept. of Psychology discussing Cajal’s work.
In fact, where were the science and technology communities for this show?
On a related note, the 2022 ACM SIGGRAPH conference (August 7 – 11, 2022) is being held in Vancouver. (ACM is the Association for Computing Machinery; SIGGRAPH is for Special Interest Group on Computer Graphics and Interactive Techniques.) SIGGRAPH has been holding conferences in Vancouver every few years since at least 2011.
… Gwenyth Chao to learn more about what happened to the honeybees and hives in Oxman’s Synthetic Apiary project. As a transdisciplinary artist herself, Chao will also discuss the relationship between art, science, technology and design. She will then guide participants to create a space (of any scale, from insect to human) inspired by patterns found in nature.
Hopefully there will be more events inspired by specific ‘objects’. Meanwhile, on August 12, 2022, the VAG is hosting,
… in partnership with the Canadian Music Centre BC, New Music at the Gallery is a live concert series hosted by the Vancouver Art Gallery that features an array of musicians and composers who draw on contemporary art themes.
Highlighting a selection of twentieth- and twenty-first-century music compositions, this second concert, inspired by the exhibition The Imitation Game: Visual Culture in the Age of Artificial Intelligence, will spotlight the Illiac Suite (1957), the first piece ever written using only a computer, and Kaija Saariaho’s Terra Memoria (2006), which is in large part dependent on a computer-generated musical process.
It would be lovely if they could include an Ada Lovelace Day event. This is an international celebration held on October 11, 2022.
A quick reminder, ARPICO stands for the Society of Italian Researchers & Professionals in Western Canada and while the upcoming speaker, Jason Halter, doesn’t seem to be Italian, his topic is quintessentially so.
From a November 5, 2021 ARPICO announcement (received via email),
After an extended break since our last (virtual) public event of last April and an unusually difficult summer, for BC in general and ARPICO in particular, we are happy to announce that our activity is restarting this fall. Our next event features a very enticing lecture presenting us with a story that neatly straddles art, science, and history, around one of the most intriguing portraits of the Renaissance, if not ever, by the great Leonardo Da Vinci. Modern day Renaissance man, designer, architect, historian and lover of Italia, Prof. Jason Halter will give us an account of his role, in collaboration with experts in other fields, in the uncovering of the so-called Earlier Mona Lisa, and verifying its authenticity. …
The lecture will take place on November 18th, 2021 at 7:00PM and will be hosted virtually, as our last few events have been. We continue to use BlueJeans as our videoconferencing platform, for which you will only require a web browser (Chrome, Firefox, Edge, Safari, Opera are all supported). Full detailed instructions on how the virtual event will unfold are available on the EventBrite listing here in the Technical Instruction section.
If participants wish to donate to ARPICO, this can be done within EventBrite; this would be greatly appreciated in order to help us continue to build upon our scholarship fund, and to defray the cost of the videoconferencing license.
The announcement goes on to provide details about the topic and the speaker,
Leonardo da Vinci’s Earlier (Isleworth) Mona Lisa:
Time Travel, Pattern Recognition, and the Scientific Method
A fascinating presentation and discussion of Da Vinci’s Earlier Mona Lisa in the context of the paper of the same title that was published in Leonardo Da Vinci’s Mona Lisa: New Perspectives, by Jean-Pierre Isbouts (Ed). Art historians have long debated the question why sources about the origin of the Mona Lisa portrait provide conflicting information. This monograph presents a solution for this quandary: those 16th-century sources don’t agree because they are not talking about the same painting. Jason Halter is one of a team of leading scholars and experts who have contributed to the veracity and authentication of this painting and the process has necessitated embracing technology and methods offered by science, which had not been uncovered before.
Design, Art & Architecture occupy a central position in the practice of Jason Halter & Wonder Inc. Having gained his formative experience under the tutelage of one of the world’s most important designers, Bruce Mau, Jason has won international acclaim for his innovative approach to design & art production. His unfettered curiosity & ability to realize ideas have made him intuit & manifest design solutions in new & novel ways.
As a Renaissance scholar, Halter spent several years teaching art & architecture of the late Gothic and early & late Renaissance in Florence and Rome, having held faculty positions with the University of Toronto & the University of British Columbia. He holds several degrees in history & architecture, & was awarded the prestigious Syracuse Fellowship during his post graduate work in Italy.
Halter was recently invited by the Mona Lisa Foundation, a consortium based in Zurich, Switzerland, to assist in the marketing and research for the ‘Earlier Mona Lisa’, 1503, by Leonardo Da Vinci. Contributing an article entitled ‘Time Travel. Pattern Recognition & the Scientific Method’, to the recent book entitled ‘Earlier Mona Lisa – New Perspectives’, published by the Fielding Graduate University, this new scholarship has established a series of insights and theories regarding this incredibly important artwork by Da Vinci, engaging new vital scientific investigation with critical cultural expertise on the work. The book was released in April 2019, ahead of an exhibition of the ‘Earlier Mona Lisa’ at Palazzo Bastogi in Florence, Italy in June of 2019, corresponding with the 500th anniversary of the passing of Leonardo da Vinci in 1519.
ARPICO offers an overview for how the night will proceed,
WHEN (EVENT): Thurs, November 18th, 2021 at 7:00PM (BlueJeans link active at 6:45PM)
WHERE: Online using the BlueJeans Conferencing platform.
The evening agenda is as follows:
6:45PM – BlueJeans Presentation link becomes active and registrants may join.
7:00pm – Start of the evening Event with introductions & lecture by Prof. Jason Halter
8:00 pm – Q & A Period via BlueJeans Chat Interface
Tickets are Needed
Tickets for this event are FREE. Due to limited seating at the venue, we ask that each household register once and watch the presentation together on a single device. You will receive the event videoconferencing invite link via email in your registration confirmation.
Can I update my registration information? Yes. If you have any questions, contact us at email@example.com
I am having trouble using EventBrite and cannot reserve my ticket(s). Can someone at ARPICO help me with my ticket reservation? Of course, simply send your ticket request to us at firstname.lastname@example.org so we help you.
I found this about the BlueJeans Conferencing platform on the ‘Leonardo da Vinci’s Earlier (Isleworth) Mona Lisa: Time Travel, Pattern Recognition, and the Scientific Method’ registration page,
The event will be managed via the videoconferencing platform BlueJeans Meetings, by clicking on the link that will be emailed to each registered individual (to the email address provided). Please, note that the event will not be active until 6:45 pm on the day of the event.
At that time, clicking on the link will automatically let you join the event via your web browser (Chrome, Firefox, Safari, Opera should all work smoothly). You are NOT required to download or install anything to your computer. The entire video stream will occur inside your web browser window just like any other website you might visit. There is no security risk or risk to your personal information. You can always join the event late, as this will not interfere with the presentation.
When you open the link you will be prompted to input a guest name. Please use your name that will allow us to identify you, and continue. On the following screen you may be prompted by your browser (depending on your settings) to allow access to use your computer’s microphone and camera. You do not need to approve these if you do not plan to talk or be seen at any time during the Q&A segment. Upon joining you should see a screen similar to the sample image seen below where the various icons superimposed on the pictures of participants will show when you hover the mouse pointer over the BlueJeans browser window.
By default, your system’s camera will be turned on and your microphone will be turned off. If you do not wish to show your face, you can of course do that by clicking on the camera icon like the one on the bottom right of the sample screenshot provided. We ask that you keep your microphone muted, since any background sounds and noises from your environment will be audible and may interfere with the speaker’s voice.
As we have done for the in-person events, we will be recording our virtual ones for future reference.
At any time during the lecture, participants will be able to post comments or questions for the speaker via the “chat” button also visible by hovering over the BlueJeans browser window. The moderator will read them for the speaker by way of a Q&A session at the end of the lecture.
In the days following the event we will be sending all participants a succinct feedback form, which we encourage you to fill in and send back to us.
A little background
It seems this talk is the outcome of a Mona Lisa Foundation initiative, which resulted in a 2019 book (mentioned earlier), Leonardo Da Vinci’s Mona Lisa: New Perspectives, by Jean-Pierre Isbouts (editor).
Few works of art have garnered as much attention from experts and the public as the ‘Mona Lisa’ in the Louvre Museum. By contrast, the ‘Earlier Mona Lisa’ has spent much of its existence hidden from view. Despite this, on the few occasions the painting has been available to be viewed, significant expert opinion has been recorded.
It is probably fair to say that attributing a painting to Leonardo da Vinci with certainty is one of the most difficult tasks in the field of Old Master paintings. To date there are about 18 to 20 paintings “more or less” attributed to Leonardo. One states “more or less” since there is not even one painting about which all the recognized Da Vinci experts agree. It is even disputed whether some parts of the famous ‘Mona Lisa’ portrait in the Louvre are not by the master. One famous expert said that attributing a painting to Leonardo is like “holding in one’s hand a burning iron rod.”
Nonetheless, it is generally accepted that he did paint all or at least the essential parts of those paintings currently attributed to him. In the case of a Da Vinci portrait, an attribution is generally agreed upon if the artist painted only the face, while some experts argue that it is even enough if Da Vinci had simply conceived the structure of the painting. It should be noted that some of his pupils and followers had great talent. The well-known ‘Lady with an Ermine’ and ‘La Belle Ferronière’ represent only recent attributions to Leonardo, having been attributed to pupils for almost 400 years.
Professor Jean-Pierre Isbouts says, “Every interpretation is subject to subsequent dispute. When you look at dating, when you look at authorship, when you look at provenance. So I think it’s just part of the world we live in that Leonardo scholarship happens to be a debating society whether you like it or not.”
The essay goes on to detail the key elements for establishing attribution and presents some contrarian views.
Jason Halter and science
I wish there were a little more detail about the science that Halter will be discussing. Halter’s science background seems to be confined to his work in architecture, which suggests materials science. On the other hand, pattern recognition suggests algorithms and artificial intelligence.
As for Bruce Mau’s influence, mentioned as a colleague and mentor in Halter’s biographical details, Mau is a big deal in Canadian design circles who has an amateur’s interest (like mine) in science if his 2004 Massive Change show at the Vancouver Art Gallery is an accurate indicator. The show featured a bioengineered nose being grown in a beaker. (More about Massive Change and bioengineering in my February 21, 2013 posting.)
Happy Italian Research in the World Day! Each year since 2018, this has been celebrated on April 15, the anniversary of Leonardo da Vinci’s birth more than 500 years ago. It’s also the start of World Creativity and Innovation Week (WCIW), April 15 – 21, 2021, with over 80 countries (Italy, The Gambia, Mauritius, Belarus, Iceland, US, Syria, Vietnam, Indonesia, Denmark, etc.) celebrating. By the way, April 21, 2021 is the United Nations’ World Creativity and Innovation Day. Now, onto some of the latest research, coming from Italy, on art conservation.
From Los Angeles and the Lower East Side of New York City to Paris and Penang, street art by famous and not-so-famous artists adorns highways, roads and alleys. In addition to creating social statements, works of beauty and tourist attractions, street art sometimes attracts vandals who add their unwanted graffiti, which is hard to remove without destroying the underlying painting. Now, researchers report novel, environmentally friendly techniques that quickly and safely remove over-paintings on street art.
“For decades, we have focused on cleaning or restoring classical artworks that used paints designed to last centuries,” says Piero Baglioni, Ph.D., the project’s principal investigator. “In contrast, modern art and street art, as well as the coatings and graffiti applied on top, use materials that were never intended to stand the test of time.”
Research fellow Michele Baglioni, Ph.D., (no relation to Piero Baglioni) and coworkers built on their colleagues’ work and designed a nanostructured fluid, based on nontoxic solvents and surfactants, loaded in highly retentive hydrogels that very slowly release cleaning agents to just the top layer — a few microns in depth. The undesired top layer is removed in seconds to minutes, with no damage or alteration to the original painting.
Street art and overlying graffiti usually contain one or more of three classes of paint binders — acrylic, vinyl or alkyd polymers. Because these paints are similar in composition, removing the top layer frequently damages the underlying layer. Until now, the only way to remove unwanted graffiti was by using chemical cleaners or mechanical action such as scraping or sand blasting. These traditional methods are hard to control and often damage the original art.
“We have to know exactly what is going on at the surface of the paintings if we want to design cleaners,” explains Michele Baglioni, who is at the University of Florence (Italy). “In some respects, the chemistry is simple — we are using known surfactants, solvents and polymers. The challenge is combining them in the right way to get all the properties we need.”
Michele Baglioni and coworkers used Fourier transform infrared spectroscopy to characterize the binders, fillers and pigments in the three classes of paints. After screening for suitable low-toxicity, “green” solvents and biodegradable surfactants, he used small angle X-ray scattering analyses to study the behavior of four alkyl carbonate solvents and a biodegradable nonionic surfactant in water.
The final step was formulating the nanostructured cleaning combination. The system that worked well also included 2-butanol and a readily biodegradable alkyl glycoside hydrotrope as co-solvents/co-surfactants. Hydrotropes are water-soluble, surface-active compounds used at low levels that allow more concentrated formulations of surfactants to be developed. The system was then loaded into highly retentive hydrogels and tested for its ability to remove overpaintings on laboratory mockups using selected paints in all possible combinations.
After dozens of tests, which helped determine how long the gel should be applied and removed without damaging the underlying painting, he tested the gels on a real piece of street art in Florence, successfully removing graffiti without affecting the original work.
“This is the first systematic study on the selective and controlled removal of modern paints from paints with similar chemical composition,” Michele Baglioni says. The hydrogels can also be used for the removal of top coatings on modern art that were originally intended to preserve the paintings but have turned out to be damaging. The hydrogels will become available commercially from CSGI Solutions for Conservation of Cultural Heritage, a company founded by Piero Baglioni and others. CSGI, the Center for Colloid and Surface Science, is a university consortium mainly funded through programs of the European Union.
And, there was this after the end of the news release,
ARPICO (Society of Italian Researchers & Professionals in Western Canada) is presenting a pre-celebration event to honour Italian Research in the World Day (April 15, 2021). Take special note: the event is being held the day before.
Before launching into the announcement, bravo to the organizers! ARPICO consistently offers the most comprehensive details about their events of any group that contacts me. One more thing: to date, they are the only group that has described which technology they’re using for the webcast and explicitly addressed any concerns about downloading software (you don’t have to) or about personal information. (Check out the Technical Instruction section here.)
Here are the details from ARPICO’s April 4, 2021 announcement (received via email),
We hope everyone is doing well and being safe while we attempt to outlast this pandemic. In the meanwhile, from the comfort of our homes, we hope to be able to continue to share with you informative lectures to entertain and stimulate thought.
It is our pleasure, in collaboration with the Consulate General of Italy in Vancouver, to announce that ARPICO’s next public event will be held on April 14th, 2021 at 7:00 PM, in celebration of Italian Research in the World Day. Italian Research in the World Day was instituted starting in 2018 as part of the Piano Straordinario “Vivere all’Italiana” – Giornata della ricerca Italiana nel mondo. The celebration day was chosen by government decree to be every year on April 15 on the anniversary of the birth of Leonardo da Vinci.
The main objective of the Italian Research Day in the World is to value the quality and competencies of Italian researchers abroad, but also to promote concrete actions and investments to allow Italian researchers to continue pursuing their careers in their homeland. Italy wishes to enable Italian talents to return from abroad as well as to become an attractive environment for foreign researchers.
This year we are pleased to have Professor Marco Musiani, an academic in biological sciences, share with us a lecture titled “Wolves, Livestock, and the Physical and Social Environments.” An abstract and short professional biography are provided below.
We have chosen BlueJeans as the videoconferencing platform, for which you will only require a web browser (Chrome, Firefox, Edge, Safari, Opera are all supported). Full detailed instructions on how the virtual event will unfold are available on the EventBrite listing here in the Technical Instruction section.
If participants wish to donate to ARPICO, this can be done within EventBrite; this would be greatly appreciated in order to help us continue to build upon our scholarship fund, and to defray the cost of the videoconferencing license.
We look forward to seeing everyone there.
The evening agenda is as follows:
6:45PM – BlueJeans Presentation link becomes active and registrants may join.
If you experience any technical difficulties please email us at email@example.com and we will attempt to assist you as best we can.
7:00pm – Start of the evening Event with introductions & lecture by Prof. Marco Musiani
~8:00 pm – Q & A Period via BlueJeans Chat Interface
If you have not already done so, please register for the event by visiting the EventBrite link or RSVPing to firstname.lastname@example.org.
Wolves, Livestock, and the Physical and Social Environments
Due primarily to wolf predation on livestock (depredation), some groups oppose wolf (Canis lupus) conservation, which is an objective for large sectors of the public. Prof. Musiani’s talk will compare wolf depredation of sheep in Southern Europe to wolf depredation of beef cattle in the US and Canada, taking into account the differences in social and economic contexts. It will detail where and when wolf attacks happen, and what environmental factors promote such attacks. Livestock depredation by wolves is a cost of wolf conservation borne by livestock producers, which creates conflict between producers, wolves and organizations involved in wolf conservation and management. Compensation is the main tool used to mitigate the costs of depredation, but this tool may be limited at improving tolerance for wolves. In poorer countries compensation funds might not be available. Other lethal and nonlethal tools used to manage the problem will also be analysed. Wolf depredation may be a small economic cost to the industry, although it may be a significant cost to affected producers as these costs are not equitably distributed across the industry. Prof. Musiani maintains that conservation groups should consider the potential consequences of all of these ecological and economic trends. Specifically, declining sheep or cattle price and the steady increase in land price might induce conversion of agricultural land to rural-residential developments, which could negatively impact the whole environment via large scale habitat change and increased human presence.
Marco Musiani is a Professor in the Dept. of Biological Sciences, Faculty of Science, University of Calgary. He also has a Joint Appointment with the Faculty of Veterinary Medicine in Calgary. His lab has a strong focus on landscape ecology, molecular ecology, and wildlife conservation.
Marco is Principal Investigator on projects on caribou, elk, moose, wolves, grizzlies and other wildlife species throughout the Rocky Mountains and Foothills regions of Canada. All such projects are run together with graduate students and have applications towards impact assessment, mainly of human infrastructure.
His focus is on academic matters. However, he also serves as reviewer for research and management projects, and acted as a consultant for the Food and Agriculture Organisation of the United Nations (working on conflicts with wolves).
WHEN (EVENT): Wed, April 14th, 2021 at 7:00PM (BlueJeans link active at 6:45PM)
WHERE: Online using the BlueJeans Conferencing platform.
Tickets for this event are FREE. Due to limited seating at the venue, we ask that each household register once and watch the presentation together on a single device. You will receive the event videoconferencing invite link via email in your registration confirmation.
Can I update my registration information? Yes. If you have any questions, contact us at email@example.com
I am having trouble using EventBrite and cannot reserve my ticket(s). Can someone at ARPICO help me with my ticket reservation? Of course, simply send your ticket request to us at firstname.lastname@example.org so we help you.
It seems May 2019 is destined to be a big month where science events in Canada are concerned. I have three national science promotion programmes, Science Odyssey, Science Rendezvous, and Pint of Science Festival Canada (part of an international effort); two local (Vancouver, Canada) events, an art/sci café from Curiosity Collider and a SciCats science communication workshop; a national/local event at Ingenium’s Canada Science and Technology Museum in Ottawa; and an international social media (Twitter) event called #MuseumWeek.
Science Odyssey 2019 (formerly Science and Technology Week)
In 2016, the federal Liberal government rebranded a longstanding science promotion/education programme known as Science and Technology Week as Science Odyssey and moved it from the autumn to the spring. (Should you be curious about this change, there’s a video on YouTube with Minister of Science Kirsty Duncan and Parliamentary Secretary for Science Terry Beech launching “Science Odyssey, 10 days of innovation and science discovery.” My May 10, 2016 posting provides more details about the change.)
Moving forward to the present day, the 2019 edition of Science Odyssey will run from May 4 – May 19, 2019, for a whopping 16 days. The Science Odyssey website can be found here.
Once you get to the website and choose your language, scroll down on the page where you land to find an option to choose a location. (Ignore the map until after you’ve successfully chosen a location and clicked on the filter button; it took me at least two tries before achieving success, so this seems to be a hit-and-miss affair.) Once you have applied the filter, the map will change and make more sense, but I preferred the text list that appears after the filter has been applied. Should you click on the map, you will lose the filtered text list and have to start over.
Science Rendezvous 2019
I’m not sure I’d call Science Rendezvous the largest science festival in Canada (it seems to me Beakerhead might have a chance at that title) but it did start in 2008 as its Wikipedia entry mentions (Note: Links have been removed),
Science Rendezvous is the largest [emphasis mine] science festival in Canada; its inaugural event happened across the Greater Toronto Area (GTA) on Saturday, May 10, 2008. By 2011 the event had gone national, with participation from research institutes, universities, science groups and the public from all across Canada – from Vancouver to St. John’s to Inuvik. Science Rendezvous is a registered not-for-profit organization dedicated to making great science accessible to the public. The 2017 event took place on Saturday May 13 at more than 40 simultaneous venues.
This free all-day event aims to highlight and promote great science in Canada. The target audience is the general public, parents, children and youth, with an ultimate aim of improving enrollment and investment in sciences and technology in the future.
Science Rendezvous is being held on May 11, 2019 and its website can be found here. You can find events listed by province here. There are no entries for Alberta, Nunavut, or Prince Edward Island this year.
Science Rendezvous seems to have a relationship with Science Odyssey; my guess is that it receives funds. In any case, you may find that an event on the Science Rendezvous site is also on the Science Odyssey site or vice versa, depending on where you start.
Pint of Science Festival (Canada)
The 2019 Pint of Science Festival will be held in 25 cities across Canada from May 20 – 22, 2019. Like the Café Scientifique events (Vancouver, Canada), where science and beer are closely interlinked, the Pint of Science Festival pairs the two; it has its roots in the UK. (Later, I have something about Guelph, Ontario and its ‘beery’ 2019 Pint event.)
Here’s some history about the Canadian inception and its UK progenitor. From the Pint of Science Festival Canada website, the About Us page,
About Us Pint of Science is a non-profit organisation that brings some of the most brilliant scientists to your local pub to discuss their latest research and findings with you. You don’t need any prior knowledge, and this is your chance to meet the people responsible for the future of science (and have a pint with them). Our festival runs over a few days in May every year, but we occasionally run events during other months.
History In 2012 Dr Michael Motskin and Dr Praveen Paul were two research scientists at Imperial College London in the UK. They started and organised an event called ‘Meet the Researchers’. It brought people affected by Parkinson’s, Alzheimer’s, motor neurone disease and multiple sclerosis into their labs to show them the kind of research they do. It was inspirational for both visitors and researchers. They thought if people want to come into labs to meet scientists, why not bring the scientists out to the people? And so Pint of Science was born. In May 2013 they held the first Pint of Science festival in just three UK cities. It quickly took off around the world and is now in nearly 300 cities. Read more here. Pint of Science Canada held its first events in 2016; a full list of locations can be found here.
I clicked on ‘Vancouver’ and found a range of bars, dates, and topics. It’s worth checking out every topic because the title doesn’t necessarily get the whole story across. Kudos to the team putting this together. Where these things are concerned, I don’t get surprised often, but here’s how it happened: I was expecting another space travel story when I saw this title, ‘Above and beyond: planetary science’. After clicking on the arrow,
Geology isn’t just about the Earth beneath our feet. Join us for an evening out of this world to discover what we know about the lumps of rock above our heads too!
Thank you for the geology surprise. As for the international part of this festival, you can find at least one bar in Europe, Asia and Australasia, the Americas, and Africa.
Beer and Guelph (Ontario)
I also have to tip my hat to Science Borealis (Canada’s science blog aggregator) for the tweet which led me to Pint of Science Guelph and a very special beer/science festival announcement,
Pint of Science Guelph will be held over three nights (May 20, 21, and 22) at six different venues, and will feature twelve different speakers. Each venue will host two speakers with talks ranging from bridging the digital divide to food fraud to the science of bubbles and beer. There will also be trivia and lots of opportunity to chat with the various researchers to learn more about what they do, and why they do it.
But wait! There’s more! Pint of Science Guelph is (as far as I’m aware) the first Pint of Science (2019) in Canada to have its own beer. Thanks to the awesome folks at Wellington Brewery, a small team of Pint of Science Guelph volunteers and speakers spent last Friday at the brewery learning about the brewing process by making a Brut IPA. This tasty beverage will be available as part of the Pint of Science celebration. Just order it by name – Brain Storm IPA.
The event starts promptly at 8pm (doors open at 7:30pm). $5.00-10.00 (sliding scale) cover at the door. Proceeds will be used to cover the cost of running this event, and to fund future Curiosity Collider events. Curiosity Collider is a registered BC non-profit organization.
SciCATs (Science Communication Action Team, uh, something) is a collective of science communicators (and cat fans) providing free, open source, online, skills-based science communication training, resources, and in-person workshops.
We believe that anyone, anywhere should be able to learn the why and the how of science communication!
For the past two years, SciCATs has been developing online resources and delivering science communication workshops to diverse groups of those interested in science communication. We are now hosting an open, public event to help a broader audience of those passionate about science to mix, mingle, and build their science communication skills – all while having fun.
SciCATs’ Fundamentals of Science Communication is a three-hour interactive workshop [emphasis mine] followed by one hour of networking.
For this event, our experienced SciCATs facilitators will lead the audience through our most-requested science communication modules:
Why communicate science
Finding your message
Telling your science as a story
Understanding your audience [emphasis mine]
This workshop is ideal for people who are new to science communication [emphasis mine] or those who are more experienced. You might be an undergraduate or graduate student, researcher, technician, or in another role with an interest in talking to the public about what you do. Perhaps you just want to hang out and meet some local science communicators. This is a great place to do it!
After the workshop we have a reservation at Chaqui Grill (1955 Cornwall); it will be a great opportunity to continue to network with all of the SciCATs and science communicators that attend over a beverage! They do have a full dinner menu as well.
Date and Time: Sun, May 5, 2019, 2:00 PM – 5:00 PM PDT
Location: H.R. MacMillan Space Centre, 1100 Chestnut Street, Vancouver, BC V6J 3J9
Refund Policy: Refunds up to 1 day before event
You can find out more about SciCATs and its online resources here.
da Vinci in Canada from May 2 to September 2, 2019
This show is a big deal and it’s about to open in Ottawa in our national Science and Technology Museum (one of the Ingenium museums of science), which makes it national in name and local in practice since most of us will not make it to Ottawa during the show’s run.
Canada Science and Technology Museum from May 2 to September 2, 2019.
For the first time in Canada, the Canada Science and Technology Museum presents Leonardo da Vinci – 500 Years of Genius, the most comprehensive exhibition experience on Leonardo da Vinci to tour the world. Created by Grande Exhibitions in collaboration with the Museo Leonardo da Vinci in Rome and a number of experts and historians from Italy and France, this interactive experience commemorates 500 years of Leonardo’s legacy, immersing visitors in his extraordinary life like never before.
Demonstrating the full scope of Leonardo da Vinci’s achievements, Leonardo da Vinci – 500 Years of Genius celebrates one of the most revered and dynamic intellects of all time. Revolutionary SENSORY4™ technology allows visitors to take a journey into the mind of the ultimate Renaissance man for the very first time.
Discover for yourself the true genius of Leonardo as an inventor, artist, scientist, anatomist, engineer, architect, sculptor and philosopher. See and interact with over 200 unique displays, including machine inventions, life-size reproductions of Leonardo’s Renaissance art, entertaining animations giving insight into his most notable works, and touchscreen versions of his actual codices.
Leonardo da Vinci – 500 Years of Genius also includes the world’s exclusive Secrets of Mona Lisa exhibition – an analysis of the world’s most famous painting, conducted at the Louvre Museum by renowned scientific engineer, examiner and photographer of fine art Pascal Cotte.
Whether you are a history aficionado or discovering Leonardo for the first time, Leonardo da Vinci – 500 Years of Genius is an entertaining, educational and enlightening experience the whole family will love.
For a change I’ve placed the video after its transcript,
The April 30, 2019 Ingenium announcement (received via email) hints at something a little more exciting than walking around and looking at cases,
Discover the true genius of Leonardo as an inventor, artist, scientist, anatomist, engineer, architect, sculptor, and philosopher. See and interact with more than 200 unique displays, including machine inventions, life-size reproductions of Leonardo’s Renaissance art, touchscreen versions of his life’s work, and an immersive, walkthrough cinematic experience. Leonardo da Vinci – 500 Years of Genius [includes information about entry fees] the exclusive Secrets of Mona Lisa exhibition – an analysis of the world’s most famous painting.
I imagine there will be other events associated with this exhibition but for now there’s an opening night event, which is part of the museum’s Curiosity on Stage series (ticket purchase here),
Curiosity on Stage: Evening Edition – Leonardo da Vinci: 500 Years of Genius
Join the Italian Embassy and the Canada Science and Technology Museum for an evening of discussion and discovery on the quintessential Renaissance man, Leonardo da Vinci. Invited speakers from the Galileo Museum in Italy, Carleton University, and the University of Ottawa will explore the historical importance of da Vinci’s diverse body of work, as well as the lasting impact of his legacy on science, technology, and art in our age.
Be among the first to visit the all-new exhibition “Leonardo da Vinci – 500 Years of Genius”! Your Curiosity on Stage ticket will grant you access to the exhibit in its entirety, which includes life-size reproductions of Leonardo’s art, touchscreen versions of his codices, and so much more!
Speakers:
Andrea Bernardoni (Galileo Museum) – Senior Researcher
Angelo Mingarelli (Carleton University) – Mathematician
Hanan Anis (University of Ottawa) – Professor in Electrical and Computer Engineering
Lisa Leblanc (Canada Science and Technology Museum) – Director General; Panel Moderator
Join the conversation and share your thoughts using the hashtag #CuriosityOnStage.
Agenda:
5:00 – 6:30 pm: Explore the “Leonardo da Vinci: 500 Years of Genius” exhibit. Light refreshments and networking opportunities.
6:30 – 8:30 pm: Presentations and panel discussion
Cost:
Members: $7
Students: $7 with discount code “SALAI” (valid student ID required on night of event)
Non-members: $10
*Parking fees are included with admission.
Tickets are not yet sold out.
#MuseumWeek 2019
#MuseumWeek (website) is being billed as “the first worldwide cultural event on social networks.” The latest edition is being held from May 13 – 19, 2019. As far as I’m aware, it’s held exclusively on Twitter. You can check out the hashtag feed (#MuseumWeek), which is getting quite active even now.
They don’t have a list of participants for this year, which leaves me feeling a little sad. It’s kind of fun to check out how many and which institutions in your country are planning to participate. I would have liked to see whether the Canada Science and Technology Museum and Science World Vancouver will be there. (I think both participated last year.) Given how busy the hashtag feed becomes during the event, I’m not likely to spot them on it even if they’re tweeting madly.
May 2019 looks to be a very busy month for Canadian science enthusiasts! No matter where you are there is something for you.
This piece just started growing. It started with robot ethics, moved on to sexbots and news of an upcoming Canadian robotics roadmap, and then became a two-part posting, with the robotics strategy (roadmap) moving to part two along with robots and popular culture and a further exploration of robot and AI ethics issues.
What is a robot?
There are lots of robots, some are macroscale and others are at the micro and nanoscales (see my Sept. 22, 2017 posting for the latest nanobot). Here’s a definition from the Robot Wikipedia entry that covers all the scales. (Note: Links have been removed),
A robot is a machine—especially one programmable by a computer— capable of carrying out a complex series of actions automatically. Robots can be guided by an external control device or the control may be embedded within. Robots may be constructed to take on human form but most robots are machines designed to perform a task with no regard to how they look.
Robots can be autonomous or semi-autonomous and range from humanoids such as Honda’s Advanced Step in Innovative Mobility (ASIMO) and TOSY’s TOSY Ping Pong Playing Robot (TOPIO) to industrial robots, medical operating robots, patient assist robots, dog therapy robots, collectively programmed swarm robots, UAV drones such as General Atomics MQ-1 Predator, and even microscopic nano robots. [emphasis mine] By mimicking a lifelike appearance or automating movements, a robot may convey a sense of intelligence or thought of its own.
We may think we’ve invented robots but the idea has been around for a very long time (from the Robot Wikipedia entry; Note: Links have been removed),
Many ancient mythologies, and most modern religions include artificial people, such as the mechanical servants built by the Greek god Hephaestus (Vulcan to the Romans), the clay golems of Jewish legend and clay giants of Norse legend, and Galatea, the mythical statue of Pygmalion that came to life. Since circa 400 BC, myths of Crete include Talos, a man of bronze who guarded the Cretan island of Europa from pirates.
In ancient Greece, the Greek engineer Ctesibius (c. 270 BC) “applied a knowledge of pneumatics and hydraulics to produce the first organ and water clocks with moving figures.” In the 4th century BC, the Greek mathematician Archytas of Tarentum postulated a mechanical steam-operated bird he called “The Pigeon”. Hero of Alexandria (10–70 AD), a Greek mathematician and inventor, created numerous user-configurable automated devices, and described machines powered by air pressure, steam and water.
The 11th century Lokapannatti tells of how the Buddha’s relics were protected by mechanical robots (bhuta vahana yanta), from the kingdom of Roma visaya (Rome); until they were disarmed by King Ashoka.  
In ancient China, the 3rd century text of the Lie Zi describes an account of humanoid automata, involving a much earlier encounter between Chinese emperor King Mu of Zhou and a mechanical engineer known as Yan Shi, an ‘artificer’. Yan Shi proudly presented the king with a life-size, human-shaped figure of his mechanical ‘handiwork’ made of leather, wood, and artificial organs. There are also accounts of flying automata in the Han Fei Zi and other texts, which attributes the 5th century BC Mohist philosopher Mozi and his contemporary Lu Ban with the invention of artificial wooden birds (ma yuan) that could successfully fly. In 1066, the Chinese inventor Su Song built a water clock in the form of a tower which featured mechanical figurines which chimed the hours.
Su Song’s clock mechanism had a programmable drum machine with pegs (cams) that bumped into little levers that operated percussion instruments. The drummer could be made to play different rhythms and different drum patterns by moving the pegs to different locations.
In Renaissance Italy, Leonardo da Vinci (1452–1519) sketched plans for a humanoid robot around 1495. Da Vinci’s notebooks, rediscovered in the 1950s, contained detailed drawings of a mechanical knight now known as Leonardo’s robot, able to sit up, wave its arms and move its head and jaw. The design was probably based on anatomical research recorded in his Vitruvian Man. It is not known whether he attempted to build it.
In Japan, complex animal and human automata were built between the 17th to 19th centuries, with many described in the 18th century Karakuri zui (Illustrated Machinery, 1796). One such automaton was the karakuri ningyō, a mechanized puppet. Different variations of the karakuri existed: the Butai karakuri, which were used in theatre, the Zashiki karakuri, which were small and used in homes, and the Dashi karakuri which were used in religious festivals, where the puppets were used to perform reenactments of traditional myths and legends.
The term robot was coined by a Czech writer (from the Robot Wikipedia entry; Note: Links have been removed)
‘Robot’ was first applied as a term for artificial automata in a 1920 play R.U.R. by the Czech writer, Karel Čapek. However, Josef Čapek was named by his brother Karel as the true inventor of the term robot. The word ‘robot’ itself was not new, having been in Slavic language as robota (forced laborer), a term which classified those peasants obligated to compulsory service under the feudal system widespread in 19th century Europe (see: Robot Patent). Čapek’s fictional story postulated the technological creation of artificial human bodies without souls, and the old theme of the feudal robota class eloquently fit the imagination of a new class of manufactured, artificial workers.
I’m particularly fascinated by how long humans have been imagining and creating robots.
Robot ethics in Vancouver
The Westender has run what I believe is the first article by a local (Vancouver, Canada) mainstream media outlet on the topic of robots and ethics. Tessa Vikander’s Sept. 14, 2017 article highlights two local researchers, Ajung Moon and Mark Schmidt, and a local social media company’s (Hootsuite) analytics director, Nik Pai. Vikander opens her piece with an ethical dilemma (Note: Links have been removed),
Emma is 68, in poor health and an alcoholic who has been told by her doctor to stop drinking. She lives with a care robot, which helps her with household tasks.
Unable to fix herself a drink, she asks the robot to do it for her. What should the robot do? Would the answer be different if Emma owns the robot, or if she’s borrowing it from the hospital?
This is the type of hypothetical, ethical question that Ajung Moon, director of the Open Roboethics Initiative [ORI], is trying to answer.
According to an ORI study, half of respondents said ownership should make a difference, and half said it shouldn’t. With society so torn on the question, Moon is trying to figure out how engineers should be programming this type of robot.
A Vancouver resident, Moon is dedicating her life to helping those in the decision-chair make the right choice. The question of the care robot is but one ethical dilemma in the quickly advancing world of artificial intelligence.
At the most sensationalist end of the scale, one form of AI that’s recently made headlines is the sex robot, which has a human-like appearance. A report from the Foundation for Responsible Robotics says that intimacy with sex robots could lead to greater social isolation [emphasis mine] because they desensitize people to the empathy learned through human interaction and mutually consenting relationships.
I’ll get back to the impact that robots might have on us in part two but first,
Sexbots, could they kill?
For more about sexbots in general, Alessandra Maldonado wrote an Aug. 10, 2017 article for salon.com about them (Note: A link has been removed),
Artificial intelligence has given people the ability to have conversations with machines like never before, such as speaking to Amazon’s personal assistant Alexa or asking Siri for directions on your iPhone. But now, one company has widened the scope of what it means to connect with a technological device and created a whole new breed of A.I. — specifically for sex-bots.
Abyss Creations has been in the business of making hyperrealistic dolls for 20 years, and by the end of 2017, they’ll unveil their newest product, an anatomically correct robotic sex toy. Matt McMullen, the company’s founder and CEO, explains the goal of sex robots is companionship, not only a physical partnership. “Imagine if you were completely lonely and you just wanted someone to talk to, and yes, someone to be intimate with,” he said in a video depicting the sculpting process of the dolls. “What is so wrong with that? It doesn’t hurt anybody.”
Maldonado also embedded this video into her piece,
A friend of mine described it as creepy. Specifically we were discussing why someone would want to programme ‘insecurity’ as a desirable trait in a sexbot.
Marc Beaulieu’s concept of a desirable trait in a sexbot is one that won’t kill him according to his Sept. 25, 2017 article on Canadian Broadcasting News (CBC) online (Note: Links have been removed),
Harmony has a charming Scottish lilt, albeit a bit staccato and canny. Her eyes dart around the room, her chin dips as her eyebrows raise in coquettish fashion. Her face manages expressions that are impressively lifelike. That face comes in 31 different shapes and 5 skin tones, with or without freckles and it sticks to her cyber-skull with magnets. Just peel it off and switch it out at will. In fact, you can choose Harmony’s eye colour, body shape (in great detail) and change her hair too. Harmony, of course, is a sex bot. A very advanced one. How advanced is she? Well, if you have $12,332 CAD to put towards a talkative new home appliance, REALBOTIX says you could be having a “conversation” and relations with her come January. Happy New Year.
Caveat emptor though: one novel bonus feature you might also get with Harmony is her ability to eventually murder you in your sleep. And not because she wants to.
Dr Nick Patterson, faculty of Science Engineering and Built Technology at Deakin University in Australia is lending his voice to a slew of others warning us to slow down and be cautious as we steadily approach Westworldian levels of human verisimilitude with AI tech. Surprisingly, Patterson didn’t regurgitate the narrative we recognize from the popular sci-fi (increasingly non-fi actually) trope of a dystopian society’s futile resistance to a robocalypse. He doesn’t think Harmony will want to kill you. He thinks she’ll be hacked by a code savvy ne’er-do-well who’ll want to snuff you out instead. …
Embedded in Beaulieu’s article is another video of the same sexbot profiled earlier. Her programmer seems to have learned a thing or two (he no longer inputs any traits as you’re watching),
I guess you could get one for Christmas this year if you’re willing to wait for an early 2018 delivery and aren’t worried about hackers turning your sexbot into a killer. While the killer aspect might seem farfetched, it turns out it’s not the only sexbot/hacker issue.
Sexbots as spies
This Oct. 5, 2017 story by Karl Bode for Techdirt points out that sex toys that are ‘smart’ can easily be hacked for any reason including some mischief (Note: Links have been removed),
One “smart dildo” manufacturer was recently forced to shell out $3.75 million after it was caught collecting, err, “usage habits” of the company’s customers. According to the lawsuit, Standard Innovation’s We-Vibe vibrator collected sensitive data about customer usage, including “selected vibration settings,” the device’s battery life, and even the vibrator’s “temperature.” At no point did the company apparently think it was a good idea to clearly inform users of this data collection.
But security is also lacking elsewhere in the world of internet-connected sex toys. Alex Lomas of Pentest Partners recently took a look at the security in many internet-connected sex toys, and walked away arguably unimpressed. Using a Bluetooth “dongle” and antenna, Lomas drove around Berlin looking for openly accessible sex toys (he calls it “screwdriving,” in a riff off of wardriving). He subsequently found it’s relatively trivial to discover and hijack everything from vibrators to smart butt plugs — thanks to the way Bluetooth Low Energy (BLE) connectivity works:
“The only protection you have is that BLE devices will generally only pair with one device at a time, but range is limited and if the user walks out of range of their smartphone or the phone battery dies, the adult toy will become available for others to connect to without any authentication. I should say at this point that this is purely passive reconnaissance based on the BLE advertisements the device sends out – attempting to connect to the device and actually control it without consent is not something I or you should do. But now one could drive the Hush’s motor to full speed, and as long as the attacker remains connected over BLE and not the victim, there is no way they can stop the vibrations.”
Does that make you think twice about a sexbot?
Robots and artificial intelligence
Getting back to the Vikander article (Sept. 14, 2017), Moon or Vikander or both seem to have conflated artificial intelligence with robots in this section of the article,
As for the building blocks that have thrust these questions [care robot quandary mentioned earlier] into the spotlight, Moon explains that AI in its basic form is when a machine uses data sets or an algorithm to make a decision.
“It’s essentially a piece of output that either affects your decision, or replaces a particular decision, or supports you in making a decision.” With AI, we are delegating decision-making skills or thinking to a machine, she says.
Although we’re not currently surrounded by walking, talking, independently thinking robots, the use of AI [emphasis mine] in our daily lives has become widespread.
For Vikander, the conflation may have been due to concerns about maintaining her word count; for Moon, it may have been a matter of convenience, or a consequence of how the jargon is evolving, with ‘robot’ sometimes meaning a machine specifically, sometimes a machine with AI, and sometimes AI only.
To be precise, not all robots have AI and not all AI is found in robots. It’s a distinction that may be more important for people developing robots and/or AI but it also seems to make a difference where funding is concerned. In a March 24, 2017 posting about the 2017 Canadian federal budget I noticed this,
… The Canadian Institute for Advanced Research will receive $93.7 million [emphasis mine] to “launch a Pan-Canadian Artificial Intelligence Strategy … (to) position Canada as a world-leading destination for companies seeking to invest in artificial intelligence and innovation.”
This brings me to a recent set of meetings held in Vancouver to devise a Canadian robotics roadmap, which suggests the robotics folks feel they need specific representation and funding.
8 February – 3 September 2017, Science Museum, London
Admission: £15 adults, £13 concessions (Free entry for under 7s; family tickets available)
Tickets available in the Museum or via sciencemuseum.org.uk/robots
Supported by the Heritage Lottery Fund
Throughout history, artists and scientists have sought to understand what it means to be human. The Science Museum’s new Robots exhibition, opening in February 2017, will explore this very human obsession to recreate ourselves, revealing the remarkable 500-year story of humanoid robots.
Featuring a unique collection of over 100 robots, from a 16th-century mechanical monk to robots from science fiction and modern-day research labs, this exhibition will enable visitors to discover the cultural, historical and technological context of humanoid robots. Visitors will be able to interact with some of the 12 working robots on display. Among many other highlights will be an articulated iron manikin from the 1500s, Cygan, a 2.4m tall 1950s robot with a glamorous past, and one of the first walking bipedal robots.
Robots have been at the heart of popular culture since the word ‘robot’ was first used in 1920, but their fascinating story dates back many centuries. Set in five different periods and places, this exhibition will explore how robots and society have been shaped by religious belief, the industrial revolution, 20th century popular culture and dreams about the future.
The quest to build ever more complex robots has transformed our understanding of the human body, and today robots are becoming increasingly human, learning from mistakes and expressing emotions. In the exhibition, visitors will go behind the scenes to glimpse recent developments from robotics research, exploring how roboticists are building robots that resemble us and interact in human-like ways. The exhibition will end by asking visitors to imagine what a shared future with robots might be like. Robots has been generously supported by the Heritage Lottery Fund, with a £100,000 grant from the Collecting Cultures programme.
Ian Blatchford, Director of the Science Museum Group said: ‘This exhibition explores the uniquely human obsession of recreating ourselves, not through paint or marble but in metal. Seeing robots through the eyes of those who built or gazed in awe at them reveals much about humanity’s hopes, fears and dreams.’
‘The latest in our series of ambitious, blockbuster exhibitions, Robots explores the wondrously rich culture, history and technology of humanoid robotics. Last year we moved gigantic spacecraft from Moscow to the Museum, but this year we will bring a robot back to life.’
Today [May ?, 2016] the Science Museum launched a Kickstarter campaign to rebuild Eric, the UK’s first robot. Originally built in 1928 by Captain Richards & A.H. Reffell, Eric was one of the world’s first robots. Built less than a decade after the word robot was first used, he travelled the globe with his makers and amazed crowds in the UK, US and Europe, before disappearing forever.
Getting back to the exhibition, the Guardian’s Ian Sample has written up a Feb. 7, 2017 preview (Note: Links have been removed),
Eric the robot wowed the crowds. He stood and bowed and answered questions as blue sparks shot from his metallic teeth. The British creation was such a hit he went on tour around the world. When he arrived in New York, in 1929, a theatre nightwatchman was so alarmed he pulled out a gun and shot at him.
The curators at London’s Science Museum hope for a less extreme reaction when they open Robots, their latest exhibition, on Wednesday [Feb. 8, 2017]. The collection of more than 100 objects is a treasure trove of delights: a miniature iron man with moving joints; a robotic swan that enthralled Mark Twain; a tiny metal woman with a wager cup who is propelled by a mechanism hidden up her skirt.
The pieces are striking and must have dazzled in their day. Ben Russell, the lead curator, points out that most people would not have seen a clock when they first clapped eyes on one exhibit, a 16th century automaton of a monk [emphasis mine], who trundled along, moved his lips, and beat his chest in contrition. It was surely mesmerising to the audiences of 1560. “Arthur C Clarke once said that any sufficiently advanced technology is indistinguishable from magic,” Russell says. “Well, this is where it all started.”
In every chapter of the 500-year story, robots have held a mirror to human society. Some of the earliest devices brought the Bible to life. One model of Christ on the cross rolls his head and oozes wooden blood from his side as four figures reach up. The mechanisation of faith must have drawn the congregations as much as any sermon.
But faith was not the only focus. Through clockwork animals and human figurines, model makers explored whether humans were simply conscious machines. They brought order to the universe with orreries and astrolabes. The machines became more lighthearted in the enlightened 18th century, when automatons of a flute player, a writer, and a defecating duck all made an appearance. A century later, the style was downright rowdy, with drunken aristocrats, preening dandies and the disturbing life of a sausage from farm to mouth all being recreated as automata.
That reference to an automaton of a monk reminded me of a July 22, 2009 posting where I excerpted a passage (from another blog) about a robot priest and a robot monk,
Since 1993 Robo-Priest has been on call 24-hours a day at Yokohama Central Cemetery. The bearded robot is programmed to perform funerary rites for several Buddhist sects, as well as for Protestants and Catholics. Meanwhile, Robo-Monk chants sutras, beats a religious drum and welcomes the faithful to Hotoku-ji, a Buddhist temple in Kakogawa city, Hyogo Prefecture. More recently, in 2005, a robot dressed in full samurai armour received blessings at a Shinto shrine on the Japanese island of Kyushu. Kiyomori, named after a famous 12th-century military general, prayed for the souls of all robots in the world before walking quietly out of Munakata Shrine.
Sample’s preview takes the reader up to our own age and contemporary robots. And there is another Guardian article offering a behind-the-scenes look at the then-upcoming exhibition, a Jan. 28, 2017 piece by Jonathan Jones,
An android toddler lies on a pallet, its doll-like face staring at the ceiling. On a shelf rests a much more grisly creation that mixes imitation human bones and muscles, with wires instead of arteries and microchips in place of organs. It has no lower body, and a single Cyclopean eye. This store room is an eerie place, then it gets more creepy, as I glimpse behind the anatomical robot a hulking thing staring at me with glowing red eyes. Its plastic skin has been burned off to reveal a metal skeleton with pistons and plates of merciless strength. It is the Terminator, sent back in time by the machines who will rule the future to ensure humanity’s doom.
Backstage at the Science Museum, London, where these real experiments and a full-scale model from the Terminator films are gathered to be installed in the exhibition Robots, it occurs to me that our fascination with mechanical replacements for ourselves is so intense that science struggles to match it. We think of robots as artificial humans that can not only walk and talk but possess digital personalities, even a moral code. In short we accord them agency. Today, the real age of robots is coming, and yet even as these machines promise to transform work or make it obsolete, few possess anything like the charisma of the androids of our dreams and nightmares.
That’s why, although the robotic toddler sleeping in the store room is an impressive piece of tech, my heart leaps in another way at the sight of the Terminator. For this is a bad robot, a scary robot, a robot of remorseless malevolence. It has character, in other words. Its programmed persona (which in later films becomes much more helpful and supportive) is just one of those frightening, funny or touching personalities that science fiction has imagined for robots.
Can the real life – well, real simulated life – robots in the Science Museum’s new exhibition live up to these characters? The most impressively interactive robot in the show will be RoboThespian, who acts as compere for its final gallery displaying the latest advances in robotics. He stands at human height, with a white plastic face and metal arms and legs, and can answer questions about the value of pi and the nature of free will. “I’m a very clever robot,” RoboThespian claims, plausibly, if a little obnoxiously.
Except not quite as clever as all that. A human operator at a computer screen connected with RoboThespian by wifi is looking through its video camera eyes and speaking with its digital voice. The result is huge fun – the droid moves in very lifelike ways as it speaks, and its interactions don’t need a live operator as they can be preprogrammed. But a freethinking, free-acting robot with a mind and personality of its own, RoboThespian is not.
Our fascination with synthetic humans goes back to the human urge to recreate life itself – to reproduce the mystery of our origins. Artists have aspired to simulate human life since ancient times. The ancient Greek myth of Pygmalion, who made a statue so beautiful he fell in love with it and prayed for it to come to life, is a mythic version of Greek artists such as Pheidias and Praxiteles whose statues, with their superb imitation of muscles and movement, seem vividly alive. The sculptures of centaurs carved for the Parthenon in Athens still possess that uncanny lifelike power.
Most of the finest Greek statues were bronze, and mythology tells of metal robots that sound very much like statues come to life, including the bronze giant Talos, who was to become one of cinema’s greatest robotic monsters thanks to the special effects genius of Ray Harryhausen in Jason and the Argonauts.
Renaissance art took the quest to simulate life to new heights, with awed admirers of Michelangelo’s David claiming it even seemed to breathe (as it really does almost appear to when soft daylight casts mobile shadow on superbly sculpted ribs). So it is oddly inevitable that one of the first recorded inventors of robots was Leonardo da Vinci, consummate artist and pioneering engineer. Leonardo apparently made, or at least designed, a robot knight to amuse the court of Milan. It worked with pulleys and was capable of simple movements. Documents of this invention are frustratingly sparse, but there is a reliable eyewitness account of another of Leonardo’s automata. In 1515 he delighted Francois I, king of France, with a robot lion that walked forward towards the monarch, then released a bunch of lilies, the royal flower, from a panel that opened in its back.
One of the most uncanny androids in the Science Museum show is from Japan, a freakily lifelike female robot called Kodomoroid, the world’s first robot newscaster. With her modest downcast gaze and fine artificial complexion, she has the same fetishised femininity you might see in a Manga comic and appears to reflect a specific social construction of gender. Whether you read that as vulnerability or subservience, presumably the idea is to make us feel we are encountering a robot with real personhood. Here is a robot that combines engineering and art just as Da Vinci dreamed – it has the mechanical genius of his knight and the synthetic humanity of his perfect portrait.
I’ve never really understood the mania for digging up bodies of famous people in history and trying to ascertain how the person really died or what kind of diseases they may have had but the practice fascinates me. The latest famous person to be subjected to a forensic inquiry centuries after death is Leonardo da Vinci. A May 5, 2016 Human Evolution (journal) news release on EurekAlert provides details,
A team of eminent specialists from a variety of academic disciplines has coalesced around a goal of creating new insight into the life and genius of Leonardo da Vinci by means of authoritative new research and modern detective technologies, including DNA science.
The Leonardo Project is in pursuit of several possible physical connections to Leonardo, beaming radar, for example, at an ancient Italian church floor to help corroborate extensive research to pinpoint the likely location of the tomb of his father and other relatives. A collaborating scholar also recently announced the successful tracing of several likely DNA relatives of Leonardo living today in Italy (see endnotes).
If granted the necessary approvals, the Project will compare DNA from Leonardo’s relatives past and present with physical remnants — hair, bones, fingerprints and skin cells — associated with the Renaissance figure whose life marked the rebirth of Western civilization.
The Project’s objectives, motives, methods, and work to date are detailed in a special issue of the journal Human Evolution, published coincident with a meeting of the group hosted in Florence this week under the patronage of Eugenio Giani, President of the Tuscan Regional Council (Consiglio Regionale della Toscana).
The news release goes on to provide some context for the work,
Born in Vinci, Italy, Leonardo died in 1519, age 67, and was buried in Amboise, southwest of Paris. His creative imagination foresaw and described innovations hundreds of years before their invention, such as the helicopter and armored tank. His artistic legacy includes the iconic Mona Lisa and The Last Supper.
The idea behind the Project, founded in 2014, has inspired and united anthropologists, art historians, genealogists, microbiologists, and other experts from leading universities and institutes in France, Italy, Spain, Canada and the USA, including specialists from the J. Craig Venter Institute of California, which pioneered the sequencing of the human genome.
The work underway resembles in complexity recent projects such as the successful search for the tomb of historic author Miguel de Cervantes and the identification of England’s King Richard III from remains exhumed from beneath a UK parking lot, fittingly re-interred in March 2015, some 530 years after his death.
Like Richard, Leonardo was born in 1452, and was buried in a setting that underwent changes in subsequent years such that the exact location of the grave was lost.
If DNA and other analyses yield a definitive identification, conventional and computerized techniques might reconstruct the face of Leonardo from models of the skull.
In addition to Leonardo’s physical appearance, information potentially revealed from the work includes his ancestry and additional insight into his diet, state of health, personal habits, and places of residence.
According to the news release, the researchers have an agenda that goes beyond facial reconstruction and clues about ancestry and diet,
Beyond those questions, and the verification of Leonardo’s “presumed remains” in the chapel of Saint-Hubert at the Château d’Amboise, the Project aims to develop a genetic profile extensive enough to understand better his abilities and visual acuity, which could provide insights into other individuals with remarkable qualities.
It may also make a lasting contribution to the art world, within which forgery is a multi-billion dollar industry, by advancing a technique for extracting and sequencing DNA from other centuries-old works of art, and associated methods of attribution.
Says Jesse Ausubel, Vice Chairman of the Richard Lounsbery Foundation, sponsor of the Project’s meetings in 2015 and 2016: “I think everyone in the group believes that Leonardo, who devoted himself to advancing art and science, who delighted in puzzles, and whose diverse talents and insights continue to enrich society five centuries after his passing, would welcome the initiative of this team — indeed would likely wish to lead it were he alive today.”
The researchers aim to have the work complete by 2019,
In the journal, group members underline the highly conservative, precautionary approach required at every phase of the Project, which they aim to conclude in 2019 to mark the 500th anniversary of Leonardo’s death.
For example, one objective is to verify whether fingerprints on Leonardo’s paintings, drawings, and notebooks can yield DNA consistent with that extracted from identified remains.
Early last year, Project collaborators from the International Institute for Humankind Studies in Florence opened discussions with the laboratory in that city where Leonardo’s Adoration of the Magi has been undergoing restoration for nearly two years, to explore the possibility of analyzing dust from the painting for possible DNA traces. A crucial question is whether traces of DNA remain or whether restoration measures and the passage of time have obliterated all evidence of Leonardo’s touch.
In preparation for such analysis, a team from the J. Craig Venter Institute and the University of Florence is examining privately owned paintings believed to be of comparable age to develop and calibrate techniques for DNA extraction and analysis. At this year’s meeting in Florence, the researchers also described a pioneering effort to analyze the microbiome of a painting thought to be about five centuries old.
If human DNA can one day be obtained from Leonardo’s work and sequenced, the genetic material could then be compared with genetic information from skeletal or other remains that may be exhumed in the future.
Here’s a list of the participating organizations (from the news release),
The Institut de Paléontologie Humaine, Paris
The International Institute for Humankind Studies, Florence
The Laboratory of Molecular Anthropology and Paleogenetics, Biology Department, University of Florence
Museo Ideale Leonardo da Vinci, in Vinci, Italy
J. Craig Venter Institute, La Jolla, California
Laboratory of Genetic Identification, University of Granada, Spain
The Rockefeller University, New York City
You can find the special issue of Human Evolution (HE Vol. 31, 2016 no. 3) here. The introductory essay is open access but the other articles are behind a paywall.
One of Leonardo da Vinci’s masterpieces, drawn in red chalk on paper during the early 1500s and widely believed to be a self-portrait, is in extremely poor condition. Centuries of exposure to humid storage conditions or a closed environment have led to widespread and localized yellowing and browning of the paper, which is reducing the contrast between the colors of chalk and paper and substantially diminishing the visibility of the drawing.
A group of researchers from Italy and Poland with expertise in paper degradation mechanisms was tasked with determining whether the degradation process has now slowed with appropriate conservation conditions — or if the aging process is continuing at an unacceptable rate.
Caption: This is Leonardo da Vinci’s self-portrait as acquired during diagnostic studies carried out at the Central Institute for the Restoration of Archival and Library Heritage in Rome, Italy. Credit: M. C. Misiti/Central Institute for the Restoration of Archival and Library Heritage, Rome
… the team developed an approach to nondestructively identify and quantify the concentration of light-absorbing molecules known as chromophores in ancient paper, the culprit behind the “yellowing” of the cellulose within ancient documents and works of art.
“During the centuries, the combined actions of light, heat, moisture, metallic and acidic impurities, and pollutant gases modify the white color of ancient paper’s main component: cellulose,” explained Joanna Łojewska, a professor in the Department of Chemistry at Jagiellonian University in Krakow, Poland. “This phenomenon is known as ‘yellowing,’ which causes severe damage and negatively affects the aesthetic enjoyment of ancient art works on paper.”
Chromophores are the key to understanding the visual degradation process because they are among the chemical products developed by oxidation during aging and are, ultimately, behind the “yellowing” within cellulose. Yellowing occurs when “chromophores within cellulose absorb the violet and blue range of visible light and largely scatter the yellow and red portions — resulting in the characteristic yellow-brown hue,” said Olivia Pulci, a professor in the Physics Department at the University of Rome Tor Vergata.
To determine the degradation rate of Leonardo’s self-portrait, the team created a nondestructive approach that centers on identifying and quantifying the concentration of chromophores within paper. It involves using a reflectance spectroscopy setup to obtain optical reflectance spectra of paper samples in the near-infrared, visible, and near-ultraviolet wavelength ranges.
Once reflectance data is gathered, the optical absorption spectrum of cellulose fibers that form the sheet of paper can be calculated using special spectroscopic data analysis.
Then, computational simulations based on quantum mechanics — in particular, Time-Dependent Density Functional Theory, which plays a key role in studying optical properties in theoretical condensed matter physics — are tapped to calculate the optical absorption spectrum of chromophores in cellulose.
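The release doesn’t spell out the “special spectroscopic data analysis” used to turn reflectance into absorption, but a standard transform for diffuse reflectors such as paper is the Kubelka-Munk remission function, which converts measured reflectance R into a proxy for the absorption-to-scattering ratio K/S. Here is a minimal Python sketch of that step, using purely illustrative reflectance values (not the study’s data):

```python
import numpy as np

def kubelka_munk(reflectance):
    """Kubelka-Munk remission function F(R) = (1 - R)^2 / (2R),
    a standard proxy for the absorption-to-scattering ratio K/S
    of an opaque diffuse reflector such as paper."""
    r = np.asarray(reflectance, dtype=float)
    return (1.0 - r) ** 2 / (2.0 * r)

# Hypothetical reflectance of a yellowed paper sample (fractions of
# incident light, chosen for illustration): low reflectance (strong
# absorption) in the violet/blue, high reflectance in the red.
wavelengths_nm = np.array([400, 450, 500, 550, 600, 650, 700])
reflectance = np.array([0.35, 0.45, 0.60, 0.72, 0.80, 0.84, 0.86])

ks = kubelka_munk(reflectance)
# Chromophores absorb violet/blue light, so K/S should be highest
# at the short-wavelength end for yellowed paper.
assert ks[0] > ks[-1]
```

Tracking how such an absorption curve grows at short wavelengths over repeated measurements is one way the concentration of chromophores, and hence the rate of yellowing, could be monitored over time.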
“Using our approach, we were able to evaluate the state of degradation of Leonardo da Vinci’s self-portrait and other paper specimens from ancient books dating from the 15th century,” said Adriano Mosca Conte, a researcher at the University of Rome Tor Vergata. “By comparing the results of ancient papers with those of artificially aged samples, we gained significant insights into the environmental conditions in which Leonardo da Vinci’s self-portrait was stored during its lifetime.”
Their work revealed that the type of chromophores present in Leonardo’s self-portrait are “similar to those found in ancient and modern paper samples aged in extremely humid conditions or within a closed environment, which agrees with its documented history,” said Mauro Missori, a researcher at the Institute for Complex Systems, CNR, in Rome, Italy.
One of the most significant implications of their work is that the state of degradation of ancient paper can be measured and quantified by evaluation of the concentrations of chromophores in cellulose fibers. “The periodic repetition of our approach is fundamental to establishing the formation rate of chromophores within the self-portrait. Now our approach can serve as a precious tool to preserve and save not only this invaluable work of art, but others as well,” Conte noted.
Absolutely fascinating stuff to those of us who care about yellowing paper. (Having worked in an archives, I care deeply.) Here’s a link to and a citation for the study,