Mildred Dresselhaus (Queen of Carbon) gets a book

She died in 2017 and left behind a legacy many would envy. From a March 8, 2022 book review by Jess Wade for Physics World (Note: Links have been removed),

Mildred Dresselhaus, materials-science pioneer and nanotechnology trailblazer, should be a household name. Her contributions to science were immense: unravelling the electronic structure of carbon and paving the way for the discovery of fullerenes, carbon nanotubes and graphene. She was the first woman to be appointed Institute Professor at the Massachusetts Institute of Technology (MIT), which is the highest title that is awarded there. She was also the first woman to win a National Medal of Science in the category of engineering (awarded by the US president) and the first individual winner of the Kavli Prize in Nanoscience.

Dresselhaus’ resilience and determination meant that she succeeded in a world that was not welcoming to her. At the time, a lot of people still believed that “a woman’s place is in the home”. Her contributions to nanoscience were nothing short of incredible. She studied thermoelectric materials, as well as the magnetic, optical and electrical properties of semimetals, creating novel nanomaterials that provided the foundation for lithium-ion batteries, fullerenes and carbon nanotubes. Her attention to detail and creativity allowed her to formulate the design rules for nanomaterials, with a focus on sustainability.

Now, there is a book, “Carbon Queen: The Remarkable Life of Nanoscience Pioneer Mildred Dresselhaus” (2022) by Maia Weinstock. Slate.com features a March 13, 2022 posting of an excerpt from the book,

The late 1940s encompassed a unique period for women in science in the United States. After scores of women had entered scientific, technological, engineering, and mathematical fields for the first time to support the war effort, American women were routinely discouraged from pursuing STEM [science, technology, engineering, and mathematics] careers in the postwar era. Many top colleges and universities refused to admit women as students until the late 1960s or early 1970s. Women of color were particularly hard to find in labs and in scientific journals during the mid-twentieth century.

This was the climate in which Mildred “Millie” Dresselhaus found herself when she first enrolled as an undergraduate at Hunter College in New York City in 1948. Dresselhaus would eventually become a decorated MIT physicist, making highly influential discoveries about the properties of materials. Based on her far-reaching foundational research, scientists and engineers have made enormous advances at the nanoscale—discovering structures like spherical carbon “buckyballs,” cylindrical carbon nanotubes, and 2D carbon sheets known as graphene that have made products from aircraft to cellphones stronger, lighter, and more efficient. …

There are earlier postings here about Mildred Dresselhaus and her work, the last being an RIP posting in 2017.

Biohybrid fish made from human cardiac cells could lead to artificial hearts

Biohybrid fish on a hook (Photo credit: Michael Rosnach, Keel Yong Lee, Sung-Jin Park, Kevin Kit Parker)

A February 10, 2022 news item on ScienceDaily announces research on a biohybrid fish,

Harvard University researchers, in collaboration with colleagues from Emory University, have developed the first fully autonomous biohybrid fish from human stem-cell derived cardiac muscle cells. The artificial fish swims by recreating the muscle contractions of a pumping heart, bringing researchers one step closer to developing a more complex artificial muscular pump and providing a platform to study heart disease like arrhythmia.

A February 10, 2022 Harvard University John A. Paulson School of Engineering and Applied Sciences news release (also on EurekAlert) by Leah Burrows explains how this research could lead to an artificial heart (Note: Links have been removed),

“Our ultimate goal is to build an artificial heart to replace a malformed heart in a child,” said Kit Parker, the Tarr Family Professor of Bioengineering and Applied Physics at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and senior author of the paper.  “Most of the work in building heart tissue or hearts, including some work we have done, is focused on replicating the anatomical features or replicating the simple beating of the heart in the engineered tissues. But here, we are drawing design inspiration from the biophysics of the heart, which is harder to do. Now, rather than using heart imaging as a blueprint, we are identifying the key biophysical principles that make the heart work, using them as design criteria, and replicating them in a system, a living, swimming fish, where it is much easier to see if we are successful.”

The research is published in Science.

The biohybrid fish developed by the team builds off previous research from Parker’s Disease Biophysics Group. In 2012, the lab used cardiac muscle cells from rats to build a jellyfish-like biohybrid pump and in 2016 the researchers developed a swimming, artificial stingray also from rat heart muscle cells.

In this research, the team built the first autonomous biohybrid device made from human stem-cell derived cardiomyocytes. This device was inspired by the shape and swimming motion of a zebrafish. Unlike previous devices, the biohybrid zebrafish has two layers of muscle cells, one on each side of the tail fin. When one side contracts, the other stretches. That stretch triggers the opening of a mechanosensitive protein channel, which causes a contraction, which triggers a stretch and so on and so forth, leading to a closed loop system that can propel the fish for more than 100 days. 
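(An aside from me: that closed loop is easy to get an intuition for with a toy simulation. Everything below — the threshold, the "pull" value, the two-sided bookkeeping — is made up for illustration and is not the researchers' model; it only mimics the stretch-triggers-contraction cycle described above.)

```python
# Toy discrete-time sketch of the antagonistic closed loop: two muscle
# layers sit on opposite sides of the tail fin; a contraction on one side
# stretches the other, and stretch past a threshold triggers that side's
# own contraction. All names and numbers are illustrative.

STRETCH_THRESHOLD = 0.5  # stretch level that "opens the channel" (arbitrary units)
CONTRACTION_PULL = 1.0   # how much one side's contraction stretches the other

def simulate(steps):
    """Return the sequence of sides ('L' or 'R') that contract, one per step."""
    stretch = {"L": 0.0, "R": STRETCH_THRESHOLD + 0.1}  # right side starts stretched
    history = []
    for _ in range(steps):
        for side, other in (("L", "R"), ("R", "L")):
            if stretch[side] > STRETCH_THRESHOLD:
                stretch[side] = 0.0                 # contracting relieves this side's stretch
                stretch[other] += CONTRACTION_PULL  # ...and stretches the opposite layer
                history.append(side)
                break
    return history

print(simulate(6))  # → ['R', 'L', 'R', 'L', 'R', 'L']: self-sustaining alternation
```

Once one side is stretched past threshold, the loop needs no external trigger: each contraction sets up the next one on the opposite side, which is the "closed loop" that keeps the fish swimming.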

“By leveraging cardiac mechano-electrical signaling between two layers of muscle, we recreated the cycle where each contraction results automatically as a response to the stretching on the opposite side,” said Keel Yong Lee, a postdoctoral fellow at SEAS and co-first author of the study. “The results highlight the role of feedback mechanisms in muscular pumps such as the heart.”

The researchers also engineered an autonomous pacing node, like a pacemaker, which controls the frequency and rhythm of these spontaneous contractions. Together, the two layers of muscle and the autonomous pacing node enabled the generation of continuous, spontaneous, and coordinated, back-and-forth fin movements.

“Because of the two internal pacing mechanisms, our fish can live longer, move faster and swim more efficiently than previous work,” said Sung-Jin Park, a former postdoctoral fellow in the Disease Biophysics Group at SEAS and co-first author of the study. “This new research provides a model to investigate mechano-electrical signaling as a therapeutic target of heart rhythm management and for understanding pathophysiology in sinoatrial node dysfunctions and cardiac arrhythmia.”

Park is currently an Assistant Professor at the Coulter Department of Biomedical Engineering at Georgia Institute of Technology and Emory University School of Medicine.

Unlike a fish in your refrigerator, this biohybrid fish improves with age. Its muscle contraction amplitude, maximum swimming speed, and muscle coordination all increased for the first month as the cardiomyocyte cells matured.  Eventually, the biohybrid fish reached speeds and swimming efficacy similar to zebrafish in the wild. 

Next, the team aims to build even more complex biohybrid devices from human heart cells. 

“I could build a model heart out of Play-Doh, it doesn’t mean I can build a heart,” said Parker. “You can grow some random tumor cells in a dish until they curdle into a throbbing lump and call it a cardiac organoid. Neither of those efforts is going to, by design, recapitulate the physics of a system that beats over a billion times during your lifetime while simultaneously rebuilding its cells on the fly. That is the challenge. That is where we go to work.”

The research was co-authored by David G. Matthews, Sean L. Kim, Carlos Antonio Marquez, John F. Zimmerman, Herdeline Ann M. Ardona, Andre G. Kleber and George V. Lauder. 

It was supported in part by National Institutes of Health National Center for Advancing Translational Sciences grant UH3TR000522, and National Science Foundation Materials Research Science and Engineering Center grant DMR-142057.

Before giving you a link and a citation for the paper, here’s a little more information about the work from a February 10, 2022 American Association for the Advancement of Science (AAAS) news release on EurekAlert announcing publication of the paper in their journal Science, Note: A link has been removed,

An autonomously swimming biohybrid fish, designed with a focus on two key regulatory features of the human heart, has revealed the importance of feedback mechanisms in muscular pumps (such as the heart). The findings could one day help inform the development of an artificial heart made from living muscle cells.

Biohybrid systems – devices containing both biological and artificial components – are an effective way to investigate the physiological control mechanisms in biological organisms and to discover bio-inspired robotic solutions to a host of pressing concerns, including those related to human health. When it comes to natural fluid transport pumps, like those that circulate blood, the performance of biohybrid systems has been lacking, however.

Here, researchers considered whether two functional regulatory features of the heart — mechanoelectrical signaling and automaticity — could be transferred to a synthetic analog of another fluid transport system: a swimming fish. Lee et al. developed an autonomously swimming fish constructed from a bilayer of human cardiac cells; the muscular bilayer was integrated using tissue engineering techniques. Lee and team were able to control muscle contractions in the biohybrid fish using external optogenetic stimulation, allowing the fish analog to swim. In tests, the biohybrid fish outperformed the locomotory speed of previous biohybrid muscular systems, the authors say. It maintained spontaneous activity for 108 days. By contrast, say the authors, biohybrid fish equipped with single-layered muscle showed deteriorating activity within the first month.

The data in this study demonstrate the potential of muscular bilayer systems and mechanoelectrical signaling as a means to promote maturation of in vitro muscle tissues, write Lee and colleagues.
“Taken together,” the authors conclude, “the technology described here may represent foundational work toward the goal of creating autonomous systems capable of homeostatic regulation and adaptive behavioral control.”

For reporters interested in trends, this work builds upon previous work published in a July 2016 study in Science, in which Sung-jin Park et al. used cardiac cells from rats to develop a self-propelling ray fish analog.

Here’s a link to and a citation for the paper,

An autonomously swimming biohybrid fish designed with human cardiac biophysics by Keel Yong Lee, Sung-Jin Park, David G. Matthews, Sean L. Kim, Carlos Antonio Marquez, John F. Zimmerman, Herdeline Ann M. Ardoña, Andre G. Kleber, George V. Lauder and Kevin Kit Parker. Science • 10 Feb 2022 • Vol 375, Issue 6581 • pp. 639-647 • DOI: 10.1126/science.abh0474

This paper is behind a paywall.

Illustrating math at the University of Saskatchewan (Canada)

Art and math intersect in Dr. Steven Rayan’s work on quantum materials at the University of Saskatchewan (USask).

An illustration by Elliot Kienzle (undergraduate research assistant, quanTA Centre, USask) of a hyperbolic crystal in action

A May 2, 2022 USask news release (also received via email) describes Rayan’s work in more detail,

Art and mathematics may go hand-in-hand when building new and better materials for use in quantum computing and other quantum applications, according to University of Saskatchewan (USask) mathematician Dr. Steven Rayan (PhD).

Quantum materials are what futuristic dreams are made of. Such materials are able to efficiently conduct and insulate electric currents – the everyday equivalent of never having a lightbulb flicker. Quantum materials may be the fabric of tomorrow’s supercomputers, ones that can quickly and accurately analyze and solve problems to a degree far beyond what was previously thought possible.

“Before the 1700s, people were amazed that metals could be melted down and reshaped to suit their needs, be it the need for building materials or for tools. There was no thought that, perhaps, metals were capable of something much more — such as conducting electricity,” said Rayan, an associate professor of mathematics and statistics in the USask College of Arts and Science who also serves as the director of the USask Centre for Quantum Topology and its Applications (quanTA).

“Today, we’re at a similar juncture. We may be impressed with what materials are capable of right now, but tomorrow’s materials will redefine our expectations. We are standing at a doorway and on the other side of it is a whole new world of materials capable of things that we previously could not imagine.”

Many conducting materials exhibit a crystal-like structure that consists of tiny cells repeating over and over. Previous research published in Science Advances had highlighted Rayan and University of Alberta physicist Dr. Joseph Maciejko’s (PhD) success in defining a new type of quantum material that does not follow a typical crystal structure but instead consists of “hyperbolic” crystals that are warped and curved. 
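(A quick aside for anyone wondering what makes a crystal "hyperbolic": a standard geometric criterion — not specific to this work — separates tilings that fit the flat plane from ones that need a curved, hyperbolic plane. Writing {p, q} for a tiling by p-sided cells with q cells meeting at each corner:)

```latex
% {p,q}: tiling by p-sided cells, q cells meeting at each corner
\frac{1}{p} + \frac{1}{q} = \frac{1}{2}
  \quad \text{(flat, Euclidean plane: e.g. squares } \{4,4\}\text{, hexagons } \{6,3\})
\qquad
\frac{1}{p} + \frac{1}{q} < \frac{1}{2}
  \quad \text{(hyperbolic plane: e.g. octagons } \{8,3\}, \{8,8\})
```

Squares and hexagons satisfy the first condition and tile an ordinary flat sheet; octagonal cells satisfy the second and only fit together on a negatively curved surface, which is the "warped and curved" geometry referred to above.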

“This is an immense paradigm shift in the understanding of what it means to be a ‘material’,” said Rayan.

It is expected that hyperbolic materials will exhibit the perfect conductivity of current quantum materials, but at slightly higher temperatures. Today’s quantum materials often need to be supercooled to extremely low temperatures to reach their full potential. Maintaining such temperatures is an obstacle to implementing widespread quantum computing, which has the potential to impact information security, drug design, vaccine development, and other crucial tasks. Hyperbolic materials may be part of the solution to this problem.

Hyperbolic materials may also be the key to new types of sensors and medical imaging devices, such as magnetic resonance imaging (MRI) machines that take advantage of quantum effects in order to be more lightweight for use in rural or remote environments.

USask recently named Quantum Innovation as one of its three new signature areas of research [Note: Link removed] to respond to emerging questions and needs in the pursuit of new knowledge.

“All of this comes at the right time, as new technologies like quantum computers, quantum sensors, and next-generation fuel cells are putting new demands on materials and exposing the limits of existing components,” said Rayan.

This year has seen two new articles by Rayan and co-authors extending previous research on hyperbolic materials. The first is written with Maciejko and appears in the prestigious journal Proceedings of the National Academy of Sciences (PNAS). The second was written with University of Maryland undergraduate student Elliot Kienzle, who served as a USask quanTA research assistant under Rayan’s supervision in the summer of 2021.

In these two articles, the power of mathematics used to study quantum and hyperbolic crystals is significantly extended through the use of tools from geometry. These tools have not typically been applied to the study of materials. The results will make it much easier for scientists experimenting with hyperbolic materials to make accurate predictions about how they will behave as electrical conductors.

Reflecting on the initial breakthrough of considering hyperbolic geometry rather than ordinary geometry, Rayan said, “What is interesting is that these warped crystals have appeared in mathematics for over 100 years as well as in art – for instance, in the beautiful woodcuts of M.C. Escher – and it is very satisfying to see these ideas practically applied in science.”

The work also intersects with art in another way. The article with Kienzle, which was released in pre-publication form on February 1, 2022 [sic], was accompanied by exclusive hand drawings provided by Kienzle. With concepts in mathematics and physics often being difficult to visualize, the artwork helps the work to come to life and invites everyone to learn about the function and power of quantum materials. 

The artwork, which is unusual for mathematics or physics papers, has garnered a lot of positive attention on social media.

“Elliot is tremendously talented not only as an emerging researcher in mathematics and physics, but also as an artist,” said Rayan. “His illustrations have added a new dimension to our work, and I hope that this is the start of a new trend in these types of papers where the quality and creativity of illustrations are as important as the correctness of equations.”

Here are links to and citations for both of Rayan’s most recent papers,

Hyperbolic band theory through Higgs bundles by Elliot Kienzle and Steven Rayan. arXiv:2201.12689 (or arXiv:2201.12689v1 [math-ph] for this version) DOI: https://doi.org/10.48550/arXiv.2201.12689 Submitted on 30 Jan 2022

This paper is open access and open for peer review.

Automorphic Bloch theorems for hyperbolic lattices by Joseph Maciejko and Steven Rayan. PNAS February 25, 2022 | 119 (9) e2116869119 DOI: https://doi.org/10.1073/pnas.2116869119

This peer-reviewed paper is behind a paywall.

AI & creativity events for August and September 2022 (mostly)

This information about these events and papers comes courtesy of the Metacreation Lab for Creative AI (artificial intelligence) at Simon Fraser University and, as usual for the lab, the emphasis is on music.

Music + AI Reading Group @ Mila x Vector Institute

Philippe Pasquier, Metacreation Lab director and professor, is giving a presentation on Friday, August 12, 2022 at 11 am PST (2 pm EST). Here’s more from the August 10, 2022 Metacreation Lab announcement (received via email),

Metacreation Lab director Philippe Pasquier and PhD researcher Jeff Enns will be presenting next week [tomorrow, on August 12, 2022] at the Music + AI Reading Group hosted by Mila. The presentation will be available as a Zoom meeting.

Mila is a community of more than 900 researchers specializing in machine learning and dedicated to scientific excellence and innovation. The institute is recognized for its expertise and significant contributions in areas such as modelling language, machine translation, object recognition and generative models.

I believe it’s also possible to view the presentation from the “Music + AI Reading Group at MILA: presentation by Dr. Philippe Pasquier” webpage on the Simon Fraser University website.

For anyone curious about Mila – Québec Artificial Intelligence Institute (based in Montréal) and the Vector Institute for Artificial Intelligence (based in Toronto), both are part of the Pan-Canadian Artificial Intelligence Strategy (a Canadian federal government funding initiative).

Getting back to the Music + AI Reading Group @ Mila x Vector Institute, there is an invitation to join the group which meets every Friday at 2 pm EST, from the Google group page,

Feb 24, 2022, to Community Announcements: 🎹🧠🚨Online Music + AI Reading Group @ Mila x Vector Institute 🎹🧠🚨

Dear members of the ISMIR [International Society for Music Information Retrieval] Community,

Together with fellow researchers at Mila (the Québec AI Institute) in Montréal, canada [sic], we have the pleasure of inviting you to join the Music + AI Reading Group @ Mila x Vector Institute. Our reading group gathers every Friday at 2pm Eastern Time. Our purpose is to build an interdisciplinary forum of researchers, students and professors alike, across industry and academia, working at the intersection of Music and Machine Learning. 

During each meeting, a speaker presents a research paper of their choice for 45 minutes, leaving 15 minutes for questions and discussion. The purpose of the reading group is to:
– Gather a group of Music+AI/HCI [human-computer interface]/other people to share their research, build collaborations, and meet peer students. We are not constrained to any specific research directions, and all people are welcome to contribute.
– Share research ideas and brainstorm with others.
– Let researchers not actively working on music-related topics but interested in the field join and keep up with the latest research in the area, sharing their thoughts and bringing in their own backgrounds.

Our topics of interest cover (beware: the list is not exhaustive!):
🎹 Music Generation
🧠 Music Understanding
📇 Music Recommendation
🗣  Source Separation and Instrument Recognition
🎛  Acoustics
🗿 Digital Humanities …
🙌  … and more (we are waiting for you :]) !


If you wish to attend one of our upcoming meetings, simply join our Google Group: https://groups.google.com/g/music_reading_group. You will automatically subscribe to our weekly mailing list and be able to contact other members of the group.

Here is the link to our YouTube Channel, where you’ll find recordings of our past meetings: https://www.youtube.com/channel/UCdrzCFRsIFGw2fiItAk5_Og.
Here is general information about the reading group (presentation slides): https://docs.google.com/presentation/d/1zkqooIksXDuD4rI2wVXiXZQmXXiAedtsAqcicgiNYLY/edit?usp=sharing.

Finally, if you would like to contribute and give a talk about your own research, feel free to fill in the following spreadsheet in the slot of your choice: https://docs.google.com/spreadsheets/d/1skb83P8I30XHmjnmyEbPAboy3Lrtavt_jHrD-9Q5U44/edit?usp=sharing

Bravo to the two student organizers for putting this together!

Calliope Composition Environment for music makers

From the August 10, 2022 Metacreation Lab announcement,

Calling all music makers! We’d like to share some exciting news on one of the latest music creation tools from its creators.

Calliope is an interactive environment based on MMM for symbolic music generation in computer-assisted composition. Using this environment, the user can generate or regenerate symbolic music from a “seed” MIDI file by using a practical and easy-to-use graphical user interface (GUI). Through MIDI streaming, the system can interface with your favourite DAW (Digital Audio Workstation), such as Ableton Live, allowing creators to combine the possibilities of generative composition with their preferred virtual instruments and sound-design environments.

The project has now entered an open beta-testing phase, and music creators are invited to try the compositional system for themselves! Head to the Metacreation website to learn more and register for the beta testing.

Learn More About Calliope Here

You can also listen to a Calliope piece “the synthrider,” an Italo-disco fantasy of a machine, by Philippe Pasquier and Renaud Bougueng Tchemeube for the 2022 AI Song Contest.
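(For the curious, here's a deliberately tiny sketch of what "generate from a seed" means in symbolic music. This is NOT Calliope's MMM model — that is a machine-learning system, while everything below is just a first-order note-to-note toy in plain Python — but it shows the shape of the workflow: learn from a seed melody, then sample a continuation.)

```python
# Toy "regenerate from a seed" for symbolic music: learn note-to-note
# transitions from a seed melody (as MIDI note numbers, where 60 = middle C)
# and sample a continuation. Purely illustrative; not Calliope's model.
import random
from collections import defaultdict

def learn_transitions(seed_notes):
    """Record which notes follow which in the seed."""
    table = defaultdict(list)
    for a, b in zip(seed_notes, seed_notes[1:]):
        table[a].append(b)
    return table

def regenerate(seed_notes, length, rng=random.Random(0)):
    """Sample `length` new notes, continuing from the seed's last note."""
    table = learn_transitions(seed_notes)
    note = seed_notes[-1]
    out = []
    for _ in range(length):
        note = rng.choice(table.get(note, seed_notes))  # fall back to the seed
        out.append(note)
    return out

seed = [60, 62, 64, 65, 64, 62, 60]  # a C-major fragment as MIDI note numbers
print(regenerate(seed, 8))  # eight new notes drawn from the seed's transitions
```

A real system like Calliope would then stream the resulting notes out as MIDI to a DAW; the point here is only the seed-then-continue loop at the heart of computer-assisted composition.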

3rd Conference on AI Music Creativity (AIMC 2022)

This is an online conference; it’s free, but you do have to register. From the August 10, 2022 Metacreation Lab announcement,

Registration has opened for the 3rd Conference on AI Music Creativity (AIMC 2022), which will be held September 13-15, 2022. The conference features 22 accepted papers, 14 music works, and 2 workshops. Registered participants will get full access to the scientific and artistic program, as well as conference workshops and virtual social events.

The full conference program is now available online

Registration, free but mandatory, is available here:

Free Registration for AIMC 2022 

The conference theme is “The Sound of Future Past — Colliding AI with Music Tradition” and I noticed that a number of the organizers are based in Japan. Often, the organizers’ home country gets some extra time in the spotlight, which is what makes these international conferences so interesting and valuable.

Autolume Live

This concerns generative adversarial networks (GANs) and a paper proposing “… Autolume-Live, the first GAN-based live VJing-system for controllable video generation.”

Here’s more from the August 10, 2022 Metacreation Lab announcement,

Jonas Kraasch & Philippe Pasquier recently presented their latest work on the Autolume system at xCoAx, the 10th annual Conference on Computation, Communication, Aesthetics & X. Their paper is an in-depth exploration of the ways that creative artificial intelligence is increasingly used to generate static and animated visuals.

While there are a host of systems to generate images, videos and music videos, there is a lack of real-time video synthesisers for live music performances. To address this gap, Kraasch and Pasquier propose Autolume-Live, the first GAN-based live VJing-system for controllable video generation.

Autolume Live on xCoAx proceedings  

As these things go, the paper is readable even by nonexperts (assuming you have some tolerance for being out of your depth from time to time). Here’s an example of the text and an installation (in Kelowna, BC) from the paper, Autolume-Live: Turning GANs into a Live VJing tool,

Due to the 2020-2022 situation surrounding COVID-19, we were unable to use our system to accompany live performances. We have used different iterations of Autolume-Live to create two installations. We recorded some curated sessions and displayed them at the Distopya sound art festival in Istanbul 2021 (Dystopia Sound and Art Festival 2021) and Light-Up Kelowna 2022 (ARTSCO 2022) [emphasis mine]. In both iterations, we let the audio mapping automatically generate the video without using any of the additional image manipulations. These installations show that the system on its own is already able to generate interesting and responsive visuals for a musical piece.

For the installation at the Distopya sound art festival we trained a StyleGAN2(-ada) model on abstract paintings and rendered a video using the described Latent Space Traversal mapping. For this particular piece we ran a super-resolution model on the final video, as the original video output was in 512×512 and the wanted resolution was 4K. For our piece at Light-Up Kelowna [emphasis mine] we ran Autolume-Live with the Latent Space Interpolation mapping. The display included three urban screens, which allowed us to showcase three renders at the same time. We composed a video triptych using a dataset of figure drawings, a dataset of medical sketches and, to tie the two videos together, a model trained on a mixture of both datasets.
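(Here's my attempt at showing the arithmetic behind a "Latent Space Interpolation mapping" in a few lines of library-free Python. The vector size, the fake audio envelope, and every number below are illustrative; this is not Autolume-Live's implementation, just the core idea of audio-reactive GAN visuals.)

```python
# Minimal sketch of latent-space interpolation for audio-reactive visuals:
# an audio level in [0, 1] picks a point on the line between two latent
# vectors, and a GAN generator (not shown) would turn that point into a
# video frame. Everything here is illustrative.
import math
import random

LATENT_DIM = 8  # real GANs use hundreds of dimensions (e.g. 512 in StyleGAN2)

def lerp(z0, z1, t):
    """Linearly interpolate between two latent vectors; t in [0, 1]."""
    return [(1.0 - t) * a + t * b for a, b in zip(z0, z1)]

rng = random.Random(42)
z_start = [rng.gauss(0, 1) for _ in range(LATENT_DIM)]
z_end = [rng.gauss(0, 1) for _ in range(LATENT_DIM)]

# Fake audio envelope: a slow sine wave standing in for a track's loudness.
for frame in range(4):
    level = 0.5 * (1.0 + math.sin(frame / 4.0 * 2.0 * math.pi))  # in [0, 1]
    z = lerp(z_start, z_end, level)
    # generator(z) would be rendered here; we just show the drive signal
    print(f"frame {frame}: audio level {level:.2f}")
```

Loud passages push the latent point toward one end of the line, quiet ones toward the other, so the imagery morphs in time with the music, which is essentially what "responsive visuals" means in the quoted passage.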

I found some additional information about the installation in Kelowna (from a February 7, 2022 article in The Daily Courier),

The artwork is called ‘Autolume Acedia’.

“(It) is a hallucinatory meditation on the ancient emotion called acedia. Acedia describes a mixture of contemplative apathy, nervous nostalgia, and paralyzed angst,” the release states. “Greek monks first described this emotion two millennia ago, and it captures the paradoxical state of being simultaneously bored and anxious.”

Algorithms created the set-to-music artwork but a team of humans associated with Simon Fraser University, including Jonas Kraasch and Philippe Pasquier, was behind the project.

These are among the artistic images generated by a form of artificial intelligence now showing nightly on the exterior of the Rotary Centre for the Arts in downtown Kelowna. [downloaded from https://www.kelownadailycourier.ca/news/article_6f3cefea-886c-11ec-b239-db72e804c7d6.html]

You can find the videos used in the installation and more information on the Metacreation Lab’s Autolume Acedia webpage.

Movement and the Metacreation Lab

Here’s a walk down memory lane: Tom Calvert, a professor at Simon Fraser University (SFU) who died on September 28, 2021, laid the groundwork for SFU’s School of Interactive Arts & Technology (SIAT) and, in particular, its studies in movement. From SFU’s In memory of Tom Calvert webpage,

As a researcher, Tom was most interested in computer-based tools for user interaction with multimedia systems, human figure animation, software for dance, and human-computer interaction. He made significant contributions to research in these areas resulting in the Life Forms system for human figure animation and the DanceForms system for dance choreography. These are now developed and marketed by Credo Interactive Inc., a software company of which he was CEO.

While the Metacreation Lab is largely focused on music, other fields of creativity are also studied, from the August 10, 2022 Metacreation Lab announcement,

MITACS Accelerate award – partnership with Kinetyx

We are excited to announce that the Metacreation Lab researchers will be expanding their work on motion capture and movement data thanks to a new MITACS Accelerate research award. 

The project will focus on body pose estimation using motion-capture data acquisition through a partnership with Kinetyx, a Calgary-based innovative technology firm that develops in-shoe sensor-based solutions for a broad range of sports and performance applications.

Movement Database – MoDa

On the subject of motion data and its many uses in conjunction with machine learning and AI, we invite you to check out the extensive Movement Database (MoDa), led by transdisciplinary artist and scholar Shannon Cuykendall and AI researcher Omid Alemi.

Spanning a wide range of categories such as dance, affect-expressive movements, gestures, eye movements, and more, this database offers a wealth of experiments and captured data available in a variety of formats.

Explore the MoDa Database

MITACS (originally a federal government mathematics-focused Networks of Centres of Excellence program) is now a funding agency for innovation; most of the funds it distributes come from the federal government.

As for the Calgary-based company (in the province of Alberta for those unfamiliar with Canadian geography), here they are in their own words (from the Kinetyx About webpage),

Kinetyx® is a diverse group of talented engineers, designers, scientists, biomechanists, communicators, and creators, along with an energy trader, and a medical doctor that all bring a unique perspective to our team. A love of movement and the science within is the norm for the team, and we’re encouraged to put our sensory insoles to good use. We work closely together to make movement mean something.

We’re working towards a future where movement is imperceptibly quantified and indispensably communicated with insights that inspire action. We’re developing sensory insoles that collect high-fidelity data where the foot and ground intersect. Capturing laboratory quality data, out in the real world, unlocking entirely new ways to train, study, compete, and play. The insights we provide will unlock unparalleled performance, increase athletic longevity, and provide a clear path to return from injury. We transform lives by empowering our growing community to remain moved.

We believe that high quality data is essential for us to have a meaningful place in the Movement Metaverse [1]. Our team of engineers, sport scientists, and developers work incredibly hard to ensure that our insoles and the insights we gather from them will meet or exceed customer expectations. The forces that are created and experienced while standing, walking, running, and jumping are inferred by many wearables, but our sensory insoles allow us to measure, in real-time, what’s happening at the foot-ground intersection. Measurements of force and power in addition to other traditional gait metrics, will provide a clear picture of a part of the Kinesome [2] that has been inaccessible for too long. Our user interface will distill enormous amounts of data into meaningful insights that will lead to positive behavioral change. 

[1] The Movement Metaverse is the collection of ever-evolving immersive experiences that seamlessly span both the physical and virtual worlds with unprecedented interoperability.

[2] Kinesome is the dynamic characterization and quantification encoded in an individual’s movement and activity. Broadly: an individual’s unique and dynamic movement profile. View the kinesome nft. [Note: I was not able to successfully open the link as of August 11, 2022.]

“… make movement mean something … .” Really?

The reference to “… energy trader …” had me puzzled but an August 11, 2022 Google search at 11:53 am PST unearthed this,

An energy trader is a finance professional who manages the sales of valuable energy resources like gas, oil, or petroleum. An energy trader is expected to handle energy production and financial matters in such a fast-paced workplace. (May 16, 2022)

Perhaps a new meaning for the term is emerging?

AI and visual art show in Vancouver (Canada)

The Vancouver Art Gallery’s (VAG) latest exhibition, “The Imitation Game: Visual Culture in the Age of Artificial Intelligence” is running March 5, 2022 – October 23, 2022. Should you be interested in an exhaustive examination of the exhibit and more, I have a two-part commentary: Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (1 of 2): The Objects and Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (2 of 2): Meditations.

Enjoy the show and/or the commentary, as well as any of the other events and opportunities listed in this post.

We have math neurons and singing neurons?

According to the two items I have here, the answer is: yes, we have neurons that are specific to math and to the sound of singing.

Math neurons

A February 14, 2022 news item on ScienceDaily explains how specific the math neurons are,

The brain has neurons that fire specifically during certain mathematical operations. This is shown by a recent study conducted by the Universities of Tübingen and Bonn [both in Germany]. The findings indicate that some of the neurons detected are active exclusively during additions, while others are active during subtractions. They do not care whether the calculation instruction is written down as a word or a symbol. The results have now been published in the journal Current Biology.

Using ultrafine electrodes implanted in the temporal lobes of epilepsy patients, researchers can visualize the activity of brain regions. © Photo: Christian Burkert/Volkswagen-Stiftung/University of Bonn

A February 14, 2022 University of Bonn press release (also on EurekAlert), which originated the news item, delves further,

Most elementary school children probably already know that three apples plus two apples add up to five apples. However, what happens in the brain during such calculations is still largely unknown. The current study by the Universities of Bonn and Tübingen now sheds light on this issue.

The researchers benefited from a special feature of the Department of Epileptology at the University Hospital Bonn. It specializes in surgical procedures on the brains of people with epilepsy. In some patients, seizures always originate from the same area of the brain. In order to precisely localize this defective area, the doctors implant several electrodes into the patients. The probes can be used to precisely determine the origin of the spasm. In addition, the activity of individual neurons can be measured via the wiring.

Some neurons fire only when summing up

Five women and four men participated in the current study. They had electrodes implanted in the so-called temporal lobe of the brain to record the activity of nerve cells. Meanwhile, the participants had to perform simple arithmetic tasks. “We found that different neurons fired during additions than during subtractions,” explains Prof. Florian Mormann from the Department of Epileptology at the University Hospital Bonn.

It was not the case that some neurons responded only to a “+” sign and others only to a “-” sign: “Even when we replaced the mathematical symbols with words, the effect remained the same,” explains Esther Kutter, who is doing her doctorate in Prof. Mormann’s research group. “For example, when subjects were asked to calculate ‘5 and 3’, their addition neurons sprang back into action; whereas for ‘7 less 4,’ their subtraction neurons did.”

This shows that the cells discovered actually encode a mathematical instruction for action. The brain activity thus showed with great accuracy what kind of tasks the test subjects were currently calculating: The researchers fed the cells’ activity patterns into a self-learning computer program. At the same time, they told the software whether the subjects were currently calculating a sum or a difference. When the algorithm was confronted with new activity data after this training phase, it was able to accurately identify during which computational operation it had been recorded.
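The decoding step the release describes (training a program on the cells’ activity patterns, then testing it on new recordings) can be illustrated with a toy sketch in Python. Everything below is my own illustrative assumption rather than the study’s actual data or algorithm: I simulate a few operation-selective neurons and decode with a simple nearest-centroid rule under leave-one-out cross-validation,

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_neurons = 200, 50
labels = rng.integers(0, 2, n_trials)                 # 0 = addition trial, 1 = subtraction trial
rates = rng.poisson(5.0, (n_trials, n_neurons)).astype(float)
rates[labels == 0, :5] += 3.0                         # "addition neurons" fire more on additions
rates[labels == 1, 5:10] += 3.0                       # "subtraction neurons" fire more on subtractions

def decode(train_x, train_y, test_x):
    """Nearest-centroid decoder: pick the class whose mean activity pattern is closer."""
    c0 = train_x[train_y == 0].mean(axis=0)
    c1 = train_x[train_y == 1].mean(axis=0)
    return 0 if np.linalg.norm(test_x - c0) < np.linalg.norm(test_x - c1) else 1

# Leave-one-out cross-validation: train on all trials but one, test on the held-out trial.
correct = 0
for i in range(n_trials):
    mask = np.arange(n_trials) != i
    correct += decode(rates[mask], labels[mask], rates[i]) == labels[i]
accuracy = correct / n_trials
print(accuracy)  # well above the 0.5 chance level
```

Because a handful of simulated neurons prefer one operation over the other, the decoder identifies the operation on held-out trials far above chance, which is the logic behind the claim that the recorded cells encode the arithmetic rule.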

Prof. Andreas Nieder from the University of Tübingen supervised the study together with Prof. Mormann. “We know from experiments with monkeys that neurons specific to certain computational rules also exist in their brains,” he says. “In humans, however, there is hardly any data in this regard.” During their analysis, the two working groups came across an interesting phenomenon: One of the brain regions studied was the so-called parahippocampal cortex. There, too, the researchers found nerve cells that fired specifically during addition or subtraction. However, when summing up, different addition neurons became alternately active during one and the same arithmetic task. Figuratively speaking, it is as if the plus key on the calculator were constantly changing its location. It was the same with subtraction. Researchers also refer to this as “dynamic coding.”

“This study marks an important step towards a better understanding of one of our most important symbolic abilities, namely calculating with numbers,” stresses Mormann. The two teams from Bonn and Tübingen now want to investigate exactly what role the nerve cells found play in this.

Funding:

The study was funded by the German Research Foundation (DFG) and the Volkswagen Foundation.

Here’s a link to and a citation for the paper,

Neuronal codes for arithmetic rule processing in the human brain by Esther F. Kutter, Jan Boström, Christian E. Elger, Andreas Nieder, Florian Mormann. Current Biology, 2022. DOI: 10.1016/j.cub.2022.01.054. Published February 14, 2022.

This paper appears to be open access.

Neurons for the sounds of singing

This work comes from the Massachusetts Institute of Technology (MIT), according to a February 22, 2022 news item on ScienceDaily,

For the first time, MIT neuroscientists have identified a population of neurons in the human brain that lights up when we hear singing, but not other types of music.

Pretty nifty, eh? As is the news release headline with its nod to a classic Hollywood musical and song, from a February 22, 2022 MIT news release (also on EurekAlert),

Singing in the brain

These neurons, found in the auditory cortex, appear to respond to the specific combination of voice and music, but not to either regular speech or instrumental music. Exactly what they are doing is unknown and will require more work to uncover, the researchers say.

“The work provides evidence for relatively fine-grained segregation of function within the auditory cortex, in a way that aligns with an intuitive distinction within music,” says Sam Norman-Haignere, a former MIT postdoc who is now an assistant professor of neuroscience at the University of Rochester Medical Center.

The work builds on a 2015 study in which the same research team used functional magnetic resonance imaging (fMRI) to identify a population of neurons in the brain’s auditory cortex that responds specifically to music. In the new work, the researchers used recordings of electrical activity taken at the surface of the brain, which gave them much more precise information than fMRI.

“There’s one population of neurons that responds to singing, and then very nearby is another population of neurons that responds broadly to lots of music. At the scale of fMRI, they’re so close that you can’t disentangle them, but with intracranial recordings, we get additional resolution, and that’s what we believe allowed us to pick them apart,” says Norman-Haignere.

Norman-Haignere is the lead author of the study, which appears today in the journal Current Biology. Josh McDermott, an associate professor of brain and cognitive sciences, and Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience, both members of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds and Machines (CBMM), are the senior authors of the study.

Neural recordings

In their 2015 study, the researchers used fMRI to scan the brains of participants as they listened to a collection of 165 sounds, including different types of speech and music, as well as everyday sounds such as finger tapping or a dog barking. For that study, the researchers devised a novel method of analyzing the fMRI data, which allowed them to identify six neural populations with different response patterns, including the music-selective population and another population that responds selectively to speech.

In the new study, the researchers hoped to obtain higher-resolution data using a technique known as electrocorticography (ECoG), which allows electrical activity to be recorded by electrodes placed inside the skull. This offers a much more precise picture of electrical activity in the brain compared to fMRI, which measures blood flow in the brain as a proxy of neuron activity.

“With most of the methods in human cognitive neuroscience, you can’t see the neural representations,” Kanwisher says. “Most of the kind of data we can collect can tell us that here’s a piece of brain that does something, but that’s pretty limited. We want to know what’s represented in there.”

Electrocorticography cannot typically be performed in humans because it is an invasive procedure, but it is often used to monitor patients with epilepsy who are about to undergo surgery to treat their seizures. Patients are monitored over several days so that doctors can determine where their seizures are originating before operating. During that time, if patients agree, they can participate in studies that involve measuring their brain activity while performing certain tasks. For this study, the MIT team was able to gather data from 15 participants over several years.

For those participants, the researchers played the same set of 165 sounds that they used in the earlier fMRI study. The location of each patient’s electrodes was determined by their surgeons, so some did not pick up any responses to auditory input, but many did. Using a novel statistical analysis that they developed, the researchers were able to infer the types of neural populations that produced the data that were recorded by each electrode.

“When we applied this method to this data set, this neural response pattern popped out that only responded to singing,” Norman-Haignere says. “This was a finding we really didn’t expect, so it very much justifies the whole point of the approach, which is to reveal potentially novel things you might not think to look for.”

That song-specific population of neurons had very weak responses to either speech or instrumental music, and therefore is distinct from the music- and speech-selective populations identified in their 2015 study.
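Broadly speaking, the kind of statistical analysis used in this line of work models each electrode’s (or voxel’s) responses to the 165 sounds as a weighted mixture of a small number of shared response components. As a rough illustration only (this is not the authors’ algorithm, and all the numbers are made up), here is a minimal non-negative matrix factorization recovering hidden response profiles from synthetic mixed data,

```python
import numpy as np

rng = np.random.default_rng(1)
n_sounds, n_electrodes, n_components = 165, 30, 3

# Synthetic ground truth: 3 hidden response profiles (imagine "song", "speech",
# "music") mixed into each electrode with non-negative weights.
true_profiles = rng.gamma(2.0, 1.0, (n_sounds, n_components))
true_weights = rng.gamma(2.0, 1.0, (n_components, n_electrodes))
data = true_profiles @ true_weights          # sounds x electrodes response matrix

# Multiplicative-update NMF (Lee & Seung) to factor the data back into
# profiles (W) and per-electrode weights (H).
W = rng.random((n_sounds, n_components)) + 0.1
H = rng.random((n_components, n_electrodes)) + 0.1
for _ in range(1000):
    H *= (W.T @ data) / (W.T @ W @ H + 1e-9)
    W *= (data @ H.T) / (W @ H @ H.T + 1e-9)

err = np.linalg.norm(data - W @ H) / np.linalg.norm(data)
print(err)  # small relative reconstruction error
```

The point of such a decomposition is that a component with a distinctive profile (say, one that responds only to sung sounds) can “pop out” of the data even though no single electrode shows it cleanly.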

Music in the brain

In the second part of their study, the researchers devised a mathematical method to combine the data from the intracranial recordings with the fMRI data from their 2015 study. Because fMRI can cover a much larger portion of the brain, this allowed them to determine more precisely the locations of the neural populations that respond to singing.

“This way of combining ECoG and fMRI is a significant methodological advance,” McDermott says. “A lot of people have been doing ECoG over the past 10 or 15 years, but it’s always been limited by this issue of the sparsity of the recordings. Sam is really the first person who figured out how to combine the improved resolution of the electrode recordings with fMRI data to get better localization of the overall responses.”

The song-specific hotspot that they found is located at the top of the temporal lobe, near regions that are selective for language and music. That location suggests that the song-specific population may be responding to features such as the perceived pitch, or the interaction between words and perceived pitch, before sending information to other parts of the brain for further processing, the researchers say.

The researchers now hope to learn more about what aspects of singing drive the responses of these neurons. They are also working with MIT Professor Rebecca Saxe’s lab to study whether infants have music-selective areas, in hopes of learning more about when and how these brain regions develop.

Here’s a link to and a citation for the paper,

A neural population selective for song in human auditory cortex by Sam V. Norman-Haignere, Jenelle Feather, Dana Boebinger, Peter Brunner, Anthony Ritaccio, Josh H. McDermott, Gerwin Schalk, Nancy Kanwisher. Current Biology, 2022. DOI: 10.1016/j.cub.2022.01.069. Published February 22, 2022.

This paper appears to be open access.

I couldn’t resist,

Protein wires for nanoelectronics

A February 24, 2022 news item on phys.org describes research into using proteins as electrical conductors,

Proteins are among the most versatile and ubiquitous biomolecules on earth. Nature uses them for everything from building tissues to regulating metabolism to defending the body against disease.

Now, a new study shows that proteins have other, largely unexplored capabilities. Under the right conditions, they can act as tiny, current-carrying wires, useful for a range of human-designed nanoelectronics.

….

A February 25, 2022 Arizona State University (ASU) news release (also on EurekAlert but published February 24, 2022), which originated the news item, delves further into the intricacies of nanoelectronics (Note: Links have been removed),

In new research appearing in the journal ACS Nano, Stuart Lindsay and his colleagues show that certain proteins can act as efficient electrical conductors. In fact, these tiny protein wires may have better conductance properties than similar nanowires composed of DNA [deoxyribonucleic acid], which have already met with considerable success for a host of human applications. 

Professor Lindsay directs the Biodesign Center for Single-Molecule Biophysics. He is also professor with ASU’s Department of Physics and the School of Molecular Sciences.

Just as in the case of DNA, proteins offer many attractive properties for nanoscale electronics including stability, tunable conductance and vast information storage capacity. Although proteins had traditionally been regarded as poor conductors of electricity, all that recently changed when Lindsay and his colleagues demonstrated that a protein poised between a pair of electrodes could act as an efficient conductor of electrons.

The new research examines the phenomenon of electron transport through proteins in greater detail. The study results establish that over long distances, protein nanowires display better conductance properties than chemically-synthesized nanowires specifically designed to be conductors. In addition, proteins are self-organizing and allow for atomic-scale control of their constituent parts.

Synthetically designed protein nanowires could give rise to new ultra-tiny electronics, with potential applications for medical sensing and diagnostics, nanorobots to carry out search and destroy missions against diseases or in a new breed of ultra-tiny computer transistors. Lindsay is particularly interested in the potential of protein nanowires for use in new devices to carry out ultra-fast DNA and protein sequencing, an area in which he has already made significant strides.

In addition to their role in nanoelectronic devices, charge transport reactions are crucial in living systems for processes including respiration, metabolism and photosynthesis. Hence, research into transport properties through designed proteins may shed new light on how such processes operate within living organisms.

While proteins have many of the benefits of DNA for nanoelectronics in terms of electrical conductance and self-assembly, the expanded alphabet of 20 amino acids used to construct them offers an enhanced toolkit for nanoarchitects like Lindsay, when compared with just four nucleotides making up DNA.

Transit Authority

Though electron transport has been a focus of considerable research, the nature of the flow of electrons through proteins has remained something of a mystery. Broadly speaking, the process can occur through electron tunneling, a quantum effect occurring over very short distances or through the hopping of electrons along a peptide chain—in the case of proteins, a chain of amino acids.

One objective of the study was to determine which of these regimes seemed to be operating by making quantitative measurements of electrical conductance over different lengths of protein nanowire. The study also describes a mathematical model that can be used to calculate the molecular-electronic properties of proteins.

For the experiments, the researchers used protein segments in four nanometer increments, ranging from 4-20 nanometers in length. A gene was designed to produce these amino acid sequences from a DNA template, with the protein lengths then bonded together into longer molecules. A highly sensitive instrument known as a scanning tunneling microscope was used to make precise measurements of conductance as electron transport progressed through the protein nanowire.

The data show that conductance decreases over nanowire length in a manner consistent with hopping rather than tunneling behavior of the electrons. Specific aromatic amino acid residues (six tyrosines and one tryptophan in each corkscrew twist of the protein) help guide the electrons along their path from point to point like successive stations along a train route. “The electron transport is sort of like skipping stone across water—the stone hasn’t got time to sink on each skip,” Lindsay says.
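The tunneling-versus-hopping distinction comes down to how conductance falls off with length: tunneling decays exponentially (roughly G ∝ e^(−βL)), while hopping decays far more gently (roughly ohmically, as 1/L). Here is a small sketch with made-up parameters (not the paper's measurements) showing why the two regimes are easy to tell apart over the study's 4 to 20 nanometer range,

```python
import numpy as np

lengths = np.array([4.0, 8.0, 12.0, 16.0, 20.0])  # nm, matching the study's increments

# Illustrative decay laws; beta and the prefactors are assumptions, not data from the paper.
beta = 1.0                          # per nm, a typical tunneling decay-constant scale
g_tunnel = np.exp(-beta * lengths)  # exponential decay: tunneling regime
g_hop = 1.0 / lengths               # ohmic-like decay: hopping regime

# Fraction of the short-wire conductance that survives at 20 nm under each regime
ratio_tunnel = g_tunnel[-1] / g_tunnel[0]
ratio_hop = g_hop[-1] / g_hop[0]
print(ratio_tunnel, ratio_hop)  # hopping retains vastly more conductance at length
```

Over 20 nanometers an exponential tunneling decay would wipe out essentially all conductance, while a hopping wire retains a substantial fraction, which is why the gradual decay observed in the study points to hopping.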

Wire wonders

While the conductance values of the protein nanowires decreased over distance, they did so more gradually than with conventional molecular wires specifically designed to be efficient conductors.

When the protein nanowires exceeded six nanometers in length, their conductance outperformed molecular nanowires, opening the door to their use in many new applications. The fact that they can be subtly designed and altered with atomic scale control and self-assembled from a gene template permits fine-tuned manipulations that far exceed what can currently be achieved with conventional transistor design.

One exciting possibility is using such protein nanowires to connect other components in a new suite of nanomachines. For example, nanowires could be used to connect an enzyme known as a DNA polymerase to electrodes, resulting in a device that could potentially sequence an entire human genome at low cost in under an hour. A similar approach could allow the integration of proteosomes into nanoelectronic devices able to read amino acids for protein sequencing.

“We are beginning now to understand the electron transport in these proteins. Once you have quantitative calculations, not only do you have great molecular electronic components, but you have a recipe for designing them,” Lindsay says. “If you think of the SPICE program that electrical engineers use to design circuits, there’s a glimmer now that you could get this for protein electronics.”

Here’s a link to and a citation for the paper,

Electronic Transport in Molecular Wires of Precisely Controlled Length Built from Modular Proteins by Bintian Zhang, Eathen Ryan, Xu Wang, Weisi Song, and Stuart Lindsay. ACS Nano 2022, 16, 1, 1671–1680. DOI: 10.1021/acsnano.1c10830. Published January 14, 2022. Copyright © 2022 American Chemical Society.

This paper is behind a paywall.

Documentary “NNI Retrospective Video: Creating a National Initiative” celebrates the US National Nanotechnology Initiative (NNI) and a lipid nanoparticle question

I stumbled across an August 4, 2022 tvworldwide.com news release about a video celebrating the US National Nanotechnology Initiative’s (NNI) more than 20 years of operation (Note: A link has been removed),

TV Worldwide, since 1999, a pioneering web-based global TV network, announced that it was releasing a video trailer highlighting a previously released documentary on NNI over the past 20 years, entitled, ‘NNI Retrospective Video: Creating a National Initiative’.

The video and its trailer were produced in cooperation with the National Nanotechnology Initiative (NNI), the National Science Foundation and the University of North Carolina Greensboro.

Video Documentary Synopsis

Nanotechnology is a megatrend in science and technology at the beginning of the 21st century. The National Nanotechnology Initiative (NNI) has played a key role in advancing the field after it was announced by President Clinton in January 2000. Neil Lane was Presidential Science Advisor. Mike Roco proposed the initiative at the White House in March 1999 on behalf of the Interagency Working Group on Nanotechnology and was named the founding Chair of NSET to implement NNI beginning in Oct. 2000. NSF led the preparation of this initiative together with other agencies including NIH, DoD, DOE, NASA, and EPA. Jim Murday was named the first Director of NNCO to support NSET. The scientific and societal success of NNI has been recognized in the professional communities, National Academies, PCAST, and Congress. Nanoscale science, engineering and technology are strongly connected and collectively called Nanotechnology.

This video documentary was made after the 20th NNI grantees conference at NSF. It is focused on creating and implementing NNI, through video interviews. The interviews focused on three questions: (a) Motivation and how NNI started; (b) The process and reason for the success in creating NNI; (c) Outcomes of NNI after 20 years, and how the initial vision has been realized.

About the National Nanotechnology Initiative (NNI)

The National Nanotechnology Initiative (NNI) is a U.S. Government research and development (R&D) initiative. Over thirty Federal departments, independent agencies, and commissions work together toward the shared vision of a future in which the ability to understand and control matter at the nanoscale leads to ongoing revolutions in technology and industry that benefit society. The NNI enhances interagency coordination of nanotechnology R&D, supports a shared infrastructure, enables leveraging of resources while avoiding duplication, and establishes shared goals, priorities, and strategies that complement agency-specific missions and activities.

The NNI participating agencies work together to advance discovery and innovation across the nanotechnology R&D enterprise. The NNI portfolio encompasses efforts along the entire technology development pathway, from early-stage fundamental science through applications-driven activities. Nanoscience and nanotechnology are prevalent across the R&D landscape, with an ever-growing list of applications that includes nanomedicine, nanoelectronics, water treatment, precision agriculture, transportation, and energy generation and storage. The NNI brings together representatives from multiple agencies to leverage knowledge and resources and to collaborate with academia and the private sector, as appropriate, to promote technology transfer and facilitate commercialization. The breadth of NNI-supported infrastructure enables not only the nanotechnology community but also researchers from related disciplines.

In addition to R&D efforts, the NNI is helping to build the nanotechnology workforce of the future, with focused efforts from K–12 through postgraduate research training. The responsible development of nanotechnology has been an integral pillar of the NNI since its inception, and the initiative proactively considers potential implications and technology applications at the same time. Collectively, these activities ensure that the United States remains not only the place where nanoscience discoveries are made, but also where these discoveries are translated and manufactured into products to benefit society.

I’m embedding the trailer here and a lipid nanoparticle question follows (The origin story told in Vancouver [Canada] is that the work was started at the University of British Columbia by Pieter Cullis.),

I was curious about what involvement the US NNI had with the development of lipid nanoparticles (LNPs) and found a possible answer to that question on Wikipedia. The LNP Wikipedia entry certainly gives the bulk of the credit to Cullis but there was work done prior to his involvement (Note: Links have been removed),

A significant obstacle to using LNPs as a delivery vehicle for nucleic acids is that in nature, lipids and nucleic acids both carry a negative electric charge—meaning they do not easily mix with each other.[19] While working at Syntex in the mid-1980s,[20] Philip Felgner [emphasis mine] pioneered the use of artificially-created cationic lipids (positively-charged lipids) to bind lipids to nucleic acids in order to transfect the latter into cells.[21] However, by the late 1990s, it was known from in vitro experiments that this use of cationic lipids had undesired side effects on cell membranes.[22]

During the late 1990s and 2000s, Pieter Cullis of the University of British Columbia [emphasis mine] developed ionizable cationic lipids which are “positively charged at an acidic pH but neutral in the blood.”[8] Cullis also led the development of a technique involving careful adjustments to pH during the process of mixing ingredients in order to create LNPs which could safely pass through the cell membranes of living organisms.[19][23] As of 2021, the current understanding of LNPs formulated with such ionizable cationic lipids is that they enter cells through receptor-mediated endocytosis and end up inside endosomes.[8] The acidity inside the endosomes causes LNPs’ ionizable cationic lipids to acquire a positive charge, and this is thought to allow LNPs to escape from endosomes and release their RNA payloads.[8]

From 2005 into the early 2010s, LNPs were investigated as a drug delivery system for small interfering RNA (siRNA) drugs.[8] In 2009, Cullis co-founded a company called Acuitas Therapeutics to commercialize his LNP research [emphasis mine]; Acuitas worked on developing LNPs for Alnylam Pharmaceuticals’s siRNA drugs.[24] In 2018, the FDA approved Alnylam’s siRNA drug Onpattro (patisiran), the first drug to use LNPs as the drug delivery system.[3][8]

By that point in time, siRNA drug developers like Alnylam were already looking at other options for future drugs like chemical conjugate systems, but during the 2010s, the earlier research into using LNPs for siRNA became a foundation for new research into using LNPs for mRNA.[8] Lipids intended for short siRNA strands did not work well for much longer mRNA strands, which led to extensive research during the mid-2010s into the creation of novel ionizable cationic lipids appropriate for mRNA.[8] As of late 2020, several mRNA vaccines for SARS-CoV-2 use LNPs as their drug delivery system, including both the Moderna COVID-19 vaccine and the Pfizer–BioNTech COVID-19 vaccines.[3] Moderna uses its own proprietary ionizable cationic lipid called SM-102, while Pfizer and BioNTech licensed an ionizable cationic lipid called ALC-0315 from Acuitas.[8] [emphases mine]

You can find out more about Philip Felgner here on his University of California at Irvine (UCI) profile page.

I wish they had been a little more careful about some of the claims that Thomas Kalil made about lipid nanoparticles in both the trailer and video but, getting back to the trailer (approx. 3 mins.) and the full video (approx. 25 mins.), either one provides insight into a quite extraordinary effort.

Bravo to the US NNI!

Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (2 of 2): Meditations

Dear friend,

I thought it best to break this up a bit. There are a couple of ‘objects’ still to be discussed but this is mostly the commentary part of this letter to you. (Here’s a link for anyone who stumbled here but missed Part 1.)

Ethics, the natural world, social justice, eeek, and AI

Dorothy Woodend, in her March 10, 2022 review for The Tyee, suggests some ethical issues in her critique of the ‘bee/AI collaboration’ and she’s not the only one with concerns. UNESCO (United Nations Educational, Scientific and Cultural Organization) has produced global recommendations for ethical AI (see my March 18, 2022 posting). More recently, there’s “Racist and sexist robots have flawed AI,” a June 23, 2022 posting, where researchers prepared a conference presentation and paper about deeply flawed AI still being used in robots.

Ultimately, the focus is always on humans and Woodend has extended the ethical AI conversation to include insects and the natural world. In short, something less human-centric.

My friend, this reference to the de Young exhibit may seem off topic but I promise it isn’t, in more ways than one. The de Young Museum in San Francisco also held an AI and art show, “Uncanny Valley: Being Human in the Age of AI” (February 22, 2020 – June 27, 2021). From the exhibitions page,

In today’s AI-driven world, increasingly organized and shaped by algorithms that track, collect, and evaluate our data, the question of what it means to be human [emphasis mine] has shifted. Uncanny Valley is the first major exhibition to unpack this question through a lens of contemporary art and propose new ways of thinking about intelligence, nature, and artifice. [emphasis mine]

Courtesy: de Young Museum [downloaded from https://deyoung.famsf.org/exhibitions/uncanny-valley]

As you can see, it hinted (perhaps?) at an attempt to see beyond human-centric AI. (BTW, I featured this ‘Uncanny Valley’ show in my February 25, 2020 posting where I mentioned Stephanie Dinkins [featured below] and other artists.)

Social justice

While the VAG show doesn’t see much past humans and AI, it does touch on social justice. In particular there’s Pod 15 featuring the Algorithmic Justice League (AJL). The group “combine[s] art and research to illuminate the social implications and harms of AI” as per their website’s homepage.

In Pod 9, Stephanie Dinkins’ video work with a robot (Bina48), which was also part of the de Young Museum ‘Uncanny Valley’ show, addresses some of the same issues.

Still of Stephanie Dinkins, “Conversations with Bina48,” 2014–present. Courtesy of the artist [downloaded from https://deyoung.famsf.org/stephanie-dinkins-conversations-bina48-0]

From the de Young Museum’s April 23, 2020 article by Janna Keegan about Stephanie Dinkins’ “Conversations with Bina48” (Dinkins submitted the same work you see at the VAG show), Note: Links have been removed,

Transdisciplinary artist and educator Stephanie Dinkins is concerned with fostering AI literacy. The central thesis of her social practice is that AI, the internet, and other data-based technologies disproportionately impact people of color, LGBTQ+ people, women, and disabled and economically disadvantaged communities—groups rarely given a voice in tech’s creation. Dinkins strives to forge a more equitable techno-future by generating AI that includes the voices of multiple constituencies …

The artist’s ongoing Conversations with Bina48 takes the form of a series of interactions with the social robot Bina48 (Breakthrough Intelligence via Neural Architecture, 48 exaflops per second). The machine is the brainchild of Martine Rothblatt, an entrepreneur in the field of biopharmaceuticals who, with her wife, Bina, cofounded the Terasem Movement, an organization that seeks to extend human life through cybernetic means. In 2007 Martine commissioned Hanson Robotics to create a robot whose appearance and consciousness simulate Bina’s. The robot was released in 2010, and Dinkins began her work with it in 2014.

Part psychoanalytical discourse, part Turing test, Conversations with Bina48 also participates in a larger dialogue regarding bias and representation in technology. Although Bina Rothblatt is a Black woman, Bina48 was not programmed with an understanding of its Black female identity or with knowledge of Black history. Dinkins’s work situates this omission amid the larger tech industry’s lack of diversity, drawing attention to the problems that arise when a roughly homogenous population creates technologies deployed globally. When this occurs, writes art critic Tess Thackara, “the unconscious biases of white developers proliferate on the internet, mapping our social structures and behaviors onto code and repeating imbalances and injustices that exist in the real world.” One of the most appalling and public of these instances occurred when a Google Photos image-recognition algorithm mislabeled the faces of Black people as “gorillas.”

Eeek

As you go through the ‘imitation game’, you will find a pod with a screen showing your movements through the rooms in real time. The installation is called “Creepers” (2021-22). The student team from Vancouver’s Centre for Digital Media (CDM) describes their project this way, from the CDM’s AI-driven Installation Piece for the Vancouver Art Gallery webpage,

Project Description

Kaleidoscope [team name] is designing an installation piece that harnesses AI to collect and visualize exhibit visitor behaviours, and interactions with art, in an impactful and thought-provoking way.

There’s no warning that you’re being tracked, and you can see they’ve used facial recognition software to track your movements through the show. The pod’s signage claims the data is deleted once you’ve left.

‘Creepers’ is an interesting approach to the ethics of AI. The name suggests that even the student designers were aware it was problematic.

For the curious, there’s a description of the other VAG ‘imitation game’ installations provided by CDM students on the ‘Master of Digital Media Students Develop Revolutionary Installations for Vancouver Art Gallery AI Exhibition‘ webpage.

In recovery from an existential crisis (meditations)

There’s something greatly ambitious about “The Imitation Game: Visual Culture in the Age of Artificial Intelligence,” and walking up the VAG’s grand staircase affirms that ambition. Bravo to the two curators, Grenville and Entis, for an exhibition that presents a survey (or overview) of artificial intelligence and its use in and impact on creative visual culture.

I’ve already enthused over the history (specifically Turing, Lovelace, Ovid), admitted to being mesmerized by Scott Eaton’s sculpture/AI videos, and confessed to a fascination (and mild repulsion) regarding Oxman’s honeycombs.

It’s hard to remember all of the ‘objects’ as the curators have offered a jumble of work, almost all of them on screens. Already noted, there’s Norbert Wiener’s The Moth (1949) and there are also a number of other computer-based artworks from the 1960s and 1970s. Plus, you’ll find works utilizing a GAN (generative adversarial network), an AI agent that is explained in the exhibit.

It’s worth going more than once to the show as there is so much to experience.

Why did they do that?

Dear friend, I’ve already commented on the poor flow through the show. It’s hard to tell if the curators intended the experience to be disorienting, but it verges on chaos, especially when the exhibition is crowded.

I’ve seen Grenville’s shows before. In particular there was “MashUp: The Birth of Modern Culture, a massive survey documenting the emergence of a mode of creativity that materialized in the late 1800s and has grown to become the dominant model of cultural production in the 21st century” and there was “KRAZY! The Delirious World of Anime + Manga + Video Games + Art.” As you can see from the description, he pulls together disparate works and ideas into a show for you to ‘make sense’ of them.

One of the differences between those shows and “The Imitation Game” is that most of us have some familiarity, whether we like it or not, with modern art/culture and anime/manga/etc., and can try to ‘make sense’ of it.

By contrast, artificial intelligence (which even experts have difficulty defining) occupies an entirely different set of categories, all of them associated with science/technology. This makes for a different kind of show, so the curators cannot rely on the audience’s understanding of basics. It’s effectively an art/sci or art/tech show and, I believe, the first of its kind at the Vancouver Art Gallery. Unfortunately, the curators don’t seem to have changed their approach to accommodate that difference.

AI is also at the centre of a current panic over job loss, loss of personal agency, automated racism and sexism, etc. which makes the experience of viewing the show a little tense. In this context, their decision to commission and use ‘Creepers’ seems odd.

Where were Ai-Da, DALL-E 2 and the others?

Oh friend, I was hoping for a robot. Those Roomba paintbots didn’t do much for me. All they did was lie there on the floor.

To be blunt, I wanted some fun and perhaps a bit of wonder and maybe a little vitality. I wasn’t necessarily expecting Ai-Da, an artistic robot, but something three-dimensional and fun in this very flat, screen-oriented show would have been nice.

Ai-Da was at the Glastonbury Festival in the UK from June 23 to 26, 2022. Here’s Ai-Da and her portrait of Billie Eilish (one of the Glastonbury 2022 headliners). [downloaded from https://www.ai-darobot.com/exhibition]

Ai-Da was first featured here in a December 17, 2021 posting, where she performed poetry that she had written in honour of the 700th anniversary of poet Dante Alighieri’s death.

Named in honour of Ada Lovelace, Ai-Da visited the 2022 Venice Biennale as Leah Henrickson and Simone Natale describe in their May 12, 2022 article for Fast Company (Note: Links have been removed),

Ai-Da sits behind a desk, paintbrush in hand. She looks up at the person posing for her, and then back down as she dabs another blob of paint onto the canvas. A lifelike portrait is taking shape. If you didn’t know a robot produced it, this portrait could pass as the work of a human artist.

Ai-Da is touted as the “first robot to paint like an artist,” and an exhibition of her work, called Leaping into the Metaverse, opened at the Venice Biennale.

Ai-Da produces portraits of sitting subjects using a robotic hand attached to her lifelike feminine figure. She’s also able to talk, giving detailed answers to questions about her artistic process and attitudes toward technology. She even gave a TEDx talk about “The Intersection of Art and AI” in Oxford a few years ago. While the words she speaks are programmed, Ai-Da’s creators have also been experimenting with having her write and perform her own poetry.

She has her own website.

If not Ai-Da, what about DALL-E 2? Aaron Hertzmann’s June 20, 2022 commentary, “Give this AI a few words of description and it produces a stunning image – but is it art?” investigates for Salon (Note: Links have been removed),

DALL-E 2 is a new neural network [AI] algorithm that creates a picture from a short phrase or sentence that you provide. The program, which was announced by the artificial intelligence research laboratory OpenAI in April 2022, hasn’t been released to the public. But a small and growing number of people – myself included – have been given access to experiment with it.

As a researcher studying the nexus of technology and art, I was keen to see how well the program worked. After hours of experimentation, it’s clear that DALL-E – while not without shortcomings – is leaps and bounds ahead of existing image generation technology. It raises immediate questions about how these technologies will change how art is made and consumed. It also raises questions about what it means to be creative when DALL-E 2 seems to automate so much of the creative process itself.

A July 4, 2022 article “DALL-E, Make Me Another Picasso, Please” by Laura Lane for The New Yorker has a rebuttal to Ada Lovelace’s contention that creativity is uniquely human (Note: A link has been removed),

“There was this belief that creativity is this deeply special, only-human thing,” Sam Altman, OpenAI’s C.E.O., explained the other day. Maybe not so true anymore, he said. Altman, who wore a gray sweater and had tousled brown hair, was videoconferencing from the company’s headquarters, in San Francisco. DALL-E is still in a testing phase. So far, OpenAI has granted access to a select group of people—researchers, artists, developers—who have used it to produce a wide array of images: photorealistic animals, bizarre mashups, punny collages. Asked by a user to generate “a plate of various alien fruits from another planet photograph,” DALL-E returned something kind of like rambutans. “The rest of mona lisa” is, according to DALL-E, mostly just one big cliff. Altman described DALL-E as “an extension of your own creativity.”

There are other AI artists. In my August 16, 2019 posting, I had this,

AI artists first hit my radar in August 2018 when Christie’s Auction House advertised an art auction of a ‘painting’ by an algorithm (artificial intelligence). There’s more in my August 31, 2018 posting but, briefly, a French art collective, Obvious, submitted a painting, “Portrait of Edmond de Belamy,” created by an artificial intelligence agent, to be sold for an estimated $7,000 – $10,000. They weren’t even close. According to Ian Bogost’s March 6, 2019 article for The Atlantic, the painting sold for $432,500 in October 2018.

That posting also included AI artist, AICAN. Both artist-AI agents (Obvious and AICAN) are based on GANs (generative adversarial networks) for learning and eventual output. Both artist-AI agents work independently or with human collaborators on art works that are available for purchase.
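For the curious, my friend, the adversarial idea behind a GAN fits in a few lines. Here’s a toy one-dimensional sketch (all numbers invented, nothing like the image-scale systems Obvious or AICAN used): a ‘generator’ learns to mimic a bell curve while a ‘discriminator’ tries to tell its output from the real thing.

```python
import numpy as np

# Toy 1-D GAN: the generator g(z) = a*z + c tries to make samples that
# look like the "real" data N(3, 1); the discriminator is a logistic
# classifier d(x) = sigmoid(w*x + b). Each side nudges its parameters
# against the other -- the adversarial loop in miniature.

rng = np.random.default_rng(42)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

a, c = 1.0, 0.0          # generator parameters (starts centred at 0)
w, b = 0.1, 0.0          # discriminator parameters
lr, batch = 0.05, 64

start_gap = abs(c - 3.0)  # distance from the real data's mean

for _ in range(2000):
    real = rng.normal(3.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + c

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0
    s_real, s_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    grad_w = np.mean((s_real - 1.0) * real) + np.mean(s_fake * fake)
    grad_b = np.mean(s_real - 1.0) + np.mean(s_fake)
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator step: push d(fake) toward 1 (fool the discriminator)
    s_fake = sigmoid(w * fake + b)
    dloss_dfake = -(1.0 - s_fake) * w        # gradient of -log d(fake)
    a -= lr * np.mean(dloss_dfake * z)
    c -= lr * np.mean(dloss_dfake)

end_gap = abs(c - 3.0)
print(start_gap, "->", round(end_gap, 3))  # gap to the real mean shrinks
```

The real systems replace these two one-parameter players with deep networks and images, but the tug-of-war is the same.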

As might be expected, not everyone is excited about AI and visual art. Sonja Drimmer, Professor of Medieval Art, University of Massachusetts at Amherst, provides another perspective on AI, visual art and, her specialty, art history in her November 1, 2021 essay for The Conversation (Note: Links have been removed),

Over the past year alone, I’ve come across articles highlighting how artificial intelligence recovered a “secret” painting of a “lost lover” of Italian painter Modigliani, “brought to life” a “hidden Picasso nude”, “resurrected” Austrian painter Gustav Klimt’s destroyed works and “restored” portions of Rembrandt’s 1642 painting “The Night Watch.” The list goes on.

As an art historian, I’ve become increasingly concerned about the coverage and circulation of these projects.

They have not, in actuality, revealed one secret or solved a single mystery.

What they have done is generate feel-good stories about AI.

Take the reports about the Modigliani and Picasso paintings.

These were projects executed by the same company, Oxia Palus, which was founded not by art historians but by doctoral students in machine learning.

In both cases, Oxia Palus relied upon traditional X-rays, X-ray fluorescence and infrared imaging that had already been carried out and published years prior – work that had revealed preliminary paintings beneath the visible layer on the artists’ canvases.

The company edited these X-rays and reconstituted them as new works of art by applying a technique called “neural style transfer.” This is a sophisticated-sounding term for a program that breaks works of art down into extremely small units, extrapolates a style from them and then promises to recreate images of other content in that same style.
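The ‘style’ being extrapolated here is, in one common formulation (the Gram-matrix trick from the original neural style transfer work), just the correlations between a network’s feature channels. A minimal numpy sketch, with random arrays standing in for a real network’s feature maps:

```python
import numpy as np

def gram_matrix(features):
    """Correlations between feature channels -- the 'style' signature
    used in neural style transfer. features: (channels, height, width)."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)      # each row: one channel's map
    return flat @ flat.T / (c * h * w)     # channel-by-channel correlations

def style_loss(feats_a, feats_b):
    """Mean squared difference between the two Gram matrices."""
    ga, gb = gram_matrix(feats_a), gram_matrix(feats_b)
    return float(np.mean((ga - gb) ** 2))

# Toy stand-ins for network activations (real systems use CNN feature maps)
rng = np.random.default_rng(0)
content = rng.normal(size=(8, 16, 16))
style = rng.normal(size=(8, 16, 16))

print(gram_matrix(content).shape)    # (8, 8)
print(style_loss(content, content))  # 0.0 -- identical style
print(style_loss(content, style) > 0)
```

A style-transfer program then optimizes a new image so its Gram matrices match the style source while its raw features stay close to the content source.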

As you can ‘see’ my friend, the topic of AI and visual art is a juicy one. In fact, I have another example in my June 27, 2022 posting, which is titled, “Art appraised by algorithm.” So, Grenville’s and Entis’ decision to focus on AI and its impact on visual culture is quite timely.

Visual culture: seeing into the future

The VAG Imitation Game webpage lists these categories of visual culture as being represented in the show: “animation, architecture, art, fashion, graphic design, urban design and video games …” Movies and visual art, not mentioned in the write-up, are represented, while theatre and the other performing arts are neither mentioned nor represented. That’s not a surprise.

In addition to an area of science/technology that’s not well understood even by experts, the curators took on the truly amorphous (and overwhelming) topic of visual culture. Given that even writing this commentary has been a challenge, I imagine pulling the show together was quite the task.

Grenville often grounds his shows in a history of the subject and, this time, it seems especially striking. You’re in a building that is effectively a 19th century construct and in galleries that reflect a 20th century ‘white cube’ aesthetic, while looking for clues into the 21st century future of visual culture employing technology that has its roots in the 19th century and, to some extent, began to flower in the mid-20th century.

Chung’s collaboration is one of the only ‘optimistic’ notes about the future and, as noted earlier, it bears a resemblance to Wiener’s 1949 ‘Moth’.

Overall, it seems we are being cautioned about the future. For example, Oxman’s work seems bleak (bees with no flowers to pollinate and living in an eternal spring). Adding in ‘Creepers’ and surveillance along with issues of bias and social injustice reflects hesitation and concern about what we will see, who sees it, and how it will be represented visually.

Learning about robots, automatons, artificial intelligence, and more

I wish the Vancouver Art Gallery (and Vancouver’s other art galleries) would invest a little more in audience education. A couple of tours during the week, by someone who may or may not know what they’re talking about, do not suffice. The extra material about Stephanie Dinkins and her work (“Conversations with Bina48,” 2014–present) came from the de Young Museum’s website. In my July 26, 2021 commentary on North Vancouver’s Polygon Gallery 2021 show “Interior Infinite,” I found background information for artist Zanele Muholi on the Tate Modern’s website. There is nothing on the VAG website that helps you gain some perspective on the artists’ works.

It seems to me that if the VAG wants to be considered world class, it should conduct itself accordingly and beefing up its website with background information about their current shows would be a good place to start.

Robots, automata, and artificial intelligence

Prior to 1921, robots were known exclusively as automatons. These days, the word ‘automaton’ (plural ‘automata’) tends to describe the purely mechanical representations of humans made over 100 years ago, whereas a ‘robot’ can be either humanlike or purely a machine, e.g., a mechanical arm that performs the same function over and over. There’s a good February 24, 2017 essay on automatons by Miguel Barral for OpenMind BBVA*, which provides some insight into the matter,

The concept of robot is relatively recent. The idea was introduced in 1921 by the Czech writer Karel Capek in his work R.U.R to designate a machine that performs tasks in place of man. But their predecessors, the automatons (from the Greek automata, or “mechanical device that works by itself”), have been the object of desire and fascination since antiquity. Some of the greatest inventors in history, such as Leonardo Da Vinci, have contributed to our fascination with these fabulous creations:

The Al-Jazari automatons

The earliest examples of known automatons appeared in the Islamic world in the 12th and 13th centuries. In 1206, the Arab polymath Al-Jazari, whose creations were known for their sophistication, described some of his most notable automatons: an automatic wine dispenser, a soap and towels dispenser and an orchestra-automaton that operated by the force of water. This latter invention was meant to liven up parties and banquets with music while floating on a pond, lake or fountain.

As the water flowed, it started a rotating drum with pegs that, in turn, moved levers whose movement produced different sounds and movements. As the pegs responsible for the musical notes could be exchanged for different ones in order to interpret another melody, it is considered one of the first programmable machines in history.
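The drum’s claim to being programmable — swap the pegs, get a new melody, same machine — can be sketched as a toy sequencer (the peg layout and note names here are invented for illustration):

```python
# A toy model of Al-Jazari's peg drum: the rotating drum is a list of
# "rows", each row holding the pegs (notes) struck at that step of the
# rotation. Swapping the rows swaps the melody -- the machine is
# reprogrammed, not rebuilt.

def play(drum, revolutions=1):
    """Yield the notes triggered as the drum rotates."""
    for _ in range(revolutions):
        for pegs in drum:           # one step of the drum's rotation
            for note in pegs:       # each peg trips a lever -> a sound
                yield note

melody_a = [("C",), (), ("E",), ("G",)]   # () = a silent step, no peg
melody_b = [("G",), ("E",), (), ("C",)]   # different pegs, same machine

print(list(play(melody_a)))  # ['C', 'E', 'G']
print(list(play(melody_b)))  # ['G', 'E', 'C']
```

The separation between the fixed mechanism (`play`) and the interchangeable program (the peg rows) is exactly what earns the automaton its ‘programmable’ label.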

If you’re curious about automata, my friend, I found this Sept. 26, 2016 ABC news radio item about singer Roger Daltrey and his wife Heather’s auction of their collection of 19th century French automata (there’s an embedded video showcasing these extraordinary works of art). For more about automata, robots, and androids, there’s an excellent May 4, 2022 article by James Vincent, ‘A visit to the human factory; How to build the world’s most realistic robot‘ for The Verge; Vincent’s article is about Engineered Arts, the UK-based company that built Ai-Da.

AI is often used interchangeably with ‘robot’ but they aren’t the same. Not all robots have AI integrated into their processes. At its simplest, AI is an algorithm or set of algorithms, which may ‘live’ in a CPU and be effectively invisible, or ‘live’ in or make use of some kind of machine and/or humanlike body. As the experts have noted, artificial intelligence is a slippery concept.

*OpenMind is a non-profit project (see its About us page) run by Banco Bilbao Vizcaya Argentaria (BBVA), a Spanish multinational financial services company, to disseminate information on robotics and much more.

You can’t always get what you want

My friend,

I expect many of the show’s shortcomings (as perceived by me) are due to money and/or scheduling issues. For example, Ai-Da was at the Venice Biennale and if there was a choice between the VAG and Biennale, I know where I’d be.

Even with those caveats in mind, it is a bit surprising that there were no examples of wearable technology. For example, Toronto’s Tapestry Opera recently performed R.U.R. A Torrent of Light (based on Karel Čapek’s play R.U.R., ‘Rossumovi Univerzální Roboti’, the source of the word ‘robot’), from my May 24, 2022 posting,

I have more about tickets prices, dates, and location later in this post but first, here’s more about the opera and the people who’ve created it from the Tapestry Opera’s ‘R.U.R. A Torrent of Light’ performance webpage,

“This stunning new opera combines dance, beautiful multimedia design, a chamber orchestra including 100 instruments creating a unique electronica-classical sound, and wearable technology [emphasis mine] created with OCAD University’s Social Body Lab, to create an immersive and unforgettable science-fiction experience.”

And, from later in my posting,

“Despite current stereotypes, opera was historically a launchpad for all kinds of applied design technologies. [emphasis mine] Having the opportunity to collaborate with OCAD U faculty is an invigorating way to reconnect to that tradition and foster connections between art, music and design, [emphasis mine]” comments the production’s Director Michael Hidetoshi Mori, who is also Tapestry Opera’s Artistic Director. 

That last quote brings me back to my comment about theatre and the performing arts not being part of the show. Of course, the curators couldn’t do it all, but a website with my hoped-for background and additional information could have helped solve the problem.

The absence of the theatrical and performing arts in the VAG’s ‘Imitation Game’ is a bit surprising, as the Council of Canadian Academies (CCA), in their third assessment, “Competing in a Global Innovation Economy: The Current State of R&D in Canada” (released in 2018), noted this (from my April 12, 2018 posting),

Canada, relative to the world, specializes in subjects generally referred to as the humanities and social sciences (plus health and the environment), and does not specialize as much as others in areas traditionally referred to as the physical sciences and engineering. Specifically, Canada has comparatively high levels of research output in Psychology and Cognitive Sciences, Public Health and Health Services, Philosophy and Theology, Earth and Environmental Sciences, and Visual and Performing Arts. [emphasis mine] It accounts for more than 5% of world research in these fields. Conversely, Canada has lower research output than expected in Chemistry, Physics and Astronomy, Enabling and Strategic Technologies, Engineering, and Mathematics and Statistics. The comparatively low research output in core areas of the natural sciences and engineering is concerning, and could impair the flexibility of Canada’s research base, preventing research institutions and researchers from being able to pivot to tomorrow’s emerging research areas. [p. xix Print; p. 21 PDF]

US-centric

My friend,

I was a little surprised that the show was so centred on work from the US, given that Grenville has curated at least one show with significant input from artists based in Asia. Both Japan and Korea are very active with regard to artificial intelligence, and it’s hard to believe their artists haven’t kept pace. (I’m not as familiar with China and its AI efforts, other than in the field of facial recognition, but it’s hard to believe their artists aren’t experimenting.)

The Americans, of course, are very important developers in the field of AI, but they are not alone, and it would have been nice to have seen something from Asia and/or Africa and/or one of the other Americas. In fact, anything that takes us out of the same old, same old. (Luba Elliott wrote this (2019/2020/2021?) essay, “Artificial Intelligence Art from Africa and Black Communities Worldwide,” on Aya Data if you want to get a sense of some of the activity on the African continent. Elliott does seem to conflate Africa and Black communities; for some clarity you may want to check out the Wikipedia entry on Africanfuturism, which contrasts with this August 12, 2020 essay by Donald Maloba, “What is Afrofuturism? A Beginner’s Guide.” Maloba also conflates the two.)

As it turns out, Luba Elliott presented at the 2019 Montréal Digital Spring event, which brings me to Canada’s artificial intelligence and arts scene.

I promise I haven’t turned into a flag-waving zealot, my friend. It’s just odd there isn’t a bit more, given that machine learning was pioneered at the University of Toronto. Here’s more about that (from the Wikipedia entry for Geoffrey Hinton),

Geoffrey Everest Hinton CC FRS FRSC[11] (born 6 December 1947) is a British-Canadian cognitive psychologist and computer scientist, most noted for his work on artificial neural networks.

Hinton received the 2018 Turing Award, together with Yoshua Bengio [Canadian scientist] and Yann LeCun, for their work on deep learning.[24] They are sometimes referred to as the “Godfathers of AI” and “Godfathers of Deep Learning,”[25][26] and have continued to give public talks together.[27][28]

Some of Hinton’s work was started in the US but, since 1987, he has pursued his interests at the University of Toronto. He wasn’t proven right until 2012. Katrina Onstad’s February 29, 2018 article (Mr. Robot) for Toronto Life is a gripping read about Hinton and his work on neural networks. BTW, Yoshua Bengio (co-Godfather) is a Canadian scientist at the Université de Montréal and Yann LeCun (co-Godfather) is a French scientist at New York University.

Then there’s another contribution: our government was the first in the world to develop a national artificial intelligence strategy. Adding those developments to the CCA ‘State of Science’ report findings about the visual and performing arts, is there another word besides ‘odd’ to describe the lack of Canadian voices?

You’re going to point out the installation by Ben Bogart (a member of Simon Fraser University’s Metacreation Lab for Creative AI and an instructor at the Emily Carr University of Art + Design (ECU)), but it’s based on the iconic US sci-fi film 2001: A Space Odyssey. As for the other Canadian, Sougwen Chung, she left Canada pretty quickly to get her undergraduate degree in the US and has since moved to the UK. (You could describe hers as the quintessential success story, i.e., moving from Canada only to get noticed here after success elsewhere.)

Of course, there are the CDM student projects but the projects seem less like an exploration of visual culture than an exploration of technology and industry requirements, from the ‘Master of Digital Media Students Develop Revolutionary Installations for Vancouver Art Gallery AI Exhibition‘ webpage, Note: A link has been removed,

In 2019, Bruce Grenville, Senior Curator at Vancouver Art Gallery, approached [the] Centre for Digital Media to collaborate on several industry projects for the forthcoming exhibition. Four student teams tackled the project briefs over the course of the next two years and produced award-winning installations that are on display until October 23 [2022].

Basically, my friend, it would have been nice to see other voices or, at the least, an attempt at representing other voices and visual cultures informed by AI. As for Canadian contributions, maybe put something on the VAG website?

Playing well with others

It’s always a mystery to me why the Vancouver cultural scene seems to consist of a set of silos, or closely guarded kingdoms. Reaching out to the public library and other institutions such as Science World might have cost time but could have enhanced the show.

For example, one of the branches of the New York Public Library ran a programme called, “We are AI” in March 2022 (see my March 23, 2022 posting about the five-week course, which was run as a learning circle). The course materials are available for free (We are AI webpage) and I imagine that adding a ‘visual culture module’ wouldn’t be that difficult.

There is one (rare) example of some Vancouver cultural institutions getting together to offer an art/science programme: in 2017, the Morris and Helen Belkin Gallery (at the University of British Columbia; UBC) hosted an exhibition of Santiago Ramón y Cajal’s work (see my Sept. 11, 2017 posting about the gallery show). Alongside that show, the folks at Café Scientifique held an ancillary event at Science World featuring a panel of professionals from UBC’s Faculty of Medicine and Dept. of Psychology discussing Cajal’s work.

In fact, where were the science and technology communities for this show?

On a related note, the 2022 ACM SIGGRAPH conference (August 7 – 11, 2022) is being held in Vancouver. (ACM is the Association for Computing Machinery; SIGGRAPH is for Special Interest Group on Computer Graphics and Interactive Techniques.) SIGGRAPH has been holding conferences in Vancouver every few years since at least 2011.

At this year’s conference, they have at least two sessions that indicate interests similar to the VAG’s. First, there’s Immersive Visualization for Research, Science and Art, which includes AI and machine learning along with other related topics. There’s also Frontiers Talk: Art in the Age of AI: Can Computers Create Art?

This is both an international conference and an exhibition (of art) and the whole thing seems to have kicked off on July 25, 2022. If you’re interested, the programme can be found here and registration here.

Last time SIGGRAPH was here the organizers seemed interested in outreach and they offered some free events.

In the end

It was good to see the show. The curators brought together some exciting material. As is always the case, there were some missed opportunities and a few blind spots. But all is not lost.

On July 27, 2022, the VAG held a virtual event with an artist,

Gwenyth Chao to learn more about what happened to the honeybees and hives in Oxman’s Synthetic Apiary project. As a transdisciplinary artist herself, Chao will also discuss the relationship between art, science, technology and design. She will then guide participants to create a space (of any scale, from insect to human) inspired by patterns found in nature.

Hopefully there will be more events inspired by specific ‘objects’. Meanwhile, on August 12, 2022, the VAG is hosting,

… in partnership with the Canadian Music Centre BC, New Music at the Gallery is a live concert series hosted by the Vancouver Art Gallery that features an array of musicians and composers who draw on contemporary art themes.

Highlighting a selection of twentieth- and twenty-first-century music compositions, this second concert, inspired by the exhibition The Imitation Game: Visual Culture in the Age of Artificial Intelligence, will spotlight the Illiac Suite (1957), the first piece ever written using only a computer, and Kaija Saariaho’s Terra Memoria (2006), which is in large part dependent on a computer-generated musical process.

It would be lovely if they could include an Ada Lovelace Day event. This is an international celebration held on October 11, 2022.

Do go. Do enjoy, my friend.

Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (1 of 2): The Objects

To my imaginary AI friend

Dear friend,

I thought you might be amused by these Roomba-like* paintbots at the Vancouver Art Gallery’s (VAG) latest exhibition, “The Imitation Game: Visual Culture in the Age of Artificial Intelligence” (March 5, 2022 – October 23, 2022).

Sougwen Chung, Omnia per Omnia, 2018, video (excerpt), Courtesy of the Artist

*A Roomba is a robot vacuum cleaner produced and sold by iRobot.

As far as I know, this is the Vancouver Art Gallery’s first art/science or art/technology exhibit and it is an alternately fascinating, exciting, and frustrating take on artificial intelligence and its impact on the visual arts. Curated by Bruce Grenville, VAG Senior Curator, and Glenn Entis, Guest Curator, the show features 20 ‘objects’ designed to both introduce viewers to the ‘imitation game’ and to challenge them. From the VAG Imitation Game webpage,

The Imitation Game surveys the extraordinary uses (and abuses) of artificial intelligence (AI) in the production of modern and contemporary visual culture around the world. The exhibition follows a chronological narrative that first examines the development of artificial intelligence, from the 1950s to the present [emphasis mine], through a precise historical lens. Building on this foundation, it emphasizes the explosive growth of AI across disciplines, including animation, architecture, art, fashion, graphic design, urban design and video games, over the past decade. Revolving around the important roles of machine learning and computer vision in AI research and experimentation, The Imitation Game reveals the complex nature of this new tool and demonstrates its importance for cultural production.

And now …

As you’ve probably guessed, my friend, you’ll find a combination of background information and commentary on the show.

I’ve initially focused on two people (a scientist and a mathematician) who were seminal thinkers about machines, intelligence, creativity, and humanity. I’ve also provided some information about the curators, which hopefully gives you some insight into the show.

As for the show itself, you’ll find a few of the ‘objects’ highlighted, with one of them investigated at more length. The curators devoted some of the show to ethical and social justice issues; accordingly, the Vancouver Art Gallery hosted the University of British Columbia’s “Speculative Futures: Artificial Intelligence Symposium” on April 7, 2022,

Presented in conjunction with the exhibition The Imitation Game: Visual Culture in the Age of Artificial Intelligence, the Speculative Futures Symposium examines artificial intelligence and the specific uses of technology in its multifarious dimensions. Across four different panel conversations, leading thinkers of today will explore the ethical implications of technology and discuss how they are working to address these issues in cultural production.

So, you’ll find more on these topics here too.

And for anyone else reading this (not you, my friend, who is ‘strong’ AI and not similar to the ‘weak’ AI found in this show), there is a description of ‘weak’ and ‘strong’ AI on the avtsim.com/weak-ai-strong-ai webpage (Note: A link has been removed),

There are two types of AI: weak AI and strong AI.

Weak, sometimes called narrow, AI is less intelligent as it cannot work without human interaction and focuses on a more narrow, specific, or niched purpose. …

Strong AI on the other hand is in fact comparable to the fictitious AIs we see in media like the terminator. The theoretical Strong AI would be equivalent or greater to human intelligence.

….

My dear friend, I hope you will enjoy.

The Imitation Game and ‘mad, bad, and dangerous to know’

In some circles, it’s better known as ‘The Turing Test’; the Vancouver Art Gallery’s ‘Imitation Game’ hosts a copy of Alan Turing’s foundational paper on establishing whether artificial intelligence is possible (I thought this was pretty exciting).

Here’s more from The Turing Test essay by Graham Oppy and David Dowe for the Stanford Encyclopedia of Philosophy,

The phrase “The Turing Test” is most properly used to refer to a proposal made by Turing (1950) as a way of dealing with the question whether machines can think. According to Turing, the question whether machines can think is itself “too meaningless” to deserve discussion (442). However, if we consider the more precise—and somehow related—question whether a digital computer can do well in a certain kind of game that Turing describes (“The Imitation Game”), then—at least in Turing’s eyes—we do have a question that admits of precise discussion. Moreover, as we shall see, Turing himself thought that it would not be too long before we did have digital computers that could “do well” in the Imitation Game.

The phrase “The Turing Test” is sometimes used more generally to refer to some kinds of behavioural tests for the presence of mind, or thought, or intelligence in putatively minded entities. …
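My friend, for anyone who likes to see the setup concretely, the game Turing describes can be sketched as a toy program (a sketch only; the player classes, canned replies, and random interrogator below are my own illustrative inventions, not anything from Turing’s paper or the show):

```python
import random

# Toy sketch of Turing's imitation game: an interrogator exchanges text
# with two hidden respondents and must guess which one is the machine.

class HumanPlayer:
    def reply(self, question):
        return "I'd have to think about that."

class MachinePlayer:
    def reply(self, question):
        # A convincing machine imitates human conversation, hesitations and all.
        return "Hmm, I'd have to think about that."

def imitation_game(interrogator_guess, rounds=5):
    """Run one game; return True if the interrogator correctly unmasks the machine."""
    players = {"A": HumanPlayer(), "B": MachinePlayer()}  # "B" hides the machine
    transcript = []
    for i in range(rounds):
        for label, player in players.items():
            transcript.append((label, player.reply(f"Question {i}")))
    return interrogator_guess(transcript) == "B"

# An interrogator who can do no better than guessing is right about half the time;
# in Turing's terms, the machine "does well" when this rate stays near chance.
random.seed(0)
results = [imitation_game(lambda t: random.choice(["A", "B"])) for _ in range(1000)]
print(sum(results) / len(results))  # close to 0.5
```

The point of the sketch is Turing’s reframing: instead of the “too meaningless” question of whether machines think, we measure how often a questioner can tell the hidden machine from the hidden human.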

Next to the display holding Turing’s paper is another display with an excerpt in which Turing explains how he believed Ada Lovelace would have responded to the idea that machines could think, based on a copy of some of her writing (also on display). She proposed that creativity, not thinking, is what sets people apart from machines. (See the April 17, 2020 article “Thinking Machines? Has the Lovelace Test Been Passed?” on mindmatters.ai.)

It’s like a dialogue between two seminal thinkers who lived about 100 years apart: Lovelace (1815–1852) and Turing (1912–1954). Both have fascinating back stories (more about those later) and both played roles in how computers and artificial intelligence are viewed.

Adding some interest to this walk down memory lane is a third display, an illustration of the ‘Mechanical Turk’, a chess-playing machine that made the rounds in Europe from 1770 until it was destroyed in 1854. A hoax that fooled people for quite a while, it is a reminder that we’ve been interested in intelligent machines for centuries. (Friend, Turing, Lovelace and the Mechanical Turk are found in Pod 1.)

Back story: Turing and the apple

Turing is credited with being instrumental in breaking the German Enigma code during World War II and helping to end the war. I find it odd that he ended up at the University of Manchester in the post-war years; one would expect him to have been at Oxford or Cambridge. At any rate, he died in 1954 of cyanide poisoning, two years after he was arrested for being homosexual and convicted of gross indecency. Given the choice of incarceration or chemical castration, he chose the latter. There is, to this day, debate about whether or not it was suicide. Here’s how his death is described in his Wikipedia entry (Note: Links have been removed),

On 8 June 1954, at his house at 43 Adlington Road, Wilmslow,[150] Turing’s housekeeper found him dead. He had died the previous day at the age of 41. Cyanide poisoning was established as the cause of death.[151] When his body was discovered, an apple lay half-eaten beside his bed, and although the apple was not tested for cyanide,[152] it was speculated that this was the means by which Turing had consumed a fatal dose. An inquest determined that he had committed suicide. Andrew Hodges and another biographer, David Leavitt, have both speculated that Turing was re-enacting a scene from the Walt Disney film Snow White and the Seven Dwarfs (1937), his favourite fairy tale. Both men noted that (in Leavitt’s words) he took “an especially keen pleasure in the scene where the Wicked Queen immerses her apple in the poisonous brew”.[153] Turing’s remains were cremated at Woking Crematorium on 12 June 1954,[154] and his ashes were scattered in the gardens of the crematorium, just as his father’s had been.[155]

Philosopher Jack Copeland has questioned various aspects of the coroner’s historical verdict. He suggested an alternative explanation for the cause of Turing’s death: the accidental inhalation of cyanide fumes from an apparatus used to electroplate gold onto spoons. The potassium cyanide was used to dissolve the gold. Turing had such an apparatus set up in his tiny spare room. Copeland noted that the autopsy findings were more consistent with inhalation than with ingestion of the poison. Turing also habitually ate an apple before going to bed, and it was not unusual for the apple to be discarded half-eaten.[156] Furthermore, Turing had reportedly borne his legal setbacks and hormone treatment (which had been discontinued a year previously) “with good humour” and had shown no sign of despondency prior to his death. He even set down a list of tasks that he intended to complete upon returning to his office after the holiday weekend.[156] Turing’s mother believed that the ingestion was accidental, resulting from her son’s careless storage of laboratory chemicals.[157] Biographer Andrew Hodges theorised that Turing arranged the delivery of the equipment to deliberately allow his mother plausible deniability with regard to any suicide claims.[158]

The US Central Intelligence Agency (CIA) also has an entry for Alan Turing, dated April 10, 2015, titled “The Enigma of Alan Turing.”

Back story: Ada Byron Lovelace, the 2nd generation of ‘mad, bad, and dangerous to know’

A mathematician and genius in her own right, Ada Lovelace’s father George Gordon Byron, better known as the poet Lord Byron, was notoriously described as ‘mad, bad, and dangerous to know’.

Lovelace too could have been ‘mad, bad, …’ but she is described less memorably as “… manipulative and aggressive, a drug addict, a gambler and an adulteress, …” as mentioned in my October 13, 2015 posting. It marked the 200th anniversary of her birth, which was celebrated with a British Broadcasting Corporation (BBC) documentary and an exhibit at the Science Museum in London, UK.

She belongs in the Vancouver Art Gallery’s show along with Alan Turing due to her prediction that computers could be made to create music. She also published the first computer program. Her feat is astonishing when you know only one working model (1/7th of the proposed final size) of a computer was ever produced. (The machine invented by Charles Babbage was known as a difference engine. You can find out more about the Difference engine on Wikipedia and about Babbage’s proposed second invention, the Analytical engine.)

(Byron had almost nothing to do with his daughter although his reputation seems to have dogged her. You can find out more about Lord Byron here.)

AI and visual culture at the VAG: the curators

As mentioned earlier, the VAG’s “The Imitation Game: Visual Culture in the Age of Artificial Intelligence” show runs from March 5, 2022 – October 23, 2022. Twice now, I have been to this weirdly exciting and frustrating show.

Bruce Grenville, VAG Chief/Senior Curator, seems to specialize in pulling together diverse materials to illustrate ‘big’ topics. His profile for Emily Carr University of Art + Design (where Grenville teaches) mentions these shows,

… He has organized many thematic group exhibitions including, MashUp: The Birth of Modern Culture [emphasis mine], a massive survey documenting the emergence of a mode of creativity that materialized in the late 1800s and has grown to become the dominant model of cultural production in the 21st century; KRAZY! The Delirious World [emphasis mine] of Anime + Manga + Video Games + Art, a timely and important survey of modern and contemporary visual culture from around the world; Home and Away: Crossing Cultures on the Pacific Rim [emphasis mine] a look at the work of six artists from Vancouver, Beijing, Ho Chi Minh City, Seoul and Los Angeles, who share a history of emigration and diaspora. …

Glenn Entis, Guest Curator and founding faculty member of Vancouver’s Centre for Digital Media (CDM), is Grenville’s co-curator. From Entis’ CDM profile,

“… an Academy Award-winning animation pioneer and games industry veteran. The former CEO of Dreamworks Interactive, Glenn worked with Steven Spielberg and Jeffrey Katzenberg on a number of video games …,”

Steve Newton, in his March 4, 2022 preview, does a good job of describing the show, although I strongly disagree with the title of his article, which proclaims “The Vancouver Art Gallery takes a deep dive into artificial intelligence with The Imitation Game.” I think it’s more of a shallow dive, meant to cover more distance than depth,

… The exhibition kicks off with an interactive introduction inviting visitors to actively identify diverse areas of cultural production influenced by AI.

“That was actually one of the pieces that we produced in collaboration with the Centre for Digital Media,” Grenville notes, “so we worked with some graduate-student teams that had actually helped us to design that software. It was the beginning of COVID when we started to design this, so we actually wanted a no-touch interactive. So, really, the idea was to say, ‘Okay, this is the very entrance to the exhibition, and artificial intelligence, this is something I’ve heard about, but I’m not really sure how it’s utilized in ways. But maybe I know something about architecture; maybe I know something about video games; maybe I know something about the history of film.

“So you point to these 10 categories of visual culture [emphasis mine]–video games, architecture, fashion design, graphic design, industrial design, urban design–so you point to one of those, and you might point to ‘film’, and then when you point at it that opens up into five different examples of what’s in the show, so it could be 2001: A Space Odyssey, or Bladerunner, or World on a Wire.”

After the exhibition’s introduction—which Grenville equates to “opening the door to your curiosity” about artificial intelligence–visitors encounter one of its main categories, Objects of Wonder, which speaks to the history of AI and the critical advances the technology has made over the years.

“So there are 20 Objects of Wonder [emphasis mine],” Grenville says, “which go from 1949 to 2022, and they kind of plot out the history of artificial intelligence over that period of time, focusing on a specific object. Like [mathematician and philosopher] Norbert Wiener made this cybernetic creature, he called it a ‘Moth’, in 1949. So there’s a section that looks at this idea of kind of using animals–well, machine animals–and thinking about cybernetics, this idea of communication as feedback, early thinking around neuroscience and how neuroscience starts to imagine this idea of a thinking machine.

And there’s this from Newton’s March 4, 2022 preview,

“It’s interesting,” Grenville ponders, “artificial intelligence is virtually unregulated. [emphasis mine] You know, if you think about the regulatory bodies that govern TV or radio or all the types of telecommunications, there’s no equivalent for artificial intelligence, which really doesn’t make any sense. And so what happens is, sometimes with the best intentions [emphasis mine]—sometimes not with the best intentions—choices are made about how artificial intelligence develops. So one of the big ones is facial-recognition software [emphasis mine], and any body-detection software that’s being utilized.

In addition to being the best overview of the show I’ve seen so far, it’s the only one where you get a little insight into what the curators were thinking when they were developing it.

A deep dive into AI?

It was only while searching for a little information before the show that I realized I don’t have a definition for artificial intelligence! What is AI? Sadly, there are no definitions of AI in the exhibit.

It seems even experts don’t have a good definition. Take a look at this,

The definition of AI is fluid [emphasis mine] and reflects a constantly shifting landscape marked by technological advancements and growing areas of application. Indeed, it has frequently been observed that once AI becomes capable of solving a particular problem or accomplishing a certain task, it is often no longer considered to be “real” intelligence [emphasis mine] (Haenlein & Kaplan, 2019). A firm definition was not applied for this report [emphasis mine], given the variety of implementations described above. However, for the purposes of deliberation, the Panel chose to interpret AI as a collection of statistical and software techniques, as well as the associated data and the social context in which they evolve — this allows for a broader and more inclusive interpretation of AI technologies and forms of agency. The Panel uses the term AI interchangeably to describe various implementations of machine-assisted design and discovery, including those based on machine learning, deep learning, and reinforcement learning, except for specific examples where the choice of implementation is salient. [p. 6 print version; p. 34 PDF version]

The above is from the Leaps and Boundaries report released May 10, 2022 by the Council of Canadian Academies’ Expert Panel on Artificial Intelligence for Science and Engineering.

Sometimes a show will take you in an unexpected direction. I feel a lot better ‘not knowing’. Still, I wish the curators had acknowledged somewhere in the show that artificial intelligence is a slippery concept. Especially when you add in robots and automatons. (more about them later)

21st century technology in a 19th/20th century building

Void stairs inside the building. Completed in 1906, the building was later designated as a National Historic Site in 1980 [downloaded from https://en.wikipedia.org/wiki/Vancouver_Art_Gallery#cite_note-canen-7]

Just barely making it into the 20th century, the building where the Vancouver Art Gallery currently resides was for many years the provincial courthouse (1911 – 1978). In some ways, it’s a disconcerting setting for this show.

They’ve done their best to make the upstairs where the exhibit is displayed look like today’s galleries with their ‘white cube aesthetic’ and strong resemblance to the scientific laboratories seen in movies.

(For more about the dominance, since the 1930s, of the ‘white cube aesthetic’ in art galleries around the world, see my July 26, 2021 posting; scroll down about 50% of the way.)

It makes for an interesting tension, the contrast between the grand staircase, the cupola, and other architectural elements and the sterile, ‘laboratory’ environment of the modern art gallery.

20 Objects of Wonder and the flow of the show

It was flummoxing. Where are the 20 objects? Why does it feel like a maze in a laboratory? Loved the bees, but why? Eeeek Creepers! What is visual culture anyway? Where am I?

The objects of the show

It turns out that the curators have a more refined concept of ‘object’ than I do. There weren’t 20 material objects; there were 20 numbered ‘pods’, each with perhaps a screen or a couple of screens or a screen and a material object or two illustrating the pod’s topic.

Looking up a definition for the word (via a June 9, 2022 duckduckgo.com search) yielded this (the second sense seems à propos),

object (ŏb′jĭkt, -jĕkt″)

noun

1. Something perceptible by one or more of the senses, especially by vision or touch; a material thing.

2. A focus of attention, feeling, thought, or action.

3. A limiting factor that must be considered.

The American Heritage® Dictionary of the English Language, 5th Edition.

Each pod = a focus of attention.

The show’s flow is a maze. Am I a rat?

The pods are defined by a number and by temporary walls. So if you look up, you’ll see a number and a space partly enclosed by a temporary wall or two.

It’s a very choppy experience. For example, one minute you can be in pod 1 and, when you turn the corner, you’re in pod 4 or 5 or ? There are pods I’ve not seen, despite my two visits, because I kept losing my way. This led to an existential crisis on my second visit. “Had I missed the greater meaning of this show? Was there some sort of logic to how it was organized? Was there meaning to my life? Was I a rat being nudged around in a maze?” I didn’t know.

Thankfully, I have since recovered. But, I will return to my existential crisis later, with a special mention for “Creepers.”

The fascinating

My friend, you know I appreciated the history: in addition to Alan Turing, Ada Lovelace and the Mechanical Turk at the beginning of the show, they included a reference to Ovid (or Pūblius Ovidius Nāsō), a Roman poet who lived from 43 BCE to 17/18 CE, in one of the double-digit pods (17? or 10? or …) featuring a robot on screen. As to why Ovid might be included, this excerpt from a February 12, 2018 posting on the cosmolocal.org website provides a clue (Note: Links have been removed),

The University of King’s College [Halifax, Nova Scotia] presents Automatons! From Ovid to AI, a nine-lecture series examining the history, issues and relationships between humans, robots, and artificial intelligence [emphasis mine]. The series runs from January 10 to April 4 [2018], and features leading scholars, performers and critics from Canada, the US and Britain.

“Drawing from theatre, literature, art, science and philosophy, our 2018 King’s College Lecture Series features leading international authorities exploring our intimate relationships with machines,” says Dr. Gordon McOuat, professor in the King’s History of Science and Technology (HOST) and Contemporary Studies Programs.

“From the myths of Ovid [emphasis mine] and the automatons [emphasis mine] of the early modern period to the rise of robots, cyborgs, AI and artificial living things in the modern world, the 2018 King’s College Lecture Series examines the historical, cultural, scientific and philosophical place of automatons in our lives—and our future,” adds McOuat.

I loved the way the curators managed to integrate the historical roots of artificial intelligence and, by extension, the world of automatons, robots, cyborgs, and androids. Yes, starting the show with Alan Turing and Ada Lovelace could be expected but Norbert Wiener’s Moth (1949) acts as a sort of preview for Sougwen Chung’s “Omnia per Omnia, 2018” (GIF seen at the beginning of this post). Take a look for yourself (from the cyberneticzoo.com September 19, 2009 posting by cyberne1). Do you see the similarity or am I the only one?

[sourced from Google images, Source: life; downloaded from https://cyberneticzoo.com/cyberneticanimals/1949-wieners-moth-wiener-wiesner-singleton/]

Sculpture

This is the first time I’ve come across an AI/sculpture project. The VAG show features Scott Eaton’s sculptures on screens in a room devoted to his work.

Scott Eaton: Entangled II, 2019 4k video (still) Courtesy of the Artist [downloaded from https://www.vanartgallery.bc.ca/exhibitions/the-imitation-game]

This looks like an image of a piece of ginger root, and it’s fascinating to watch the process as the AI agent ‘evolves’ Eaton’s drawings into onscreen sculptures. It would have enhanced the experience if at least one of Eaton’s ‘evolved’ and physically realized sculptures had been present in the room, but perhaps there were financial and/or logistical reasons for the absence.

Both Chung and Eaton are collaborating with an AI agent. In Chung’s case, the AI is integrated into the paintbots with which she interacts and paints alongside; in Eaton’s case, it’s via a computer screen. In both cases, the work is mildly hypnotizing in a way that reminds me of lava lamps.

One last note about Chung and her work. She was one of the artists invited to present new work at an invite-only April 22, 2022 Embodied Futures workshop at the “What will life become?” event held by the Berggruen Institute and the University of Southern California (USC),

Embodied Futures invites participants to imagine novel forms of life, mind, and being through artistic and intellectual provocations on April 22 [2022].

Beginning at 1 p.m., together we will experience the launch of five artworks commissioned by the Berggruen Institute. We asked these artists: How does your work inflect how we think about “the human” in relation to alternative “embodiments” such as machines, AIs, plants, animals, the planet, and possible alien life forms in the cosmos? [emphases mine]  Later in the afternoon, we will take provocations generated by the morning’s panels and the art premieres in small breakout groups that will sketch futures worlds, and lively entities that might dwell there, in 2049.

This leads to (and my friend, while I too am taking a shallow dive, for this bit I’m going a little deeper):

Bees and architecture

Neri Oxman’s contribution (Golden Bee Cube, Synthetic Apiary II [2020]) is an exhibit featuring three honeycomb structures and a video of the bees in her synthetic apiary.

Neri Oxman and the MIT Mediated Matter Group, Golden Bee Cube, Synthetic Apiary II, 2020, beeswax, acrylic, gold particles, gold powder Courtesy of Neri Oxman and the MIT Mediated Matter Group

Neri Oxman (then a faculty member of the Mediated Matter Group at the Massachusetts Institute of Technology) described the basis for the first and all subsequent iterations of her synthetic apiary, as reported in Patrick Lynch’s October 5, 2016 article for ‘ArchDaily: Broadcasting Architecture Worldwide’ (Note: Links have been removed),

Designer and architect Neri Oxman and the Mediated Matter group have announced their latest design project: the Synthetic Apiary. Aimed at combating the massive bee colony losses that have occurred in recent years, the Synthetic Apiary explores the possibility of constructing controlled, indoor environments that would allow honeybee populations to thrive year-round.

“It is time that the inclusion of apiaries—natural or synthetic—for this “keystone species” be considered a basic requirement of any sustainability program,” says Oxman.

In developing the Synthetic Apiary, Mediated Matter studied the habits and needs of honeybees, determining the precise amounts of light, humidity and temperature required to simulate a perpetual spring environment. [emphasis mine] They then engineered an undisturbed space where bees are provided with synthetic pollen and sugared water and could be evaluated regularly for health.

In the initial experiment, the honeybees’ natural cycle proved to adapt to the new environment, as the Queen was able to successfully lay eggs in the apiary. The bees showed the ability to function normally in the environment, suggesting that natural cultivation in artificial spaces may be possible across scales, “from organism- to building-scale.”

“At the core of this project is the creation of an entirely synthetic environment enabling controlled, large-scale investigations of hives,” explain the designers.

Mediated Matter chose to research into honeybees not just because of their recent loss of habitat, but also because of their ability to work together to create their own architecture, [emphasis mine] a topic the group has explored in their ongoing research on biologically augmented digital fabrication, including employing silkworms to create objects and environments at product, architectural, and possibly urban, scales.

“The Synthetic Apiary bridges the organism- and building-scale by exploring a “keystone species”: bees. Many insect communities present collective behavior known as “swarming,” prioritizing group over individual survival, while constantly working to achieve common goals. Often, groups of these eusocial organisms leverage collaborative behavior for relatively large-scale construction. For example, ants create extremely complex networks by tunneling, wasps generate intricate paper nests with materials sourced from local areas, and bees deposit wax to build intricate hive structures.”

This January 19, 2022 article by Crown Honey for its eponymous blog updates Oxman’s work (Note 1: All emphases are mine; Note 2: A link has been removed),

Synthetic Apiary II investigates co-fabrication between humans and honey bees through the use of designed environments in which Apis mellifera colonies construct comb. These designed environments serve as a means by which to convey information to the colony. The comb that the bees construct within these environments comprises their response to the input information, enabling a form of communication through which we can begin to understand the hive’s collective actions from their perspective.

Some environments are embedded with chemical cues created through a novel pheromone 3D-printing process, while others generate magnetic fields of varying strength and direction. Others still contain geometries of varying complexity or designs that alter their form over time.

When offered wax augmented with synthetic biomarkers, bees appear to readily incorporate it into their construction process, likely due to the high energy cost of producing fresh wax. This suggests that comb construction is a responsive and dynamic process involving complex adaptations to perturbations from environmental stimuli, not merely a set of predefined behaviors building toward specific constructed forms. Each environment therefore acts as a signal that can be sent to the colony to initiate a process of co-fabrication.

Characterization of constructed comb morphology generally involves visual observation and physical measurements of structural features—methods which are limited in scale of analysis and blind to internal architecture. In contrast, the wax structures built by the colonies in Synthetic Apiary II are analyzed through high-throughput X-ray computed tomography (CT) scans that enable a more holistic digital reconstruction of the hive’s structure.

Geometric analysis of these forms provides information about the hive’s design process, preferences, and limitations when tied to the inputs, and thereby yields insights into the invisible mediations between bees and their environment.

Developing computational tools to learn from bees can facilitate the very beginnings of a dialogue with them. Refined by evolution over hundreds of thousands of years, their comb-building behaviors and social organizations may reveal new forms and methods of formation that can be applied across our human endeavors in architecture, design, engineering, and culture.

Further, with a basic understanding and language established, methods of co-fabrication together with bees may be developed, enabling the use of new biocompatible materials and the creation of more efficient structural geometries that modern technology alone cannot achieve.

In this way, we also move our built environment toward a more synergistic embodiment, able to be more seamlessly integrated into natural environments through material and form, even providing habitats of benefit to both humans and nonhumans. It is essential to our mutual survival for us to not only protect but moreover to empower these critical pollinators – whose intrinsic behaviors and ecosystems we have altered through our industrial processes and practices of human-centric design – to thrive without human intervention once again.

In order to design our way out of the environmental crisis that we ourselves created, we must first learn to speak nature’s language. …

The three (natural, gold nanoparticle, and silver nanoparticle) honeycombs in the exhibit are among the few physical objects in the show (the others being the historical documents and the paintbots with their canvasses), and their presence is almost a relief after the parade of screens. It’s the accompanying video that’s eerie. Everything is in white, as befits a science laboratory, in this synthetic apiary where bees are fed sugar water and fooled into a spring that is eternal.

Courtesy: Massachusetts Institute of Technology Copyright: Mediated Matter [downloaded from https://www.media.mit.edu/projects/synthetic-apiary/overview/]

(You may want to check out Lynch’s October 5, 2016 article or Crown Honey’s January 19, 2022 article as both have embedded images and the Lynch article includes a Synthetic Apiary video. The image above is a still from the video.)

As I asked a friend, where are the flowers? Ron Miksha, a bee ecologist working at the University of Calgary, details some of the problems with Oxman’s Synthetic Apiary this way in his October 7, 2016 posting on his Bad Beekeeping Blog,

In a practical sense, the synthetic apiary fails on many fronts: Bees will survive a few months on concoctions of sugar syrup and substitute pollen, but they need a natural variety of amino acids and minerals to actually thrive. They need propolis and floral pollen. They need a ceiling 100 metres high and a 2-kilometre hallway if drone and queen will mate, or they’ll die after the old queen dies. They need an artificial sun that travels across the sky, otherwise, the bees will be attracted to artificial lights and won’t return to their hive. They need flowery meadows, fresh water, open skies. [emphasis mine] They need a better holodeck.

Dorothy Woodend’s March 10, 2022 review of the VAG show for The Tyee poses other issues with the bees and the honeycombs,

When AI messes about with other species, there is something even more unsettling about the process. American-Israeli artist Neri Oxman’s Golden Bee Cube, Synthetic Apiary II, 2020 uses real bees who are proffered silver and gold [nanoparticles] to create their comb structures. While the resulting hives are indeed beautiful, rendered in shades of burnished metal, there is a quality of unease imbued in them. Is the piece akin to apiary torture chambers? I wonder how the bees feel about this collaboration and whether they’d like to renegotiate the deal.

There’s no question the honeycombs are fascinating and disturbing, but I don’t understand how artificial intelligence was a key factor in either version of Oxman’s synthetic apiary. In the 2022 article by Crown Honey, there’s this: “Developing computational tools to learn from bees can facilitate the very beginnings of a dialogue with them [honeybees].” It’s probable that the computational tools being referenced include AI, and the Crown Honey article seems to suggest those tools are being used to analyze the bees’ behaviour after the fact.

Yes, I can imagine a future where ‘strong’ AI (such as you, my friend) is in ‘dialogue’ with the bees and making suggestions and running the experiments but it’s not clear that this is the case currently. The Oxman exhibit contribution would seem to be about the future and its possibilities whereas many of the other ‘objects’ concern the past and/or the present.

Friend, let’s take a break, shall we? Part 2 is coming up.

Quantum Mechanics & Gravity conference (August 15 – 19, 2022) launches Vancouver (Canada)-based Quantum Gravity Institute and more

I received (via email) a July 21, 2022 news release about the launch of a quantum science initiative in Vancouver (BTW, I have more about the Canadian quantum scene later in this post),

World’s top physicists unite to tackle one of Science’s greatest
mysteries

Vancouver-based Quantum Gravity Society leads international quest to
discover Theory of Quantum Gravity

Vancouver, B.C. (July 21, 2022): More than two dozen of the world’s
top physicists, including three Nobel Prize winners, will gather in
Vancouver this August for a Quantum Gravity Conference that will host
the launch of a Vancouver-based Quantum Gravity Institute (QGI) and a
new global research collaboration that could significantly advance our
understanding of physics and gravity and profoundly change the world as
we know it.

For roughly 100 years, the world’s understanding of physics has been
based on Albert Einstein’s General Theory of Relativity (GR), which
explored the theory of space, time and gravity, and quantum mechanics
(QM), which focuses on the behaviour of matter and light on the atomic
and subatomic scale. GR has given us a deep understanding of the cosmos,
leading to space travel and technology like atomic clocks, which govern
global GPS systems. QM is responsible for most of the equipment that
runs our world today, including the electronics, lasers, computers, cell
phones, plastics, and other technologies that support modern
transportation, communications, medicine, agriculture, energy systems
and more.

While each theory has led to countless scientific breakthroughs, in many
cases, they are incompatible and seemingly contradictory. Discovering a
unifying connection between these two fundamental theories, the elusive
Theory of Quantum Gravity, could provide the world with a deeper
understanding of time, gravity and matter and how to potentially control
them. It could also lead to new technologies that would affect most
aspects of daily life, including how we communicate, grow food, deliver
health care, transport people and goods, and produce energy.

“Discovering the Theory of Quantum Gravity could lead to the
possibility of time travel, new quantum devices, or even massive new
energy resources that produce clean energy and help us address climate
change,” said Philip Stamp, Professor, Department of Physics and
Astronomy, University of British Columbia, and Visiting Associate in
Theoretical Astrophysics at Caltech [California Institute of
Technology]. “The potential long-term ramifications of this discovery
are so incredible that life on earth 100 years from now could look as
miraculous to us now as today’s technology would have seemed to people
living 100 years ago.”

The new Quantum Gravity Institute and the conference were founded by the
Quantum Gravity Society, which was created in 2022 by a group of
Canadian technology, business and community leaders, and leading
physicists. Among its goals are to advance the science of physics and
facilitate research on the Theory of Quantum Gravity through initiatives
such as the conference and assembling the world’s leading archive of
scientific papers and lectures associated with the attempts to reconcile
these two theories over the past century.

Attending the Quantum Gravity Conference in Vancouver (August 15-19 [2022])
will be two dozen of the world’s top physicists, including Nobel
Laureates Kip Thorne, Jim Peebles and Sir Roger Penrose, as well as
physicists Baron Martin Rees, Markus Aspelmeyer, Viatcheslav Mukhanov
and Paul Steinhardt. On Wednesday, August 17, the conference will be
open to the public, providing them with a once-in-a-lifetime opportunity
to attend keynote addresses from the world’s pre-eminent physicists.
… A noon-hour discussion on the importance of the
research will be delivered by Kip Thorne, the former Feynman Professor
of physics at Caltech. Thorne is well known for his popular books, and
for developing the original idea for the 2014 film “Interstellar.” He
was also crucial to the development of the book “Contact” by Carl Sagan,
which was also made into a motion picture.

“We look forward to welcoming many of the world’s brightest minds to
Vancouver for our first Quantum Gravity Conference,” said Frank
Giustra, CEO Fiore Group and Co-Founder, Quantum Gravity Society. “One
of the goals of our Society will be to establish Vancouver as a
supportive home base for research and facilitate the scientific
collaboration that will be required to unlock this mystery that has
eluded some of the world’s most brilliant physicists for so long.”

“The format is key,” explains Terry Hui, UC Berkeley Physics alumnus
and Co-Founder, Quantum Gravity Society [and CEO of Concord Pacific].
“Like the Solvay Conference nearly 100 years ago, the Quantum Gravity
Conference will bring top scientists together in salon-style gatherings. The
relaxed evening format following the conference will reduce barriers and
allow these great minds to freely exchange ideas. I hope this will help accelerate
the solution of this hundred-year bottleneck between theories relatively
soon.”

“As amazing as our journey of scientific discovery has been over the
past century, we still have so much to learn about how the universe
works on a macro, atomic and subatomic level,” added Paul Lee,
Managing Partner, Vanedge Capital, and Co-Founder, Quantum Gravity
Society. “New experiments and observations capable of advancing work
on this scientific challenge are becoming increasingly possible in
today’s physics labs and using new astronomical tools. The Quantum
Gravity Society looks forward to leveraging that growing technical
capacity with joint theory and experimental work that harnesses the
collective expertise of the world’s great physicists.”

About Quantum Gravity Society

Quantum Gravity Society was founded in Vancouver, Canada in 2020 by a
group of Canadian business, technology and community leaders, and
leading international physicists. The Society’s founding members
include Frank Giustra (Fiore Group), Terry Hui (Concord Pacific), Paul
Lee and Moe Kermani (Vanedge Capital) and Markus Frind (Frind Estate
Winery), along with renowned physicists Abhay Ashtekar, Sir Roger
Penrose, Philip Stamp, Bill Unruh and Birgitta Whaley. For more
information, visit Quantum Gravity Society.

About the Quantum Gravity Conference (Vancouver 2022)

The inaugural Quantum Gravity Conference (August 15-19 [2022]) is presented by
Quantum Gravity Society, Fiore Group, Vanedge Capital, Concord Pacific,
The Westin Bayshore, Vancouver and Frind Estate Winery. For conference
information, visit conference.quantumgravityinstitute.ca. To
register to attend the conference, visit Eventbrite.com.

The front page on the Quantum Gravity Society website is identical to the front page for the Quantum Mechanics & Gravity: Marrying Theory & Experiment conference website. It’s probable that will change with time.

This seems to be an in-person event only.

The site for the conference is in an exceptionally pretty location in Coal Harbour and it’s close to Stanley Park (a major tourist attraction),

The Westin Bayshore, Vancouver
1601 Bayshore Drive
Vancouver, BC V6G 2V4
View map

Assuming that most of my readers will be interested in the ‘public’ day, here’s more from the Wednesday, August 17, 2022 registration page on Eventbrite,

Tickets:

  • Corporate Table of 8 all day access – includes VIP Luncheon: $1,100
  • Ticket per person all day access – includes VIP Luncheon: $129
  • Ticket per person all day access (no VIP luncheon): $59
  • Student / Academia Ticket – all day access (no VIP luncheon): $30

Date:

Wednesday, August 17, 2022 @ 9:00 a.m. – 5:15 p.m. (PT)

Schedule:

  • Registration Opens: 8:00 a.m.
  • Morning Program: 9:00 a.m. – 12:30 p.m.
  • VIP Lunch: 12:30 p.m. – 2:30 p.m.
  • Afternoon Program: 2:30 p.m. – 4:20 p.m.
  • Public Discussion / Debate: 4:20 p.m. – 5:15 p.m.

Program:

9:00 a.m. Session 1: Beginning of the Universe

  • Viatcheslav Mukhanov – Theoretical Physicist and Cosmologist, University of Munich
  • Paul Steinhardt – Theoretical Physicist, Princeton University

Session 2: History of the Universe

  • Jim Peebles, 2019 Nobel Laureate, Princeton University
  • Baron Martin Rees – Cosmologist and Astrophysicist, University of Cambridge
  • Sir Roger Penrose, 2020 Nobel Laureate, University of Oxford (via Zoom)

12:30 p.m. VIP Lunch Session: Quantum Gravity — Why Should We Care?

  • Kip Thorne – 2017 Nobel Laureate, Executive Producer of blockbuster film “Interstellar”

2:30 p.m. Session 3: What do Experiments Say?

  • Markus Aspelmeyer – Experimental Physicist, Quantum Optics and Optomechanics Leader, University of Vienna
  • Sir Roger Penrose – 2020 Nobel Laureate (via Zoom)

Session 4: Time Travel

  • Kip Thorne – 2017 Nobel Laureate, Executive Producer of blockbuster film “Interstellar”

Event Partners

  • Quantum Gravity Society
  • Westin Bayshore
  • Fiore Group
  • Concord Pacific
  • VanEdge Capital
  • Frind Estate Winery

Marketing Partners

  • BC Business Council
  • Greater Vancouver Board of Trade

Please note that Sir Roger Penrose will be present via Zoom but all the others will be there in the room with you.

Given that Kip Thorne won his 2017 Nobel Prize in Physics (with Rainer Weiss and Barry Barish) for work on gravitational waves, it’s surprising there’s no mention of this in the publicity for a conference on quantum gravity. Finding gravitational waves in 2016 was a very big deal (see Josh Fischman’s and Steve Mirsky’s February 11, 2016 interview with Kip Thorne for Scientific American).

Some thoughts on this conference and the Canadian quantum scene

This conference has a fascinating collection of players. Even I recognized some of the names, e.g., Penrose, Rees, Thorne.

The academics were to be expected, and every presenter is an academic, often with their own Wikipedia page. Weirdly, there’s no one from the Perimeter Institute for Theoretical Physics or TRIUMF (a national physics laboratory and centre for particle acceleration) or from anywhere else in Canada, which may be due to academic specialty rather than an attempt to freeze out Canadian physicists. In any event, the conference academics are largely from the US (many of them from Caltech and Stanford) and from the UK.

The business people are a bit of a surprise. The BC Business Council and the Greater Vancouver Board of Trade? Frank Giustra, who first made his money with gold mines, then with Lionsgate Entertainment, and who continues to make a great deal of money with his equity investment company, Fiore Group? Terry Hui, Chief Executive Officer of Concord Pacific, a real estate development company? VanEdge Capital, an early-stage venture capital fund? A winery? Missing from this list is D-Wave Systems, Canada’s quantum calling card and local company. While their area of expertise is quantum computing, I’d still expect to see them present as sponsors.

The academics? These people are not cheap dates (flights, speaker’s fees, a room at the Bayshore, meals). This is a very expensive conference and $129 for lunch and a daypass is likely a heavily subsidized ticket.

Another surprise? No government money/sponsorship. I don’t recall seeing another academic conference held in Canada without any government participation.

Canadian quantum scene

A National Quantum Strategy was first announced in the 2021 Canadian federal budget and reannounced in the 2022 federal budget (see my April 19, 2022 posting for a few more budget details). Or, you may find this National Quantum Strategy Consultations: What We Heard Report more informative. There’s also a webpage for general information about the National Quantum Strategy.

As evidence of action, the Natural Science and Engineering Research Council of Canada (NSERC) announced new grant programmes made possible by the National Quantum Strategy in a March 15, 2022 news release,

Quantum science and innovation are giving rise to promising advances in communications, computing, materials, sensing, health care, navigation and other key areas. The Government of Canada is committed to helping shape the future of quantum technology by supporting Canada’s quantum sector and establishing leadership in this emerging and transformative domain.

Today [March 15, 2022], the Honourable François-Philippe Champagne, Minister of Innovation, Science and Industry, is announcing an investment of $137.9 million through the Natural Sciences and Engineering Research Council of Canada’s (NSERC) Collaborative Research and Training Experience (CREATE) grants and Alliance grants. These grants are an important next step in advancing the National Quantum Strategy and will reinforce Canada’s research strengths in quantum science while also helping to develop a talent pipeline to support the growth of a strong quantum community.

Quick facts

Budget 2021 committed $360 million to build the foundation for a National Quantum Strategy, enabling the Government of Canada to build on previous investments in the sector to advance the emerging field of quantum technologies. The quantum sector is key to fuelling Canada’s economy, long-term resilience and growth, especially as technologies mature and more sectors harness quantum capabilities.

Development of quantum technologies offers job opportunities in research and science, software and hardware engineering and development, manufacturing, technical support, sales and marketing, business operations and other fields.

The Government of Canada also invested more than $1 billion in quantum research and science from 2009 to 2020—mainly through competitive granting agency programs, including Natural Sciences and Engineering Research Council of Canada programs and the Canada First Research Excellence Fund—to help establish Canada as a global leader in quantum science.

In addition, the government has invested in bringing new quantum technologies to market, including investments through Canada’s regional development agencies, the Strategic Innovation Fund and the National Research Council of Canada’s Industrial Research Assistance Program.

Bank of Canada, cryptocurrency, and quantum computing

My July 25, 2022 posting features a special project, Note: All emphases are mine,

… (from an April 14, 2022 HKA Marketing Communications news release on EurekAlert),

Multiverse Computing, a global leader in quantum computing solutions for the financial industry and beyond with offices in Toronto and Spain, today announced it has completed a proof-of-concept project with the Bank of Canada through which the parties used quantum computing to simulate the adoption of cryptocurrency as a method of payment by non-financial firms.

“We are proud to be a trusted partner of the first G7 central bank to explore modelling of complex networks and cryptocurrencies through the use of quantum computing,” said Sam Mugel, CTO [Chief Technical Officer] at Multiverse Computing. “The results of the simulation are very intriguing and insightful as stakeholders consider further research in the domain. Thanks to the algorithm we developed together with our partners at the Bank of Canada, we have been able to model a complex system reliably and accurately given the current state of quantum computing capabilities.”

Multiverse Computing conducted its innovative work related to applying quantum computing for modelling complex economic interactions in a research project with the Bank of Canada. The project explored quantum computing technology as a way to simulate complex economic behaviour that is otherwise very difficult to simulate using traditional computational techniques.

By implementing this solution using D-Wave’s annealing quantum computer, the simulation was able to tackle financial networks as large as 8-10 players, with up to 2^90 possible network configurations. Note that classical computing approaches cannot solve large networks of practical relevance as a 15-player network requires as many resources as there are atoms in the universe.
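The scale of those figures is easy to check under one simple assumption. Here’s a minimal sketch, assuming (my guess, since the news release doesn’t define the term) that a “network configuration” means each ordered pair of players either has a link or not:

```python
# Sketch of the combinatorial blow-up behind the press release's figures.
# Assumption (mine, not stated in the release): a "network configuration"
# assigns each ordered pair of distinct players a link that is on or off.

import math

def network_configurations(n_players: int) -> int:
    """Count on/off link configurations over all ordered pairs of players."""
    links = n_players * (n_players - 1)  # ordered pairs of distinct players
    return 2 ** links

# 10 players give 10 * 9 = 90 possible links, i.e. 2^90 configurations,
# matching the "up to 2^90" figure quoted for the 8-10 player simulations.
print(network_configurations(10) == 2 ** 90)  # True

# 15 players give 2^210 configurations, roughly 10^63.
print(round(math.log10(network_configurations(15))))  # 63
```

Under that counting rule, 10 players yield exactly the quoted 2^90, and 15 players yield 2^210, about 10^63. That’s somewhat short of the ~10^80 atoms usually estimated for the observable universe, but the broader point holds: the count grows so fast that classical brute-force enumeration is hopeless well before 15 players.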

Quantum Technologies and the Council of Canadian Academies (CCA)

In a May 26, 2022 blog posting the CCA announced its Expert Panel on Quantum Technologies (they will be issuing a Quantum Technologies report),

The emergence of quantum technologies will impact all sectors of the Canadian economy, presenting significant opportunities but also risks. At the request of the National Research Council of Canada (NRC) and Innovation, Science and Economic Development Canada (ISED), the Council of Canadian Academies (CCA) has formed an Expert Panel to examine the impacts, opportunities, and challenges quantum technologies present for Canadian industry, governments, and Canadians. Raymond Laflamme, O.C., FRSC, Canada Research Chair in Quantum Information and Professor in the Department of Physics and Astronomy at the University of Waterloo, will serve as Chair of the Expert Panel.

“Quantum technologies have the potential to transform computing, sensing, communications, healthcare, navigation and many other areas,” said Dr. Laflamme. “But a close examination of the risks and vulnerabilities of these technologies is critical, and I look forward to undertaking this crucial work with the panel.”

As Chair, Dr. Laflamme will lead a multidisciplinary group with expertise in quantum technologies, economics, innovation, ethics, and legal and regulatory frameworks. The Panel will answer the following question:

In light of current trends affecting the evolution of quantum technologies, what impacts, opportunities and challenges do these present for Canadian industry, governments and Canadians more broadly?

The Expert Panel on Quantum Technologies:

Raymond Laflamme, O.C., FRSC (Chair), Canada Research Chair in Quantum Information; the Mike and Ophelia Lazaridis John von Neumann Chair in Quantum Information; Professor, Department of Physics and Astronomy, University of Waterloo

Sally Daub, Founder and Managing Partner, Pool Global Partners

Shohini Ghose, Professor, Physics and Computer Science, Wilfrid Laurier University; NSERC Chair for Women in Science and Engineering

Paul Gulyas, Senior Innovation Executive, IBM Canada

Mark W. Johnson, Senior Vice-President, Quantum Technologies and Systems Products, D-Wave Systems

Elham Kashefi, Professor of Quantum Computing, School of Informatics, University of Edinburgh; Directeur de recherche au CNRS, LIP6 Sorbonne Université

Mauritz Kop, Fellow and Visiting Scholar, Stanford Law School, Stanford University

Dominic Martin, Professor, Département d’organisation et de ressources humaines, École des sciences de la gestion, Université du Québec à Montréal

Darius Ornston, Associate Professor, Munk School of Global Affairs and Public Policy, University of Toronto

Barry Sanders, FRSC, Director, Institute for Quantum Science and Technology, University of Calgary

Eric Santor, Advisor to the Governor, Bank of Canada

Christian Sarra-Bournet, Quantum Strategy Director and Executive Director, Institut quantique, Université de Sherbrooke

Stephanie Simmons, Associate Professor, Canada Research Chair in Quantum Nanoelectronics, and CIFAR Quantum Information Science Fellow, Department of Physics, Simon Fraser University

Jacqueline Walsh, Instructor; Director, initio Technology & Innovation Law Clinic, Dalhousie University

You’ll note that both the Bank of Canada and D-Wave Systems are represented on this expert panel.

The CCA Quantum Technologies report (in progress) page can be found here.

Does it mean anything?

Since I only skim the top layer of information (disparagingly described as ‘high level’ by the technology types I used to work with), all I can say is there’s a remarkable level of interest from various groups who are self-organizing. (The interest is international as well. I found the International Society for Quantum Gravity [ISQG], which had its first meeting in 2021.)

I don’t know what the purpose is, other than that the Canadian focus seems to be on money. The board of trade and business council have no interest in primary research, and the federal government’s national quantum strategy is part of Innovation, Science and Economic Development (ISED) Canada’s mandate. You’ll notice ‘science’ is sandwiched between ‘innovation’, which is often code for business, and ‘economic development’.

The Bank of Canada’s monetary interests are quite obvious.

The Perimeter Institute mentioned earlier was founded by Mike Lazaridis. From his Wikipedia entry (Note: Links have been removed),

… a Canadian businessman [emphasis mine], investor in quantum computing technologies, and founder of BlackBerry, which created and manufactured the BlackBerry wireless handheld device. With an estimated net worth of US$800 million (as of June 2011), Lazaridis was ranked by Forbes as the 17th wealthiest Canadian and 651st in the world.[4]

In 2000, Lazaridis founded and donated more than $170 million to the Perimeter Institute for Theoretical Physics.[11][12] He and his wife Ophelia founded and donated more than $100 million to the Institute for Quantum Computing at the University of Waterloo in 2002.[8]

That Institute for Quantum Computing? There’s an interesting connection. Raymond Laflamme, the chair for the CCA expert panel, was its director for a number of years and he’s closely affiliated with the Perimeter Institute. (I’m not suggesting anything nefarious or dodgy. It’s a small community in Canada and relationships tend to be tightly interlaced.) I’m surprised he’s not part of the quantum mechanics and gravity conference but that could have something to do with scheduling.

One last interesting bit about Laflamme, from his Wikipedia entry (Note: Links have been removed),

As Stephen Hawking’s PhD student, he first became famous for convincing Hawking that time does not reverse in a contracting universe, along with Don Page. Hawking told the story of how this happened in his famous book A Brief History of Time in the chapter The Arrow of Time.[3] Later on Laflamme made a name for himself in quantum computing and quantum information theory, which is what he is famous for today.

Getting back to the Quantum Mechanics & Gravity: Marrying Theory & Experiment, the public day looks pretty interesting and when is the next time you’ll have a chance to hobnob with all those Nobel Laureates?