Category Archives: wearable electronics

Bioinspired ‘smart’ materials a step towards soft robotics and electronics

An October 13, 2022 news item on Nanowerk describes some new work from the University of Texas at Austin,

Inspired by living things from trees to shellfish, researchers at The University of Texas at Austin set out to create a plastic much like many life forms that are hard and rigid in some places and soft and stretchy in others.

Their success — a first, using only light and a catalyst to change properties such as hardness and elasticity in molecules of the same type — has brought about a new material that is 10 times as tough as natural rubber and could lead to more flexible electronics and robotics.

An October 13, 2022 University of Texas at Austin news release (also on EurekAlert), which originated the news item, delves further into the work,

“This is the first material of its type,” said Zachariah Page, assistant professor of chemistry and corresponding author on the paper. “The ability to control crystallization, and therefore the physical properties of the material, with the application of light is potentially transformative for wearable electronics or actuators in soft robotics.”

Scientists have long sought to mimic the properties of living structures, like skin and muscle, with synthetic materials. In living organisms, structures often combine attributes such as strength and flexibility with ease. When using a mix of different synthetic materials to mimic these attributes, materials often fail, coming apart and ripping at the junctures between different materials.

“Oftentimes, when bringing materials together, particularly if they have very different mechanical properties, they want to come apart,” Page said. Page and his team were able to control and change the structure of a plastic-like material, using light to alter how firm or stretchy the material would be.

Chemists started with a monomer, a small molecule that binds with others like it to form the building blocks for larger structures called polymers that were similar to the polymer found in the most commonly used plastic. After testing a dozen catalysts, they found one that, when added to their monomer and shown visible light, resulted in a semicrystalline polymer similar to those found in existing synthetic rubber. A harder and more rigid material was formed in the areas the light touched, while the unlit areas retained their soft, stretchy properties.

Because the substance is made of one material with different properties, it was stronger and could be stretched farther than most mixed materials.
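As a loose illustration of the patterning idea (not the actual chemistry), the process can be pictured as a light mask that switches regions of a single material between a soft state and a hard state. The moduli below are hypothetical placeholders, not values from the paper.

```python
# Illustrative sketch only: models photopatterning as a mask that switches
# regions of one material between a soft, stretchy state and a hard,
# semicrystalline state. Moduli values are hypothetical placeholders.

SOFT_MODULUS_MPA = 1.0    # hypothetical stiffness of unlit, stretchy regions
HARD_MODULUS_MPA = 100.0  # hypothetical stiffness of light-cured regions

def pattern_stiffness(light_mask):
    """Map a 2D light-exposure mask (True = illuminated) to a stiffness map."""
    return [[HARD_MODULUS_MPA if lit else SOFT_MODULUS_MPA for lit in row]
            for row in light_mask]

mask = [
    [True,  True,  False],
    [False, True,  False],
]
stiffness = pattern_stiffness(mask)
print(stiffness)
```

The point of the sketch is that both stiffness values come from one material, so there is no weak junction between dissimilar materials to rip apart.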

The reaction takes place at room temperature, the monomer and catalyst are commercially available, and researchers used inexpensive blue LEDs as the light source in the experiment. The reaction also takes less than an hour and minimizes use of any hazardous waste, which makes the process rapid, inexpensive, energy efficient and environmentally benign.

The researchers will next seek to develop more objects with the material to continue to test its usability.

“We are looking forward to exploring methods of applying this chemistry towards making 3D objects containing both hard and soft components,” said first author Adrian Rylski, a doctoral student at UT Austin.

The team envisions the material could be used as a flexible foundation to anchor electronic components in medical devices or wearable tech. In robotics, strong and flexible materials are desirable to improve movement and durability.

Here’s a link to and a citation for the paper,

Polymeric multimaterials by photochemical patterning of crystallinity by Adrian K. Rylski, Henry L. Cater, Keldy S. Mason, Marshall J. Allen, Anthony J. Arrowood, Benny D. Freeman, Gabriel E. Sanoja, and Zachariah A. Page. Science 13 Oct 2022 Vol 378, Issue 6616 pp. 211-215 DOI: 10.1126/science.add6975

This paper is behind a paywall.

Enhance or weaken memory with stretchy, bioinspired synaptic transistor

This news is intriguing since researchers usually want to enhance memory, not weaken it. Interestingly, this October 3, 2022 news item on ScienceDaily doesn’t immediately answer why you might want to weaken memory,

Robotics and wearable devices might soon get a little smarter with the addition of a stretchy, wearable synaptic transistor developed by Penn State engineers. The device works like neurons in the brain to send signals to some cells and inhibit others in order to enhance and weaken the devices’ memories.

Led by Cunjiang Yu, Dorothy Quiggle Career Development Associate Professor of Engineering Science and Mechanics and associate professor of biomedical engineering and of materials science and engineering, the team designed the synaptic transistor to be integrated in robots or wearables and use artificial intelligence to optimize functions. The details were published on Sept. 29 [2022] in Nature Electronics.

“Mirroring the human brain, robots and wearable devices using the synaptic transistor can use its artificial neurons to ‘learn’ and adapt their behaviors,” Yu said. “For example, if we burn our hand on a stove, it hurts, and we know to avoid touching it next time. The same results will be possible for devices that use the synaptic transistor, as the artificial intelligence is able to ‘learn’ and adapt to its environment.”

A September 29, 2022 Pennsylvania State University (Penn State) news release (also on EurekAlert but published on October 3, 2022) by Mariah Chuprinski, which originated the news item, explains why you might want to weaken memory,

According to Yu, the artificial neurons in the device were designed to perform like neurons in the ventral tegmental area, a tiny segment of the human brain located in the uppermost part of the brain stem. Neurons process and transmit information by releasing neurotransmitters at their synapses, typically located at the neural cell ends. Excitatory neurotransmitters trigger the activity of other neurons and are associated with enhancing memories, while inhibitory neurotransmitters reduce the activity of other neurons and are associated with weakening memories.

“Unlike all other areas of the brain, neurons in the ventral tegmental area are capable of releasing both excitatory and inhibitory neurotransmitters at the same time,” Yu said. “By designing the synaptic transistor to operate with both synaptic behaviors simultaneously, fewer transistors are needed [emphasis mine] compared to conventional integrated electronics technology, which simplifies the system architecture and allows the device to conserve energy.”
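The benefit Yu describes can be pictured with a toy model: a single reconfigurable synapse whose weight can move in both directions, instead of pairing a one-directional excitatory element with a one-directional inhibitory one. The update rule and magnitudes below are invented for illustration and are not from the paper.

```python
# Toy model: one reconfigurable synapse whose weight can be both
# potentiated (excitatory, "enhance memory") and depressed (inhibitory,
# "weaken memory"). Update rule and magnitudes are illustrative only.

class ReconfigurableSynapse:
    def __init__(self, weight=0.5):
        self.weight = weight

    def stimulate(self, excitatory, amount=0.1):
        """Move the same device's weight up or down, clamped to [0, 1]."""
        delta = amount if excitatory else -amount
        self.weight = min(1.0, max(0.0, self.weight + delta))

syn = ReconfigurableSynapse()
syn.stimulate(excitatory=True)   # strengthens the connection
syn.stimulate(excitatory=False)  # weakens it again
print(syn.weight)                # back to ~0.5
```

Because one device handles both signs of update, a circuit needs roughly half the elements that separate excitatory and inhibitory devices would require, which is the simplification Yu points to.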

To model soft, stretchy biological tissues, the researchers used stretchable bilayer semiconductor materials to fabricate the device, allowing it to stretch and twist while in use, according to Yu. Conventional transistors, on the other hand, are rigid and will break when deformed.

“The transistor is mechanically deformable and functionally reconfigurable, yet still retains its functions when stretched extensively,” Yu said. “It can attach to a robot or wearable device to serve as their outermost skin.”

In addition to Yu, other contributors include Hyunseok Shim and Shubham Patel, Penn State Department of Engineering Science and Mechanics; Yongcao Zhang, the University of Houston Materials Science and Engineering Program; Faheem Ershad, Penn State Department of Biomedical Engineering and University of Houston Department of Biomedical Engineering; Binghao Wang, School of Electronic Science and Engineering, Southeast University [Note: There’s one in Bangladesh, one in China, and there’s a Southeastern University in Florida, US] and Department of Chemistry and the Materials Research Center, Northwestern University; Zhihua Chen, Flexterra Inc.; Tobin J. Marks, Department of Chemistry and the Materials Research Center, Northwestern University; Antonio Facchetti, Flexterra Inc. and Northwestern University’s Department of Chemistry and Materials Research Center.

Here’s a link to and a citation for the paper,

An elastic and reconfigurable synaptic transistor based on a stretchable bilayer semiconductor by Hyunseok Shim, Faheem Ershad, Shubham Patel, Yongcao Zhang, Binghao Wang, Zhihua Chen, Tobin J. Marks, Antonio Facchetti & Cunjiang Yu. Nature Electronics (2022) DOI: https://doi.org/10.1038/s41928-022-00836-5 Published: 29 September 2022

This paper is behind a paywall.

Skin-like computing device analyzes health data with brain-mimicking artificial intelligence (a neuromorphic chip)

The wearable neuromorphic chip, made of stretchy semiconductors, can implement artificial intelligence (AI) to process massive amounts of health information in real time. Above, Asst. Prof. Sihong Wang shows a single neuromorphic device with three electrodes. (Photo by John Zich)

Does everything have to be ‘brainy’? Read on for the latest on ‘brainy’ devices.

An August 4, 2022 University of Chicago news release (also on EurekAlert) describes work on a stretchable neuromorphic chip, Note: Links have been removed,

It’s a brainy Band-Aid, a smart watch without the watch, and a leap forward for wearable health technologies. Researchers at the University of Chicago’s Pritzker School of Molecular Engineering (PME) have developed a flexible, stretchable computing chip that processes information by mimicking the human brain. The device, described in the journal Matter, aims to change the way health data is processed.

“With this work we’ve bridged wearable technology with artificial intelligence and machine learning to create a powerful device which can analyze health data right on our own bodies,” said Sihong Wang, a materials scientist and Assistant Professor of Molecular Engineering.

Today, getting an in-depth profile about your health requires a visit to a hospital or clinic. In the future, Wang said, people’s health could be tracked continuously by wearable electronics that can detect disease even before symptoms appear. Unobtrusive, wearable computing devices are one step toward making this vision a reality. 

A Data Deluge
The future of healthcare that Wang—and many others—envision includes wearable biosensors to track complex indicators of health including levels of oxygen, sugar, metabolites and immune molecules in people’s blood. One of the keys to making these sensors feasible is their ability to conform to the skin. As such skin-like wearable biosensors emerge and begin collecting more and more information in real-time, the analysis becomes exponentially more complex. A single piece of data must be put into the broader perspective of a patient’s history and other health parameters.

Today’s smartphones are not capable of the kind of complex analysis required to learn a patient’s baseline health measurements and pick out important signals of disease. However, cutting-edge artificial intelligence platforms that integrate machine learning to identify patterns in extremely complex datasets can do a better job. But sending information from a device to a centralized AI location is not ideal.

“Sending health data wirelessly is slow and presents a number of privacy concerns,” he said. “It is also incredibly energy inefficient; the more data we start collecting, the more energy these transmissions will start using.”

Skin and Brains
Wang’s team set out to design a chip that could collect data from multiple biosensors and draw conclusions about a person’s health using cutting-edge machine learning approaches. Importantly, they wanted it to be wearable on the body and integrate seamlessly with skin.

“With a smart watch, there’s always a gap,” said Wang. “We wanted something that can achieve very intimate contact and accommodate the movement of skin.”

Wang and his colleagues turned to polymers, which can be used to build semiconductors and electrochemical transistors but also have the ability to stretch and bend. They assembled polymers into a device that allowed the artificial-intelligence-based analysis of health data. Rather than work like a typical computer, the chip— called a neuromorphic computing chip—functions more like a human brain, able to both store and analyze data in an integrated way.

Testing the Technology
To test the utility of their new device, Wang’s group used it to analyze electrocardiogram (ECG) data representing the electrical activity of the human heart. They trained the device to classify ECGs into five categories—healthy or four types of abnormal signals. Then, they tested it on new ECGs. Whether or not the chip was stretched or bent, they showed, it could accurately classify the heartbeats.
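The classification task described above can be pictured with an ordinary nearest-centroid classifier over five classes. The 2-D "features" and centroids below are synthetic stand-ins; they are not the paper's ECG data, and the chip's actual on-device learning method is not reproduced here.

```python
import math

# Toy stand-in for the five-way ECG classification described above:
# a nearest-centroid classifier over synthetic 2-D "features".
# Neither the feature values nor the method come from the paper.

CENTROIDS = {
    "healthy":    (0.0, 0.0),
    "abnormal_1": (1.0, 0.0),
    "abnormal_2": (0.0, 1.0),
    "abnormal_3": (1.0, 1.0),
    "abnormal_4": (2.0, 2.0),
}

def classify(feature):
    """Return the label of the class centroid closest to the feature point."""
    return min(CENTROIDS, key=lambda label: math.dist(feature, CENTROIDS[label]))

print(classify((0.1, -0.1)))  # -> healthy
```

The neuromorphic chip's advantage is that an analogous decision is computed where the data is stored, on the skin, rather than after a wireless round trip to a phone or server.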

More work is needed to test the power of the device in deducing patterns of health and disease. But eventually, it could be used either to send patients or clinicians alerts, or to automatically tweak medications.

“If you can get real-time information on blood pressure, for instance, this device could very intelligently make decisions about when to adjust the patient’s blood pressure medication levels,” said Wang. That kind of automatic feedback loop is already used by some implantable insulin pumps, he added.

He is already planning new iterations of the device to expand both the types of devices with which it can integrate and the types of machine learning algorithms it uses.

“Integration of artificial intelligence with wearable electronics is becoming a very active landscape,” said Wang. “This is not finished research, it’s just a starting point.”

Here’s a link to and a citation for the paper,

Intrinsically stretchable neuromorphic devices for on-body processing of health data with artificial intelligence by Shilei Dai, Yahao Dai, Zixuan Zhao, Jie Xu, Jia Huang, Sihong Wang. Matter DOI: https://doi.org/10.1016/j.matt.2022.07.016 Published: August 04, 2022

This paper is behind a paywall.

Implantable living pharmacy

I stumbled across a very interesting US Defense Advanced Research Projects Agency (DARPA) project (from an August 30, 2021 posting on Northwestern University’s Rivnay Lab [a laboratory for organic bioelectronics] blog),

Our lab has received a cooperative agreement with DARPA to develop a wireless, fully implantable ‘living pharmacy’ device that could help regulate human sleep patterns. The project falls under the Advanced Acclimation and Protection Tool for Environmental Readiness (ADAPTER) program in DARPA’s Biotechnology Office (BTO), meant to address physical challenges of travel, such as jetlag and fatigue.

The device, called NTRAIN (Normalizing Timing of Rhythms Across Internal Networks of Circadian Clocks), would control the body’s circadian clock, reducing the time it takes for a person to recover from disrupted sleep/wake cycles by as much as half the usual time.
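To put the "as much as half" claim in rough numbers: a common rule of thumb holds that unaided re-entrainment takes roughly one day per time zone crossed. That heuristic is a general one, not a figure from the project, so treat the arithmetic below as a back-of-envelope sketch.

```python
# Back-of-envelope arithmetic for the claimed benefit. The one-day-per-
# time-zone rule of thumb is a common heuristic, not a project figure,
# and the 2x speedup is the upper end of the claim quoted above.

def recovery_days(zones_crossed, days_per_zone=1.0, speedup=2.0):
    """Estimated days to re-entrain: (unaided, with assumed 2x speedup)."""
    baseline = zones_crossed * days_per_zone
    return baseline, baseline / speedup

baseline, assisted = recovery_days(9)  # e.g. US East Coast to East Asia
print(baseline, assisted)  # 9.0 days unaided vs 4.5 with the device
```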

The project spans 5 institutions including Northwestern, Rice University, Carnegie Mellon, University of Minnesota, and Blackrock Neurotech.

Prior to the Aug. 30, 2021 posting, Amanda Morris wrote a May 13, 2021 article for Northwestern NOW (university magazine), which provides more details about the project, Note: A link has been removed,

The first phase of the highly interdisciplinary program will focus on developing the implant. The second phase, contingent on the first, will validate the device. If that milestone is met, then researchers will test the device in human trials, as part of the third phase. The full funding corresponds to $33 million over four-and-a-half years. 

Nicknamed the “living pharmacy,” the device could be a powerful tool for military personnel, who frequently travel across multiple time zones, and shift workers including first responders, who vacillate between overnight and daytime shifts.

Combining synthetic biology with bioelectronics, the team will engineer cells to produce the same peptides that the body makes to regulate sleep cycles, precisely adjusting timing and dose with bioelectronic controls. When the engineered cells are exposed to light, they will generate precisely dosed peptide therapies. 

“This control system allows us to deliver a peptide of interest on demand, directly into the bloodstream,” said Northwestern’s Jonathan Rivnay, principal investigator of the project. “No need to carry drugs, no need to inject therapeutics and — depending on how long we can make the device last — no need to refill the device. It’s like an implantable pharmacy on a chip that never runs out.” 

Beyond controlling circadian rhythms, the researchers believe this technology could be modified to release other types of therapies with precise timing and dosing for potentially treating pain and disease. The DARPA program also will help researchers better understand sleep/wake cycles, in general.

“The experiments carried out in these studies will enable new insights into how internal circadian organization is maintained,” said Turek [Fred W. Turek], who co-leads the sleep team with Vitaterna [Martha Hotz Vitaterna]. “These insights will lead to new therapeutic approaches for sleep disorders as well as many other physiological and mental disorders, including those associated with aging where there is often a spontaneous breakdown in temporal organization.” 

For those who like to dig even deeper, Dieynaba Young’s June 17, 2021 article for Smithsonian Magazine (GetPocket.com link to article) provides greater context and greater satisfaction, Note: Links have been removed,

In 1926, Fritz Kahn completed Man as Industrial Palace, the preeminent lithograph in his five-volume publication The Life of Man. The illustration shows a human body bustling with tiny factory workers. They cheerily operate a brain filled with switchboards, circuits and manometers. Below their feet, an ingenious network of pipes, chutes and conveyer belts make up the blood circulatory system. The image epitomizes a central motif in Kahn’s oeuvre: the parallel between human physiology and manufacturing, or the human body as a marvel of engineering.

An apparatus in the embryonic stage of development at the time of this writing in June of 2021—the so-called “implantable living pharmacy”—could have easily originated in Kahn’s fervid imagination. The concept is being developed by the Defense Advanced Research Projects Agency (DARPA) in conjunction with several universities, notably Northwestern and Rice. Researchers envision a miniaturized factory, tucked inside a microchip, that will manufacture pharmaceuticals from inside the body. The drugs will then be delivered to precise targets at the command of a mobile application. …

The implantable living pharmacy, which is still in the “proof of concept” stage of development, is actually envisioned as two separate devices—a microchip implant and an armband. The implant will contain a layer of living synthetic cells, along with a sensor that measures temperature, a short-range wireless transmitter and a photo detector. The cells are sourced from a human donor and reengineered to perform specific functions. They’ll be mass produced in the lab, and slathered onto a layer of tiny LED lights.

The microchip will be set with a unique identification number and encryption key, then implanted under the skin in an outpatient procedure. The chip will be controlled by a battery-powered hub attached to an armband. That hub will receive signals transmitted from a mobile app.

If a soldier wishes to reset their internal clock, they’ll simply grab their phone, log onto the app and enter their upcoming itinerary—say, a flight departing at 5:30 a.m. from Arlington, Virginia, and arriving 16 hours later at Fort Buckner in Okinawa, Japan. Using short-range wireless communications, the hub will receive the signal and activate the LED lights inside the chip. The lights will shine on the synthetic cells, stimulating them to generate two compounds that are naturally produced in the body. The compounds will be released directly into the bloodstream, heading towards targeted locations, such as a tiny, centrally-located structure in the brain called the suprachiasmatic nucleus (SCN) that serves as master pacemaker of the circadian rhythm. Whatever the target location, the flow of biomolecules will alter the natural clock. When the soldier arrives in Okinawa, their body will be perfectly in tune with local time.

The synthetic cells will be kept isolated from the host’s immune system by a membrane constructed of novel biomaterials, allowing only nutrients and oxygen in and only the compounds out. Should anything go wrong, the user could swallow a pill that would kill the cells inside the chip only, leaving the rest of the body unaffected.

If you have the time, I recommend reading Young’s June 17, 2021 Smithsonian Magazine article (GetPocket.com link to article) in its entirety. Young goes on to discuss, hacking, malware, and ethical/societal issues and more.

There is an animation of Kahn’s original poster in a June 23, 2011 posting on openculture.com (also found on Vimeo; Der Mensch als Industriepalast [Man as Industrial Palace]).

Credits: Idea & Animation: Henning M. Lederer / led-r-r.net; Sound-Design: David Indge; and original poster art: Fritz Kahn.

Synaptic transistors for brainlike computers based on (more environmentally friendly) graphene

An August 9, 2022 news item on ScienceDaily describes research investigating materials other than silicon for neuromorphic (brainlike) computing purposes,

Computers that think more like human brains are inching closer to mainstream adoption. But many unanswered questions remain. Among the most pressing: what types of materials can serve as the best building blocks to unlock the potential of this new style of computing?

For most traditional computing devices, silicon remains the gold standard. However, there is a movement to use more flexible, efficient and environmentally friendly materials for these brain-like devices.

In a new paper, researchers from The University of Texas at Austin developed synaptic transistors for brain-like computers using the thin, flexible material graphene. These transistors are similar to the synapses in the brain that connect neurons to each other.

An August 8, 2022 University of Texas at Austin news release (also on EurekAlert but published August 9, 2022), which originated the news item, provides more detail about the research,

“Computers that think like brains can do so much more than today’s devices,” said Jean Anne Incorvia, an assistant professor in the Cockrell School of Engineering’s Department of Electrical and Computer Engineering and the lead author on the paper published today in Nature Communications. “And by mimicking synapses, we can teach these devices to learn on the fly, without requiring huge training methods that take up so much power.”

The Research: A combination of graphene and nafion, a polymer membrane material, make up the backbone of the synaptic transistor. Together, these materials demonstrate key synaptic-like behaviors — most importantly, the ability for the pathways to strengthen over time as they are used more often, a type of neural muscle memory. In computing, this means that devices will be able to get better at tasks like recognizing and interpreting images over time and do it faster.
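The "pathways strengthen with use" behavior can be sketched as a weight that gets a saturating boost on every stimulation event and relaxes slightly between events. The constants below are illustrative, not measured device values.

```python
# Toy potentiation/decay model of the "neural muscle memory" behavior:
# each use strengthens a pathway, and strength relaxes a little between
# uses. Constants are illustrative, not measured device values.

def stimulate(weight, pulses, gain=0.2, decay=0.95, w_max=1.0):
    """Apply `pulses` stimulation events, decaying a little before each one."""
    for _ in range(pulses):
        weight *= decay                    # slow relaxation between events
        weight += gain * (w_max - weight)  # saturating strengthening
    return weight

rarely_used = stimulate(0.1, pulses=2)
often_used = stimulate(0.1, pulses=20)
print(rarely_used < often_used)  # frequently exercised pathway ends up stronger
```

In computing terms, a pathway exercised often (say, by repeated image-recognition queries) ends up with a stronger weight and therefore a faster, more confident response, which is the learning-without-retraining behavior Incorvia describes.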

Another important finding is that these transistors are biocompatible, which means they can interact with living cells and tissue. That is key for potential applications in medical devices that come into contact with the human body. Most materials used for these early brain-like devices are toxic, so they would not be able to contact living cells in any way.

Why It Matters: With new high-tech concepts like self-driving cars, drones and robots, we are reaching the limits of what silicon chips can efficiently do in terms of data processing and storage. For these next-generation technologies, a new computing paradigm is needed. Neuromorphic devices mimic processing capabilities of the brain, a powerful computer for immersive tasks.

“Biocompatibility, flexibility, and softness of our artificial synapses is essential,” said Dmitry Kireev, a post-doctoral researcher who co-led the project. “In the future, we envision their direct integration with the human brain, paving the way for futuristic brain prosthesis.”

Will It Really Happen: Neuromorphic platforms are starting to become more common. Leading chipmakers such as Intel and Samsung have either produced neuromorphic chips already or are in the process of developing them. However, current chip materials place limitations on what neuromorphic devices can do, so academic researchers are working hard to find the perfect materials for soft brain-like computers.

“It’s still a big open space when it comes to materials; it hasn’t been narrowed down to the next big solution to try,” Incorvia said. “And it might not be narrowed down to just one solution, with different materials making more sense for different applications.”

The Team: The research was led by Incorvia and Deji Akinwande, professor in the Department of Electrical and Computer Engineering. The two have collaborated many times in the past, and Akinwande is a leading expert in graphene, using it in multiple research breakthroughs, most recently as part of a wearable electronic tattoo for blood pressure monitoring.

The idea for the project was conceived by Samuel Liu, a Ph.D. student and first author on the paper, in a class taught by Akinwande. Kireev then suggested the specific project. Harrison Jin, an undergraduate electrical and computer engineering student, measured the devices and analyzed data.

The team collaborated with T. Patrick Xiao and Christopher Bennett of Sandia National Laboratories, who ran neural network simulations and analyzed the resulting data.

Here’s a link to and a citation for the ‘graphene transistor’ paper,

Metaplastic and energy-efficient biocompatible graphene artificial synaptic transistors for enhanced accuracy neuromorphic computing by Dmitry Kireev, Samuel Liu, Harrison Jin, T. Patrick Xiao, Christopher H. Bennett, Deji Akinwande & Jean Anne C. Incorvia. Nature Communications volume 13, Article number: 4386 (2022) DOI: https://doi.org/10.1038/s41467-022-32078-6 Published: 28 July 2022

This paper is open access.

Making longer lasting bandages with sound and bubbles

This research into longer lasting bandages described in an August 12, 2022 news item on phys.org comes from McGill University (Montréal, Canada)

Researchers have discovered that they can control the stickiness of adhesive bandages using ultrasound waves and bubbles. This breakthrough could lead to new advances in medical adhesives, especially in cases where adhesives are difficult to apply such as on wet skin.

“Bandages, glues, and stickers are common bioadhesives that are used at home or in clinics. However, they don’t usually adhere well on wet skin. It’s also challenging to control where they are applied and the strength and duration of the formed adhesion,” says McGill University Professor Jianyu Li, who led the research team of engineers, physicists, chemists, and clinicians.

Caption: Adhesive hydrogel applied on skin under ultrasound probe. Credit: Ran Huo and Jianyu Li

An August 12, 2022 McGill University news release (also on EurekAlert), which originated the news item, delves further into the work,

“We were surprised to find that by simply playing around with ultrasonic intensity, we can control very precisely the stickiness of adhesive bandages on many tissues,” says lead author Zhenwei Ma, a former student of Professor Li and now a Killam Postdoctoral Fellow at the University of British Columbia.

Ultrasound induced bubbles control stickiness

In collaboration with physicists Professor Outi Supponen and Claire Bourquard from the Institute of Fluid Dynamics at ETH Zurich, the team experimented with ultrasound induced microbubbles to make adhesives stickier. “The ultrasound induces many microbubbles, which transiently push the adhesives into the skin for stronger bioadhesion,” says Professor Supponen. “We can even use theoretical modeling to estimate exactly where the adhesion will happen.”

Their study, published in the journal Science, shows that the adhesives are compatible with living tissue in rats. The adhesives can also potentially be used to deliver drugs through the skin. “This paradigm-shifting technology will have great implications in many branches of medicine,” says University of British Columbia Professor Zu-hua Gao. “We’re very excited to translate this technology for applications in clinics for tissue repair, cancer therapy, and precision medicine.”

“By merging mechanics, materials and biomedical engineering, we envision the broad impact of our bioadhesive technology in wearable devices, wound management, and regenerative medicine,” says Professor Li, who is also a Canada Research Chair in Biomaterials and Musculoskeletal Health.

Here’s a link to and a citation for the paper,

Controlled tough bioadhesion mediated by ultrasound by Zhenwei Ma, Claire Bourquard, Qiman Gao, Shuaibing Jiang, Tristan De Iure-Grimmel, Ran Huo, Xuan Li, Zixin He, Zhen Yang, Galen Yang, Yixiang Wang, Edmond Lam, Zu-hua Gao, Outi Supponen and Jianyu Li. Science 11 Aug 2022 Vol 377, Issue 6607 pp. 751-755 DOI: 10.1126/science.abn8699

This paper is behind a paywall.

I haven’t seen this before but it seems that one of the journal’s editors decided to add a standalone paragraph to hype some of the other papers about adhesives in the issue,

A sound way to make it stick

Tissue adhesives play a role in temporary or permanent tissue repair, wound management, and the attachment of wearable electronics. However, it can be challenging to tailor the adhesive strength to ensure reversibility when desired and to maintain permeability. Ma et al. designed hydrogels made of polyacrylamide or poly(N-isopropylacrylamide) combined with alginate that are primed using a solution containing nanoparticles of chitosan, gelatin, or cellulose nanocrystals (see the Perspective by Es Sayed and Kamperman). The application of ultrasound causes cavitation that pushes the primer molecules into the tissue. The mechanical interlocking of the anchors eventually results in strong adhesion between hydrogel and tissue without the need for chemical bonding. Tests on porcine or rat skin showed enhanced adhesion energy and interfacial fatigue resistance with on-demand detachment. —MSL

I like the wordplay and am guessing that MSL is:

Marc S. Lavine
Senior Editor
Education: BASc, University of Toronto; PhD, University of Cambridge
Areas of responsibility: Reviews; materials science, biomaterials, engineering

Electrotactile rendering device virtualizes the sense of touch

I stumbled across this November 15, 2022 news item on Nanowerk highlighting work on virtualizing the sense of touch, originally announced in October 2022,

A collaborative research team co-led by City University of Hong Kong (CityU) has developed a wearable tactile rendering system, which can mimic the sensation of touch with high spatial resolution and a rapid response rate. The team demonstrated its application potential in a braille display, adding the sense of touch in the metaverse for functions such as virtual reality shopping and gaming, and potentially facilitating the work of astronauts, deep-sea divers and others who need to wear thick gloves.

Here’s what you’ll need to wear for this virtual tactile experience,

Caption: The new wearable tactile rendering system can mimic touch sensations with high spatial resolution and a rapid response rate. Credit: Robotics X Lab and City University of Hong Kong

An October 20, 2022 City University of Hong Kong (CityU) press release (also on EurekAlert), which originated the news item, delves further into the research,

“We can hear and see our families over a long distance via phones and cameras, but we still cannot feel or hug them. We are physically isolated by space and time, especially during this long-lasting pandemic,” said Dr Yang Zhengbao, Associate Professor in the Department of Mechanical Engineering of CityU, who co-led the study. “Although there has been great progress in developing sensors that digitally capture tactile features with high resolution and high sensitivity, we still lack a system that can effectively virtualize the sense of touch that can record and playback tactile sensations over space and time.”

In collaboration with Chinese tech giant Tencent’s Robotics X Laboratory, the team developed a novel electrotactile rendering system for displaying various tactile sensations with high spatial resolution and a rapid response rate. Their findings were published in the scientific journal Science Advances under the title “Super-resolution Wearable Electro-tactile Rendering System”.

Limitations in existing techniques

Existing techniques to reproduce tactile stimuli can be broadly classified into two categories: mechanical and electrical stimulation. By applying a localised mechanical force or vibration on the skin, mechanical actuators can elicit stable and continuous tactile sensations. However, they tend to be bulky, limiting the spatial resolution when integrated into a portable or wearable device. Electrotactile stimulators, in contrast, which evoke touch sensations in the skin at the location of the electrode by passing a local electric current through the skin, can be light and flexible while offering higher resolution and a faster response. But most of them rely on high voltage direct-current (DC) pulses (up to hundreds of volts) to penetrate the stratum corneum, the outermost layer of the skin, to stimulate the receptors and nerves, which poses a safety concern. The tactile rendering resolution also needed improvement.

The latest electro-tactile actuator developed by the team is very thin and flexible and can be easily integrated into a finger cot. This fingertip wearable device can display different tactile sensations, such as pressure, vibration, and texture roughness in high fidelity. Instead of using DC pulses, the team developed a high-frequency alternating stimulation strategy and succeeded in lowering the operating voltage to below 30 V, ensuring the tactile rendering is safe and comfortable.

They also proposed a novel super-resolution strategy that can render tactile sensation at locations between physical electrodes, instead of only at the electrode locations. This increases the spatial resolution of their stimulators by more than three times (from 25 to 105 points), giving the user a more realistic tactile perception.

Tactile stimuli with high spatial resolution

“Our new system can elicit tactile stimuli with both high spatial resolution (76 dots/cm2), similar to the density of related receptors in the human skin, and a rapid response rate (4 kHz),” said Mr Lin Weikang, a PhD student at CityU, who made and tested the device.

The team ran different tests to show various application possibilities of this new wearable electrotactile rendering system. For example, they proposed a new Braille strategy that is much easier for people with a visual impairment to learn.

The proposed strategy breaks down letters and numerical digits into individual strokes, ordered in the same way they are written. By wearing the new electrotactile rendering system on a fingertip, the user can recognise the letter presented by feeling the direction and the sequence of the strokes with the fingertip sensor. “This would be particularly useful for people who lose their eyesight later in life, allowing them to continue to read and write using the same alphabetic system they are used to, without the need to learn the whole Braille dot system,” said Dr Yang.

Enabling touch in the metaverse

Second, the new system is well suited for VR/AR [virtual reality/augmented reality] applications and games, adding the sense of touch to the metaverse. The electrodes can be made highly flexible and scalable to cover larger areas, such as the palm. The team demonstrated that a user can virtually sense the texture of clothes in a virtual fashion shop. The user also experiences an itchy sensation in the fingertips when being licked by a VR cat. When stroking a virtual cat’s fur, the user can feel a variance in the roughness as the strokes change direction and speed.

The system can also be useful in transmitting fine tactile details through thick gloves. The team successfully integrated the thin, light electrodes of the electrotactile rendering system into flexible tactile sensors on a safety glove. The tactile sensor array captures the pressure distribution on the exterior of the glove and relays the information to the user in real time through tactile stimulation. In the experiment, the user could quickly and accurately locate a tiny steel washer just 1 mm in radius and 0.44 mm thick based on the tactile feedback from the glove with sensors and stimulators. This shows the system’s potential in enabling high-fidelity tactile perception, which is currently unavailable to astronauts, firefighters, deep-sea divers and others who need to wear thick protective suits or gloves.

“We expect our technology to benefit a broad spectrum of applications, such as information transmission, surgical training, teleoperation, and multimedia entertainment,” added Dr Yang.
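My friend, the press release doesn’t spell out how rendering a sensation between physical electrodes works, but phantom tactile sensations are commonly produced by splitting the stimulation between neighbouring contacts. Here’s a minimal, hypothetical Python sketch of that general idea; the 25-point grid matches the release, while the linear split rule is my assumption, not the team’s published method,

```python
# Hypothetical sketch of "super-resolution" tactile rendering by current
# steering: a target point between two physical electrodes is rendered by
# splitting the stimulation amplitude between its nearest neighbours.
# The grid size (5 x 5 = 25 physical points) matches the press release;
# the linear interpolation rule is an assumption, not the paper's method.

def steer(target_x, grid_cols=5):
    """Split a unit amplitude between the two electrodes flanking target_x.

    target_x is a continuous coordinate in electrode units (0 .. grid_cols-1).
    Returns {electrode_index: amplitude_fraction}.
    """
    left = int(target_x)                  # nearest electrode to the left
    frac = target_x - left                # how far toward the right neighbour
    if frac == 0 or left == grid_cols - 1:
        return {left: 1.0}                # on an electrode, or clamped at the edge
    # Linear current split: the closer electrode gets the larger share.
    return {left: 1.0 - frac, left + 1: frac}

# A virtual point midway between electrodes 1 and 2 splits current evenly.
print(steer(1.5))   # {1: 0.5, 2: 0.5}
# Physical electrode locations need no steering.
print(steer(2.0))   # {2: 1.0}
```

With in-between points added this way along each axis, a sparse physical grid can present many more perceivable locations than it has electrodes, which is the gist of the 25-to-105-point claim.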

Here’s a link to and a citation for the paper,

Super-resolution wearable electrotactile rendering system by Weikang Lin, Dongsheng Zhang, Wang Wei Lee, Xuelong Li, Ying Hong, Qiqi Pan, Ruirui Zhang, Guoxiang Peng, Hong Z. Tan, Zhengyou Zhang, Lei Wei, and Zhengbao Yang. Science Advances, 9 Sep 2022, Vol. 8, Issue 36. DOI: 10.1126/sciadv.abp8738

This paper is open access.

Wearable devices for plants

For those with a taste for text, a May 4, 2022 news item on ScienceDaily announces wearable technology for plants,

Plants can’t speak up when they are thirsty. And visual signs, such as shriveling or browning leaves, don’t start until most of their water is gone. To detect water loss earlier, researchers reporting in ACS Applied Materials & Interfaces have created a wearable sensor for plant leaves. The system wirelessly transmits data to a smartphone app, allowing for remote management of drought stress in gardens and crops.

A May 4, 2022 American Chemical Society (ACS) news release (also on EurekAlert), which originated the news item, provides more detail,

Newer wearable devices are more than simple step-counters. Some smart watches now monitor the electrical activity of the wearer’s heart with electrodes that sit against the skin. And because many devices can wirelessly share the data that are collected, physicians can monitor and assess their patients’ health from a distance. Similarly, plant-wearable devices could help farmers and gardeners remotely monitor their plants’ health, including leaf water content — the key marker of metabolism and drought stress. Previously, researchers had developed metal electrodes for this purpose, but the electrodes had problems staying attached, which reduced the accuracy of the data. So, Renato Lima and colleagues wanted to identify an electrode design that was reliable for long-term monitoring of plants’ water stress, while also staying put.

The researchers created two types of electrodes: one made of nickel deposited in a narrow, squiggly pattern, and the other cut from partially burnt paper that was coated with a waxy film. When the team affixed both electrodes to detached soybean leaves with clear adhesive tape, the nickel-based electrodes performed better, producing larger signals as the leaves dried out. The metal ones also adhered more strongly in the wind, likely because the thin squiggly design of the metallic film allowed more of the tape to connect with the leaf surface. Next, the researchers created a plant-wearable device with the metal electrodes and attached it to a living plant in a greenhouse. The device wirelessly shared data with a smartphone app and website, and a simple, fast machine learning technique successfully converted these data to the percent of water content lost. The researchers say that monitoring water content on leaves can indirectly provide information on exposure to pests and toxic agents. Because the plant-wearable device provides reliable data indoors, they now plan to test the devices in outdoor gardens and crops to determine when plants need to be watered, potentially saving resources and increasing yields.

The authors acknowledge support from the São Paulo Research Foundation and the Brazilian Synchrotron Light Laboratory. Two of the study’s authors are listed on a patent filing application for the technology.

…
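The release doesn’t name the “simple, fast machine learning technique,” so here, purely as an illustrative stand-in, is an ordinary least-squares fit mapping a normalized electrode signal to percent water loss. Every number below is invented for illustration; only the general idea (calibrate on reference leaves, then predict from new readings) comes from the release,

```python
# Hedged sketch: the ACS release mentions a "simple, fast machine learning
# technique" mapping the electrode signal to percent water loss without
# naming it. Ordinary least-squares on invented calibration data stands in.

# Invented calibration data: normalized electrode signal vs. measured
# water loss (%) for a set of reference leaves.
signal = [0.10, 0.25, 0.40, 0.55, 0.70, 0.85]
water_loss = [5.0, 12.0, 21.0, 28.0, 36.0, 44.0]

n = len(signal)
mean_x = sum(signal) / n
mean_y = sum(water_loss) / n
# Ordinary least-squares slope and intercept for water_loss ~ signal.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(signal, water_loss))
         / sum((x - mean_x) ** 2 for x in signal))
intercept = mean_y - slope * mean_x

def estimate_water_loss(s):
    """Estimate percent water loss from a new (normalized) sensor reading."""
    return slope * s + intercept

# A mid-range reading maps to a mid-range loss estimate.
print(round(estimate_water_loss(0.5), 1))
```

A real system would of course calibrate per species and conditions; the point is only that a lightweight regression like this can run on a phone and update in real time.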

Here’s a link to and a citation for the paper,

Biocompatible Wearable Electrodes on Leaves toward the On-Site Monitoring of Water Loss from Plants by Júlia A. Barbosa, Vitoria M. S. Freitas, Lourenço H. B. Vidotto, Gabriel R. Schleder, Ricardo A. G. de Oliveira, Jaqueline F. da Rocha, Lauro T. Kubota, Luis C. S. Vieira, Hélio C. N. Tolentino, Itamar T. Neckel, Angelo L. Gobbi, Murilo Santhiago, and Renato S. Lima. ACS Appl. Mater. Interfaces 2022, XXXX, XXX, XXX-XXX. DOI: https://doi.org/10.1021/acsami.2c02943 Publication Date: March 21, 2022 © 2022 American Chemical Society

This paper is behind a paywall.

Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (2 of 2): Meditations

Dear friend,

I thought it best to break this up a bit. There are a couple of ‘objects’ still to be discussed but this is mostly the commentary part of this letter to you. (Here’s a link for anyone who stumbled here but missed Part 1.)

Ethics, the natural world, social justice, eeek, and AI

Dorothy Woodend, in her March 10, 2022 review for The Tyee, suggests some ethical issues in her critique of the ‘bee/AI collaboration’ and she’s not the only one with concerns. UNESCO (United Nations Educational, Scientific and Cultural Organization) has produced global recommendations for ethical AI (see my March 18, 2022 posting). More recently, there’s “Racist and sexist robots have flawed AI,” a June 23, 2022 posting, where researchers prepared a conference presentation and paper about deeply flawed AI still being used in robots.

Ultimately, the focus is always on humans and Woodend has extended the ethical AI conversation to include insects and the natural world. In short, something less human-centric.

My friend, this reference to the de Young exhibit may seem off topic but I promise it isn’t, in more ways than one. The de Young Museum in San Francisco also held an AI and art show, “Uncanny Valley: Being Human in the Age of AI” (February 22, 2020 – June 27, 2021); from the exhibitions page,

In today’s AI-driven world, increasingly organized and shaped by algorithms that track, collect, and evaluate our data, the question of what it means to be human [emphasis mine] has shifted. Uncanny Valley is the first major exhibition to unpack this question through a lens of contemporary art and propose new ways of thinking about intelligence, nature, and artifice. [emphasis mine]

Courtesy: de Young Museum [downloaded from https://deyoung.famsf.org/exhibitions/uncanny-valley]

As you can see, it hinted (perhaps?) at an attempt to see beyond human-centric AI. (BTW, I featured this ‘Uncanny Valley’ show in my February 25, 2020 posting where I mentioned Stephanie Dinkins [featured below] and other artists.)

Social justice

While the VAG show doesn’t see much past humans and AI, it does touch on social justice. In particular there’s Pod 15 featuring the Algorithmic Justice League (AJL). The group “combine[s] art and research to illuminate the social implications and harms of AI” as per their website’s homepage.

In Pod 9, Stephanie Dinkins’ video work with a robot (Bina48), which was also part of the de Young Museum ‘Uncanny Valley’ show, addresses some of the same issues.

Still of Stephanie Dinkins, “Conversations with Bina48,” 2014–present. Courtesy of the artist [downloaded from https://deyoung.famsf.org/stephanie-dinkins-conversations-bina48-0]

From the de Young Museum’s Stephanie Dinkins “Conversations with Bina48” April 23, 2020 article by Janna Keegan (Dinkins submitted the same work you see at the VAG show), Note: Links have been removed,

Transdisciplinary artist and educator Stephanie Dinkins is concerned with fostering AI literacy. The central thesis of her social practice is that AI, the internet, and other data-based technologies disproportionately impact people of color, LGBTQ+ people, women, and disabled and economically disadvantaged communities—groups rarely given a voice in tech’s creation. Dinkins strives to forge a more equitable techno-future by generating AI that includes the voices of multiple constituencies …

The artist’s ongoing Conversations with Bina48 takes the form of a series of interactions with the social robot Bina48 (Breakthrough Intelligence via Neural Architecture, 48 exaflops per second). The machine is the brainchild of Martine Rothblatt, an entrepreneur in the field of biopharmaceuticals who, with her wife, Bina, cofounded the Terasem Movement, an organization that seeks to extend human life through cybernetic means. In 2007 Martine commissioned Hanson Robotics to create a robot whose appearance and consciousness simulate Bina’s. The robot was released in 2010, and Dinkins began her work with it in 2014.

Part psychoanalytical discourse, part Turing test, Conversations with Bina48 also participates in a larger dialogue regarding bias and representation in technology. Although Bina Rothblatt is a Black woman, Bina48 was not programmed with an understanding of its Black female identity or with knowledge of Black history. Dinkins’s work situates this omission amid the larger tech industry’s lack of diversity, drawing attention to the problems that arise when a roughly homogenous population creates technologies deployed globally. When this occurs, writes art critic Tess Thackara, “the unconscious biases of white developers proliferate on the internet, mapping our social structures and behaviors onto code and repeating imbalances and injustices that exist in the real world.” One of the most appalling and public of these instances occurred when a Google Photos image-recognition algorithm mislabeled the faces of Black people as “gorillas.”

Eeek

You will find, as you go through the ‘imitation game’, a pod with a screen showing your movements through the rooms in real time. The installation is called “Creepers” (2021-22). The student team from Vancouver’s Centre for Digital Media (CDM) describes their project this way, from the CDM’s AI-driven Installation Piece for the Vancouver Art Gallery webpage,

Project Description

Kaleidoscope [team name] is designing an installation piece that harnesses AI to collect and visualize exhibit visitor behaviours, and interactions with art, in an impactful and thought-provoking way.

There’s no warning that you’re being tracked and you can see they’ve used facial recognition software to track your movements through the show. It’s claimed on the pod’s signage that they are deleting the data once you’ve left.

‘Creepers’ is an interesting approach to the ethics of AI. The name suggests that even the student designers were aware it was problematic.

For the curious, there’s a description of the other VAG ‘imitation game’ installations provided by CDM students on the ‘Master of Digital Media Students Develop Revolutionary Installations for Vancouver Art Gallery AI Exhibition‘ webpage.

In recovery from an existential crisis (meditations)

There’s something greatly ambitious about “The Imitation Game: Visual Culture in the Age of Artificial Intelligence” and walking up the VAG’s grand staircase affirms that ambition. Bravo to the two curators, Grenville and Entis, for an exhibition that presents a survey (or overview) of artificial intelligence and its use in and impact on creative visual culture.

I’ve already enthused over the history (specifically Turing, Lovelace, Ovid), admitted to being mesmerized by Scott Eaton’s sculpture/AI videos, and confessed to a fascination (and mild repulsion) regarding Oxman’s honeycombs.

It’s hard to remember all of the ‘objects’ as the curators have offered a jumble of work, almost all of them on screens. Already noted, there’s Norbert Wiener’s The Moth (1949) and there are also a number of other computer-based artworks from the 1960s and 1970s. Plus, you’ll find works utilizing a GAN (generative adversarial network), an AI agent that is explained in the exhibit.

It’s worth going more than once to the show as there is so much to experience.

Why did they do that?

Dear friend, I’ve already commented on the poor flow through the show. It’s hard to tell if the curators intended the experience to be disorienting, but it verges on chaos, especially when the exhibition is crowded.

I’ve seen Grenville’s shows before. In particular there was “MashUp: The Birth of Modern Culture, a massive survey documenting the emergence of a mode of creativity that materialized in the late 1800s and has grown to become the dominant model of cultural production in the 21st century” and there was “KRAZY! The Delirious World of Anime + Manga + Video Games + Art.” As you can see from the description, he pulls together disparate works and ideas into a show for you to ‘make sense’ of them.

One of the differences between those shows and “The Imitation Game: …” is that most of us have some familiarity, whether we like it or not, with modern art/culture and anime/manga/etc. and can try to ‘make sense’ of it.

By contrast, artificial intelligence (which even experts have difficulty defining) occupies an entirely different set of categories, all of them associated with science/technology. This makes for a different kind of show, so the curators cannot rely on the audience’s understanding of the basics. It’s effectively an art/sci or art/tech show and, I believe, the first of its kind at the Vancouver Art Gallery. Unfortunately, the curators don’t seem to have changed their approach to accommodate that difference.

AI is also at the centre of a current panic over job loss, loss of personal agency, automated racism and sexism, etc. which makes the experience of viewing the show a little tense. In this context, their decision to commission and use ‘Creepers’ seems odd.

Where were Ai-Da and Dall-E-2 and the others?

Oh friend, I was hoping for a robot. Those Roomba paintbots didn’t do much for me. All they did was lie there on the floor.

To be blunt, I wanted some fun and perhaps a bit of wonder and maybe a little vitality. I wasn’t necessarily expecting Ai-Da, an artistic robot, but something three-dimensional and fun in this very flat, screen-oriented show would have been nice.

Ai-Da was at the Glastonbury Festival in the UK from 23-26 June 2022. Here’s Ai-Da and her Billie Eilish (one of the Glastonbury 2022 headliners) portrait. [downloaded from https://www.ai-darobot.com/exhibition]

Ai-Da was first featured here in a December 17, 2021 posting about performing poetry that she had written in honour of the 700th anniversary of poet Dante Alighieri’s death.

Named in honour of Ada Lovelace, Ai-Da visited the 2022 Venice Biennale as Leah Henrickson and Simone Natale describe in their May 12, 2022 article for Fast Company (Note: Links have been removed),

Ai-Da sits behind a desk, paintbrush in hand. She looks up at the person posing for her, and then back down as she dabs another blob of paint onto the canvas. A lifelike portrait is taking shape. If you didn’t know a robot produced it, this portrait could pass as the work of a human artist.

Ai-Da is touted as the “first robot to paint like an artist,” and an exhibition of her work, called Leaping into the Metaverse, opened at the Venice Biennale.

Ai-Da produces portraits of sitting subjects using a robotic hand attached to her lifelike feminine figure. She’s also able to talk, giving detailed answers to questions about her artistic process and attitudes toward technology. She even gave a TEDx talk about “The Intersection of Art and AI” in Oxford a few years ago. While the words she speaks are programmed, Ai-Da’s creators have also been experimenting with having her write and perform her own poetry.

She has her own website.

If not Ai-Da, what about Dall-E-2? Aaron Hertzmann’s June 20, 2022 commentary, “Give this AI a few words of description and it produces a stunning image – but is it art?” investigates for Salon (Note: Links have been removed),

DALL-E 2 is a new neural network [AI] algorithm that creates a picture from a short phrase or sentence that you provide. The program, which was announced by the artificial intelligence research laboratory OpenAI in April 2022, hasn’t been released to the public. But a small and growing number of people – myself included – have been given access to experiment with it.

As a researcher studying the nexus of technology and art, I was keen to see how well the program worked. After hours of experimentation, it’s clear that DALL-E – while not without shortcomings – is leaps and bounds ahead of existing image generation technology. It raises immediate questions about how these technologies will change how art is made and consumed. It also raises questions about what it means to be creative when DALL-E 2 seems to automate so much of the creative process itself.

A July 4, 2022 article “DALL-E, Make Me Another Picasso, Please” by Laura Lane for The New Yorker has a rebuttal to Ada Lovelace’s contention that creativity is uniquely human (Note: A link has been removed),

“There was this belief that creativity is this deeply special, only-human thing,” Sam Altman, OpenAI’s C.E.O., explained the other day. Maybe not so true anymore, he said. Altman, who wore a gray sweater and had tousled brown hair, was videoconferencing from the company’s headquarters, in San Francisco. DALL-E is still in a testing phase. So far, OpenAI has granted access to a select group of people—researchers, artists, developers—who have used it to produce a wide array of images: photorealistic animals, bizarre mashups, punny collages. Asked by a user to generate “a plate of various alien fruits from another planet photograph,” DALL-E returned something kind of like rambutans. “The rest of mona lisa” is, according to DALL-E, mostly just one big cliff. Altman described DALL-E as “an extension of your own creativity.”

There are other AI artists. In my August 16, 2019 posting, I had this,

AI artists first hit my radar in August 2018 when Christie’s Auction House advertised an art auction of a ‘painting’ by an algorithm (artificial intelligence). There’s more in my August 31, 2018 posting but, briefly, a French art collective, Obvious, submitted a painting, “Portrait of Edmond de Belamy,” that was created by an artificial intelligence agent, to be sold for an estimated $7,000 to $10,000. They weren’t even close. According to Ian Bogost’s March 6, 2019 article for The Atlantic, the painting sold for $432,500 in October 2018.

That posting also included AI artist, AICAN. Both artist-AI agents (Obvious and AICAN) are based on GANs (generative adversarial networks) for learning and eventual output. Both artist-AI agents work independently or with human collaborators on art works that are available for purchase.

As might be expected, not everyone is excited about AI and visual art. Sonja Drimmer, Professor of Medieval Art, University of Massachusetts at Amherst, provides another perspective on AI, visual art, and, her specialty, art history in her November 1, 2021 essay for The Conversation (Note: Links have been removed),

Over the past year alone, I’ve come across articles highlighting how artificial intelligence recovered a “secret” painting of a “lost lover” of Italian painter Modigliani, “brought to life” a “hidden Picasso nude”, “resurrected” Austrian painter Gustav Klimt’s destroyed works and “restored” portions of Rembrandt’s 1642 painting “The Night Watch.” The list goes on.

As an art historian, I’ve become increasingly concerned about the coverage and circulation of these projects.

They have not, in actuality, revealed one secret or solved a single mystery.

What they have done is generate feel-good stories about AI.

Take the reports about the Modigliani and Picasso paintings.

These were projects executed by the same company, Oxia Palus, which was founded not by art historians but by doctoral students in machine learning.

In both cases, Oxia Palus relied upon traditional X-rays, X-ray fluorescence and infrared imaging that had already been carried out and published years prior – work that had revealed preliminary paintings beneath the visible layer on the artists’ canvases.

The company edited these X-rays and reconstituted them as new works of art by applying a technique called “neural style transfer.” This is a sophisticated-sounding term for a program that breaks works of art down into extremely small units, extrapolates a style from them and then promises to recreate images of other content in that same style.
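For readers wondering, as I did, what “breaks works of art down into extremely small units, extrapolates a style from them” means in practice: in the usual formulation (from Gatys and colleagues), the ‘style’ is summarized by Gram matrices of a network layer’s feature maps, i.e. the correlations between channels. Here’s a toy sketch; the two-channel ‘feature maps’ below are invented stand-ins for deep-network features,

```python
# Toy illustration of the statistic at the heart of "neural style transfer"
# as commonly formulated (Gatys et al.): style is summarized by the Gram
# matrix of a layer's feature maps, i.e. the correlations between channels.
# Real systems use deep-network features; these tiny invented "feature maps"
# are for illustration only.

def gram_matrix(features):
    """features: list of channels, each a flat list of activations.
    Returns the channel-by-channel inner-product (Gram) matrix."""
    return [[sum(a * b for a, b in zip(ci, cj)) for cj in features]
            for ci in features]

def style_loss(gram_a, gram_b):
    """Mean squared difference between two Gram matrices."""
    n = len(gram_a)
    return sum((gram_a[i][j] - gram_b[i][j]) ** 2
               for i in range(n) for j in range(n)) / (n * n)

source = [[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]]     # invented activations
restyled = [[1.0, 0.1, 0.9], [0.1, 0.9, 0.1]]

# Optimizing an image to shrink this loss is what "applies" a style.
print(style_loss(gram_matrix(source), gram_matrix(restyled)))
```

Nothing in that process ‘recovers’ anything from the underlying artwork, which is Drimmer’s point: it repaints other content in a statistically extrapolated style.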

As you can ‘see’ my friend, the topic of AI and visual art is a juicy one. In fact, I have another example in my June 27, 2022 posting, which is titled, “Art appraised by algorithm.” So, Grenville’s and Entis’ decision to focus on AI and its impact on visual culture is quite timely.

Visual culture: seeing into the future

The VAG Imitation Game webpage lists these categories of visual culture as being represented in the show: “animation, architecture, art, fashion, graphic design, urban design and video games …” Movies and visual art, though not mentioned in the write-up, are represented, while theatre and the other performing arts are neither mentioned nor represented. That’s not a surprise.

In addition to an area of science/technology that’s not well understood even by experts, the curators took on the truly amorphous (and overwhelming) topic of visual culture. Given that even writing this commentary has been a challenge, I imagine pulling the show together was quite the task.

Grenville often grounds his shows in a history of the subject and, this time, it seems especially striking. You’re in a building that is effectively a 19th century construct and in galleries that reflect a 20th century ‘white cube’ aesthetic, while looking for clues into the 21st century future of visual culture employing technology that has its roots in the 19th century and, to some extent, began to flower in the mid-20th century.

Chung’s collaboration is one of the few optimistic notes about the future and, as noted earlier, it bears a resemblance to Wiener’s 1949 ‘Moth’.

Overall, it seems we are being cautioned about the future. For example, Oxman’s work seems bleak (bees with no flowers to pollinate and living in an eternal spring). Adding in ‘Creepers’ and surveillance along with issues of bias and social injustice reflects hesitation and concern about what we will see, who sees it, and how it will be represented visually.

Learning about robots, automatons, artificial intelligence, and more

I wish the Vancouver Art Gallery (and Vancouver’s other art galleries) would invest a little more in audience education. A couple of tours during the week, led by someone who may or may not know what they’re talking about, do not suffice. The extra material about Stephanie Dinkins and her work (“Conversations with Bina48,” 2014–present) came from the de Young Museum’s website. In my July 26, 2021 commentary on North Vancouver’s Polygon Gallery 2021 show “Interior Infinite,” I found background information for artist Zanele Muholi on the Tate Modern’s website. There is nothing on the VAG website that helps you to gain some perspective on the artists’ works.

It seems to me that if the VAG wants to be considered world class, it should conduct itself accordingly and beefing up its website with background information about their current shows would be a good place to start.

Robots, automata, and artificial intelligence

Prior to 1921, robots were known exclusively as automatons. These days, the word ‘automaton’ (or ‘automata’ in the plural) seems to be used to describe purely mechanical representations of humans from over 100 years ago, whereas the word ‘robot’ can be either ‘humanlike’ or purely machine, e.g. a mechanical arm that performs the same function over and over. There’s a good February 24, 2017 essay on automatons by Miguel Barral for OpenMind BBVA*, which provides some insight into the matter,

The concept of robot is relatively recent. The idea was introduced in 1921 by the Czech writer Karel Capek in his work R.U.R to designate a machine that performs tasks in place of man. But their predecessors, the automatons (from the Greek automata, or “mechanical device that works by itself”), have been the object of desire and fascination since antiquity. Some of the greatest inventors in history, such as Leonardo Da Vinci, have contributed to our fascination with these fabulous creations:

The Al-Jazari automatons

The earliest examples of known automatons appeared in the Islamic world in the 12th and 13th centuries. In 1206, the Arab polymath Al-Jazari, whose creations were known for their sophistication, described some of his most notable automatons: an automatic wine dispenser, a soap and towels dispenser and an orchestra-automaton that operated by the force of water. This latter invention was meant to liven up parties and banquets with music while floating on a pond, lake or fountain.

As the water flowed, it started a rotating drum with pegs that, in turn, moved levers whose movement produced different sounds and movements. As the pegs responsible for the musical notes could be exchanged for different ones in order to interpret another melody, it is considered one of the first programmable machines in history.
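Barral’s point about programmability can be made concrete: the pegged drum is, in effect, a lookup table from rotation step to notes, and exchanging the pegs reprograms the melody without touching the machine. A toy model, with peg layout and note names invented for illustration,

```python
# Toy model of Al-Jazari's pegged drum: the peg layout (the "program") is
# data, and swapping it changes the melody without changing the machine.
# Peg positions and note names are invented for illustration.

def play(drum, notes):
    """drum: rows = rotation steps, columns = peg slots (True = peg present).
    notes: the lever/note each peg column strikes. Returns the melody."""
    melody = []
    for step in drum:                      # the drum rotates step by step
        struck = [note for peg, note in zip(step, notes) if peg]
        melody.extend(struck)
    return melody

notes = ["C", "E", "G"]
melody_a = [(True, False, False),          # step 1: a peg strikes C
            (False, True, False),          # step 2: a peg strikes E
            (False, False, True)]          # step 3: a peg strikes G

# "Reprogramming" = exchanging the pegs, exactly as Barral describes.
melody_b = [(False, False, True),
            (False, True, False),
            (True, False, False)]

print(play(melody_a, notes))   # ['C', 'E', 'G']
print(play(melody_b, notes))   # ['G', 'E', 'C']
```

Which is why the orchestra-automaton is often counted among the first programmable machines: program and mechanism were already separate things.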

If you’re curious about automata, my friend, I found this Sept. 26, 2016 ABC news radio item about singer Roger Daltrey and his wife Heather’s auction of their collection of 19th century French automata (there’s an embedded video showcasing these extraordinary works of art). For more about automata, robots, and androids, there’s an excellent May 4, 2022 article by James Vincent, ‘A visit to the human factory; How to build the world’s most realistic robot‘ for The Verge; Vincent’s article is about Engineered Arts, the UK-based company that built Ai-Da.

AI is often used interchangeably with ‘robot’ but they aren’t the same. Not all robots have AI integrated into their processes. At its simplest, AI is an algorithm or set of algorithms, which may ‘live’ in a CPU and be effectively invisible, or ‘live’ in or make use of some kind of machine and/or humanlike body. As the experts have noted, artificial intelligence is a slippery concept.

*OpenMind BBVA is a Spanish multinational financial services company, Banco Bilbao Vizcaya Argentaria (BBVA), which runs the non-profit project, OpenMind (About us page) to disseminate information on robotics and so much more.*

You can’t always get what you want

My friend,

I expect many of the show’s shortcomings (as perceived by me) are due to money and/or scheduling issues. For example, Ai-Da was at the Venice Biennale and if there was a choice between the VAG and Biennale, I know where I’d be.

Even with those caveats in mind, it is a bit surprising that there were no examples of wearable technology. For example, Toronto’s Tapestry Opera recently performed R.U.R. A Torrent of Light (based on the word ‘robot’ from Karel Čapek’s play, R.U.R., ‘Rossumovi Univerzální Roboti’), from my May 24, 2022 posting,

I have more about tickets prices, dates, and location later in this post but first, here’s more about the opera and the people who’ve created it from the Tapestry Opera’s ‘R.U.R. A Torrent of Light’ performance webpage,

“This stunning new opera combines dance, beautiful multimedia design, a chamber orchestra including 100 instruments creating a unique electronica-classical sound, and wearable technology [emphasis mine] created with OCAD University’s Social Body Lab, to create an immersive and unforgettable science-fiction experience.”

And, from later in my posting,

“Despite current stereotypes, opera was historically a launchpad for all kinds of applied design technologies. [emphasis mine] Having the opportunity to collaborate with OCAD U faculty is an invigorating way to reconnect to that tradition and foster connections between art, music and design, [emphasis mine]” comments the production’s Director Michael Hidetoshi Mori, who is also Tapestry Opera’s Artistic Director. 

That last quote brings me back to my comment about theatre and the performing arts not being part of the show. Of course, the curators couldn’t do it all, but a website with my hoped-for background and additional information could have helped to solve the problem.

The absence of the theatrical and performing arts in the VAG’s ‘Imitation Game’ is a bit surprising as the Council of Canadian Academies (CCA) in their third assessment, “Competing in a Global Innovation Economy: The Current State of R&D in Canada” released in 2018 noted this (from my April 12, 2018 posting),

Canada, relative to the world, specializes in subjects generally referred to as the humanities and social sciences (plus health and the environment), and does not specialize as much as others in areas traditionally referred to as the physical sciences and engineering. Specifically, Canada has comparatively high levels of research output in Psychology and Cognitive Sciences, Public Health and Health Services, Philosophy and Theology, Earth and Environmental Sciences, and Visual and Performing Arts. [emphasis mine] It accounts for more than 5% of world research in these fields. Conversely, Canada has lower research output than expected in Chemistry, Physics and Astronomy, Enabling and Strategic Technologies, Engineering, and Mathematics and Statistics. The comparatively low research output in core areas of the natural sciences and engineering is concerning, and could impair the flexibility of Canada’s research base, preventing research institutions and researchers from being able to pivot to tomorrow’s emerging research areas. [p. xix Print; p. 21 PDF]

US-centric

My friend,

I was a little surprised that the show was so centered on work from the US, given that Grenville has curated at least one show with significant input from artists based in Asia. Both Japan and Korea are very active with regard to artificial intelligence and it’s hard to believe that their artists haven’t kept pace. (I’m not as familiar with China and its AI efforts, other than in the field of facial recognition, but it’s hard to believe their artists aren’t experimenting.)

The Americans, of course, are very important developers in the field of AI, but they are not alone, and it would have been nice to have seen something from Asia and/or Africa and/or something from one of the other Americas. In fact, anything which takes us out of the same old, same old. (Luba Elliott wrote this (2019/2020/2021?) essay, “Artificial Intelligence Art from Africa and Black Communities Worldwide” on Aya Data if you want to get a sense of some of the activity on the African continent. Elliott does seem to conflate Africa and Black communities worldwide; for some clarity, you may want to check out the Wikipedia entry on Africanfuturism and contrast it with this August 12, 2020 essay by Donald Maloba, “What is Afrofuturism? A Beginner’s Guide.” Maloba also conflates the two.)

As it turns out, Luba Elliott presented at the 2019 Montréal Digital Spring event, which brings me to Canada’s artificial intelligence and arts scene.

I promise I haven’t turned into a flag waving zealot, my friend. It’s just odd there isn’t a bit more given that machine learning was pioneered at the University of Toronto. Here’s more about that (from Wikipedia entry for Geoffrey Hinston),

Geoffrey Everest Hinton CC FRS FRSC[11] (born 6 December 1947) is a British-Canadian cognitive psychologist and computer scientist, most noted for his work on artificial neural networks.

Hinton received the 2018 Turing Award, together with Yoshua Bengio [Canadian scientist] and Yann LeCun, for their work on deep learning.[24] They are sometimes referred to as the “Godfathers of AI” and “Godfathers of Deep Learning“,[25][26] and have continued to give public talks together.[27][28]

Some of Hinton’s work was started in the US but since 1987, he has pursued his interests at the University of Toronto. He wasn’t proven right until 2012. Katrina Onstad’s February 29, 2018 article (Mr. Robot) for Toronto Life is a gripping read about Hinton and his work on neural networks. BTW, Yoshua Bengio (co-Godfather) is a Canadian scientist at the Université de Montréal and Yann LeCun (co-Godfather) is a French scientist at New York University.

Then, there’s another contribution, our government was the first in the world to develop a national artificial intelligence strategy. Adding those developments to the CCA ‘State of Science’ report findings about visual arts and performing arts, is there another word besides ‘odd’ to describe the lack of Canadian voices?

You’re going to point out the installation by Ben Bogart (a member of Simon Fraser University’s Metacreation Lab for Creative AI and instructor at the Emily Carr University of Art + Design (ECU)) but it’s based on the iconic US scifi film, 2001: A Space Odyssey. As for the other Canadian, Sougwen Chung, she left Canada pretty quickly to get her undergraduate degree in the US and has since moved to the UK. (You could describe hers as the quintessential success story, i.e., moving from Canada only to get noticed here after success elsewhere.)

Of course, there are the CDM student projects but the projects seem less like an exploration of visual culture than an exploration of technology and industry requirements, from the ‘Master of Digital Media Students Develop Revolutionary Installations for Vancouver Art Gallery AI Exhibition‘ webpage, Note: A link has been removed,

In 2019, Bruce Grenville, Senior Curator at Vancouver Art Gallery, approached [the] Centre for Digital Media to collaborate on several industry projects for the forthcoming exhibition. Four student teams tackled the project briefs over the course of the next two years and produced award-winning installations that are on display until October 23 [2022].

Basically, my friend, it would have been nice to see other voices or, at the least, an attempt at representing other voices and visual cultures informed by AI. As for Canadian contributions, maybe put something on the VAG website?

Playing well with others

it’s always a mystery to me why the Vancouver cultural scene seems comprised of a set of silos or closely guarded kingdoms. Reaching out to the public library and other institutions such as Science World might have cost time but could have enhanced the show

For example, one of the branches of the New York Public Library ran a programme called, “We are AI” in March 2022 (see my March 23, 2022 posting about the five-week course, which was run as a learning circle). The course materials are available for free (We are AI webpage) and I imagine that adding a ‘visual culture module’ wouldn’t be that difficult.

There is one (rare) example of some Vancouver cultural institutions getting together to offer an art/science programme: in 2017, the Morris and Helen Belkin Gallery (at the University of British Columbia; UBC) hosted an exhibition of Santiago Ramón y Cajal’s work (see my Sept. 11, 2017 posting about the gallery show). Alongside that show, the folks at Café Scientifique held an ancillary event at Science World featuring a panel of professionals from UBC’s Faculty of Medicine and Dept. of Psychology, who discussed Cajal’s work.

In fact, where were the science and technology communities for this show?

On a related note, the 2022 ACM SIGGRAPH conference (August 7 – 11, 2022) is being held in Vancouver. (ACM is the Association for Computing Machinery; SIGGRAPH stands for Special Interest Group on Computer Graphics and Interactive Techniques.) SIGGRAPH has been holding conferences in Vancouver every few years since at least 2011.

At this year’s conference, they have at least two sessions that indicate interests similar to the VAG’s. First, there’s Immersive Visualization for Research, Science and Art which includes AI and machine learning along with other related topics. There’s also, Frontiers Talk: Art in the Age of AI: Can Computers Create Art?

This is both an international conference and an exhibition (of art) and the whole thing seems to have kicked off on July 25, 2022. If you’re interested, the programme can be found here and registration here.

Last time SIGGRAPH was here the organizers seemed interested in outreach and they offered some free events.

In the end

It was good to see the show. The curators brought together some exciting material. As is always the case, there were some missed opportunities and a few blind spots. But all is not lost.

On July 27, 2022, the VAG held a virtual event with an artist,

Gwenyth Chao to learn more about what happened to the honeybees and hives in Oxman’s Synthetic Apiary project. As a transdisciplinary artist herself, Chao will also discuss the relationship between art, science, technology and design. She will then guide participants to create a space (of any scale, from insect to human) inspired by patterns found in nature.

Hopefully, there will be more events inspired by specific ‘objects’. Meanwhile, on August 12, 2022, the VAG is hosting,

… in partnership with the Canadian Music Centre BC, New Music at the Gallery is a live concert series hosted by the Vancouver Art Gallery that features an array of musicians and composers who draw on contemporary art themes.

Highlighting a selection of twentieth- and twenty-first-century music compositions, this second concert, inspired by the exhibition The Imitation Game: Visual Culture in the Age of Artificial Intelligence, will spotlight The Illiac Suite (1957), the first piece ever written using only a computer, and Kaija Saariaho’s Terra Memoria (2006), which is in large part dependent on a computer-generated musical process.

It would be lovely if they could include an Ada Lovelace Day event; it’s an international celebration, held on October 11, 2022.

Do go. Do enjoy, my friend.

Sci-fi opera: R.U.R. A Torrent of Light opens May 28, 2022 in Toronto, Canada

Even though it’s a little late, I guess you could call the opera opening in Toronto on May 28, 2022 a 100th anniversary celebration of the word ‘robot’. Introduced in 1920 by Czech playwright Karel Čapek in his play, R.U.R., which stands for ‘Rossumovi Univerzální Roboti’ or, in English, ‘Rossum’s Universal Robots’, the word was first coined by Čapek’s brother, Josef (see more about the play and the word in the R.U.R. Wikipedia entry).

The opera, R.U.R. A Torrent of Light, is scheduled to open at 8 pm ET on Saturday, May 28, 2022 (after being rescheduled due to a COVID case in the cast) at OCAD University’s (formerly the Ontario College of Art and Design) The Great Hall.

I have more about ticket prices, dates, and location later in this post but first, here’s more about the opera and the people who’ve created it from the Tapestry Opera’s ‘R.U.R. A Torrent of Light’ performance webpage,

This stunning new opera combines dance, beautiful multimedia design, a chamber orchestra including 100 instruments creating a unique electronica-classical sound, and wearable technology [emphasis mine] created with OCAD University’s Social Body Lab, to create an immersive and unforgettable science-fiction experience.

As for the opera’s story,

The fictional tech company R.U.R., founded by couple Helena and Dom, dominates the A.I. software market and powers the now-ubiquitous androids that serve their human owners. 

As Dom becomes more focused on growing R.U.R.’s profits, Helena’s creative research leads to an unexpected technological breakthrough that pits the couple’s visions squarely against each other. They’ve reached a turning point for humanity, but is humanity ready? 

Inspired by Karel Čapek’s 1920’s science-fiction play Rossum’s Universal Robots (which introduced the word “robot” to the English language), composer Nicole Lizée’s and writer Nicolas Billon’s R.U.R. A Torrent of Light grapples with one of our generation’s most fascinating questions. [emphasis mine]

So, what is the fascinating question? The answer is here in a March 7, 2022 OCAD news release,

Last Wednesday [March 2, 2022], OCAD U’s Great Hall at 100 McCaul St. was filled with all manner of sound making objects. Drum kits, gongs, chimes, typewriters and most exceptionally, a cello bow that produces bird sounds when glided across any surface were being played while musicians, dancers and opera singers moved among them.  

All were abuzz preparing for Tapestry Opera’s new production, R.U.R. A Torrent of Light, which will be presented this spring in collaboration with OCAD University. 

An immersive, site-specific experience, the new chamber opera explores humanity’s relationship to technology. [emphasis mine] Inspired by Karel Čapek’s 1920s science-fiction play Rossum’s Universal Robots, this latest version is set 20 years in the future when artificial intelligence (AI) has become fully sewn into our everyday lives and is set in the offices of a fictional tech company.

Čapek’s original script brought the word robot into the English language and begins in a factory that manufactures artificial people. Eventually these entities revolt and render humanity extinct.  

The innovative adaptation will be a unique addition to Tapestry Opera’s more than 40-year history of producing operatic stage performances. It is the only company in the country dedicated solely to the creation and performance of original Canadian opera. 

The March 7, 2022 OCAD news release goes on to describe the Social Body Lab’s involvement,

OCAD U’s Social Body Lab, whose mandate is to question the relationship between humans and technology, is helping to bring Tapestry’s vision of the not-so-distant future to the stage. Director of the Lab and Associate Professor in the Faculty of Arts & Science, Kate Hartman, along with Digital Futures Associate Professors Nick Puckett and Dr. Adam Tindale have developed wearable technology prototypes that will be integrated into the performers’ costumes. They have collaborated closely with the opera’s creative team to embrace the possibilities innovative technologies can bring to live performance. 

“This collaboration with Tapestry Opera has been incredibly unique and productive. Working in dialogue with their designers has enabled us to translate their ideas into cutting edge technological objects that we would have never arrived at individually,” notes Professor Puckett. 

The uncanny bow that was being tested last week is one of the futuristic devices that will be featured in the performance and is the invention of Dr. Tindale, who is himself a classically trained musician. He has also developed a set of wearable speakers for R.U.R. A Torrent of Light that when donned by the dancers will allow sound to travel across the stage in step with their choreography. 

Hartman and Puckett, along with the production’s costume, light and sound designers, have developed an LED-based prototype that will be worn around the necks of the actors who play robots and will be activated using Wi-Fi. These collar pieces will function as visual indicators to the audience of various plot points, including the moments when the robots receive software updates.  

“Despite current stereotypes, opera was historically a launchpad for all kinds of applied design technologies. [emphasis mine] Having the opportunity to collaborate with OCAD U faculty is an invigorating way to reconnect to that tradition and foster connections between art, music and design,” comments the production’s Director Michael Hidetoshi Mori, who is also Tapestry Opera’s Artistic Director. 

“New music and theatre are perfect canvases for iterative experimentation. We look forward to the unique fruits of this collaboration and future ones,” he continues. 

Unfortunately, I cannot find a preview but there is this video highlighting the technology being used in the opera (there are three other videos highlighting the choreography, the music, and the story, respectively, if you scroll about 40% down this page),


As I promised, here are the logistics,

University address:

OCAD University
100 McCaul Street,
Toronto, Ontario, Canada, M5T 1W1

Performance venue:

The Great Hall at OCAD University
Level 2, beside the Anniversary Gallery

Ticket prices:

The following seating sections are available for this performance. Tickets are from $10 to $100. All tickets are subject to a $5 transaction fee.

Orchestra Centre
Orchestra Sides
Orchestra Rear
Balcony (standing room)

Performances:

May 28 at 8:00 pm

May 29 at 4:00 pm

June 01 at 8:00 pm

June 02 at 8:00 pm

June 03 at 8:00 pm

June 04 at 8:00 pm

June 05 at 4:00 pm

Tapestry Opera’s ‘R.U.R. A Torrent of Light’ performance webpage offers a link to buy tickets but it lands on a page that doesn’t seem to be functioning properly. I have contacted (as of Tuesday, May 24, 2022 at about 10:30 am PT) the Tapestry Opera folks to let them know about the problem. Hopefully soon, I will be able to update this page when they’ve handled the issue.

ETA May 30, 2022: You can buy tickets here. Tickets remain available for only two of the performances: Thursday, June 2, 2022 at 8 pm and Sunday, June 5, 2022 at 4 pm.