An air curtain shooting down from the brim of a hard hat can prevent 99.8% of aerosols from reaching a worker’s face. The technology, created by University of Michigan startup Taza Aya, potentially offers a new protection option for workers in industries where respiratory disease transmission is a concern.
Independent, third-party testing of Taza Aya’s device showed the effectiveness of the air curtain, curved to encircle the face, coming from nozzles at the hat’s brim. But for the air curtain to effectively protect against pathogens in the room, it must first be cleansed of pathogens itself. Previous research by the group of Taza Aya co-founder Herek Clack, U-M associate professor of civil and environmental engineering, showed that their method can remove and kill 99% of airborne viruses in farm and laboratory settings.
“Our air curtain technology is precisely designed to protect wearers from airborne infectious pathogens, using treated air as a barrier in which any pathogens present have been inactivated so that they are no longer able to infect you if you breathe them in,” Clack said. “It’s virtually unheard of—our level of protection against airborne germs, especially when combined with the improved ergonomics it also provides.”
Fire has been used throughout history for sterilization, and while we might not usually think of it this way, it’s what’s known as a thermal plasma. Nonthermal, or cold, plasmas are made of highly energetic, electrically charged molecules and molecular fragments that achieve a similar effect without the heat. Those ions and molecules stabilize quickly, becoming ordinary air before reaching the curtain nozzles.
Taza Aya’s prototype features a backpack, weighing roughly 10 pounds, that houses the nonthermal plasma module, air handler, electronics and the unit’s battery pack. The handler draws air into the module, where it’s treated before flowing to the air curtain’s nozzle array.
Taza Aya’s progress comes in the wake of the COVID-19 pandemic and in the midst of a summer when the U.S. Centers for Disease Control and Prevention have reported four cases of humans testing positive for bird flu. During the pandemic, agriculture suffered disruptions in meat production due to shortages in labor, which had a direct impact on prices, the availability of some products and the extended supply chain.
In recent months, Taza Aya has conducted user experience testing with workers at Michigan Turkey Producers in Wyoming, Michigan, a processing plant that practices the humane handling of birds. The plant is home to hundreds of workers, many of them coming into direct contact with turkeys during their work day.
To date, paper masks have been the main strategy for protecting employees in such large-scale agricultural operations. But on a noisy production line, where many workers speak English as a second language, masks further reduce the ability of workers to communicate by muffling voices and hiding facial cues.
“During COVID, it was a problem for many plants—the masks were needed, but they prevented good communication with our associates,” said Tina Conklin, Michigan Turkey’s vice president of technical services.
In addition, the effectiveness of masks relies on a tight seal over the mouth and nose to ensure proper filtration, and that seal can change minute to minute during a workday. Masks can also fog up safety goggles, and they have to be removed for workers to eat. Taza Aya’s technology avoids all of those problems.
As a researcher at U-M, Clack spent years exploring the use of nonthermal plasma to protect livestock. With the arrival of COVID-19 in early 2020, he quickly pivoted to how the technology might be used for personal protection from airborne pathogens.
In October of that year, Taza Aya was named an awardee in the Invisible Shield QuickFire Challenge—a competition created by Johnson & Johnson Innovation in cooperation with the U.S. Department of Health and Human Services. The program sought to encourage the development of technologies that could protect people from airborne viruses while having a minimal impact on daily life.
“We are pleased with the study results as we embark on this journey,” said Alberto Elli, Taza Aya’s CEO. “This real-world product and user testing experience will help us successfully launch the Worker Wearable [Protection] in 2025.”
There’s a bit more information about the third-party testing mentioned at the start of the news release in a June 26, 2024 posting by Herek Clack on the Taza Aya company blog. You can find out more about Worker and Individual Wearable Protection on Taza Aya’s The Solution webpage; scroll down about 55% of the way.
I thought it best to break this up a bit. There are a couple of ‘objects’ still to be discussed but this is mostly the commentary part of this letter to you. (Here’s a link for anyone who stumbled here but missed Part 1.)
Ethics, the natural world, social justice, eeek, and AI
Dorothy Woodend, in her March 10, 2022 review for The Tyee, suggests some ethical issues in her critique of the ‘bee/AI collaboration’ and she’s not the only one with concerns. UNESCO (United Nations Educational, Scientific and Cultural Organization) has produced global recommendations for ethical AI (see my March 18, 2022 posting). More recently, there’s “Racist and sexist robots have flawed AI,” a June 23, 2022 posting, where researchers prepared a conference presentation and paper about deeply flawed AI still being used in robots.
Ultimately, the focus is always on humans and Woodend has extended the ethical AI conversation to include insects and the natural world. In short, something less human-centric.
My friend, this reference to the de Young exhibit may seem off topic but I promise it isn’t, in more ways than one. The de Young Museum in San Francisco also held an AI and art show, “Uncanny Valley: Being Human in the Age of AI” (February 22, 2020 – June 27, 2021). From the exhibitions page,
In today’s AI-driven world, increasingly organized and shaped by algorithms that track, collect, and evaluate our data, the question of what it means to be human [emphasis mine] has shifted. Uncanny Valley is the first major exhibition to unpack this question through a lens of contemporary art and propose new ways of thinking about intelligence, nature, and artifice. [emphasis mine]
As you can see, it hinted (perhaps?) at an attempt to see beyond human-centric AI. (BTW, I featured this ‘Uncanny Valley’ show in my February 25, 2020 posting where I mentioned Stephanie Dinkins [featured below] and other artists.)
Social justice
While the VAG show doesn’t see much past humans and AI, it does touch on social justice. In particular there’s Pod 15 featuring the Algorithmic Justice League (AJL). The group “combine[s] art and research to illuminate the social implications and harms of AI” as per their website’s homepage.
In Pod 9, Stephanie Dinkins’ video work with a robot (Bina48), which was also part of the de Young Museum ‘Uncanny Valley’ show, addresses some of the same issues.
From the de Young Museum’s Stephanie Dinkins “Conversations with Bina48” April 23, 2020 article by Janna Keegan (Dinkins submitted the same work you see at the VAG show), Note: Links have been removed,
Transdisciplinary artist and educator Stephanie Dinkins is concerned with fostering AI literacy. The central thesis of her social practice is that AI, the internet, and other data-based technologies disproportionately impact people of color, LGBTQ+ people, women, and disabled and economically disadvantaged communities—groups rarely given a voice in tech’s creation. Dinkins strives to forge a more equitable techno-future by generating AI that includes the voices of multiple constituencies …
The artist’s ongoing Conversations with Bina48 takes the form of a series of interactions with the social robot Bina48 (Breakthrough Intelligence via Neural Architecture, 48 exaflops per second). The machine is the brainchild of Martine Rothblatt, an entrepreneur in the field of biopharmaceuticals who, with her wife, Bina, cofounded the Terasem Movement, an organization that seeks to extend human life through cybernetic means. In 2007 Martine commissioned Hanson Robotics to create a robot whose appearance and consciousness simulate Bina’s. The robot was released in 2010, and Dinkins began her work with it in 2014.
…
Part psychoanalytical discourse, part Turing test, Conversations with Bina48 also participates in a larger dialogue regarding bias and representation in technology. Although Bina Rothblatt is a Black woman, Bina48 was not programmed with an understanding of its Black female identity or with knowledge of Black history. Dinkins’s work situates this omission amid the larger tech industry’s lack of diversity, drawing attention to the problems that arise when a roughly homogenous population creates technologies deployed globally. When this occurs, writes art critic Tess Thackara, “the unconscious biases of white developers proliferate on the internet, mapping our social structures and behaviors onto code and repeating imbalances and injustices that exist in the real world.” One of the most appalling and public of these instances occurred when a Google Photos image-recognition algorithm mislabeled the faces of Black people as “gorillas.”
…
Eeek
You will find, as you go through the ‘imitation game’, a pod with a screen showing your movements through the rooms in real time. The installation is called “Creepers” (2021-22). The student team from Vancouver’s Centre for Digital Media (CDM) describes their project this way, from the CDM’s AI-driven Installation Piece for the Vancouver Art Gallery webpage,
Project Description
Kaleidoscope [team name] is designing an installation piece that harnesses AI to collect and visualize exhibit visitor behaviours, and interactions with art, in an impactful and thought-provoking way.
There’s no warning that you’re being tracked and you can see they’ve used facial recognition software to track your movements through the show. It’s claimed on the pod’s signage that they are deleting the data once you’ve left.
‘Creepers’ is an interesting approach to the ethics of AI. The name suggests that even the student designers were aware it was problematic.
In recovery from an existential crisis (meditations)
There’s something greatly ambitious about “The Imitation Game: Visual Culture in the Age of Artificial Intelligence” and walking up the VAG’s grand staircase affirms that ambition. Bravo to the two curators, Grenville and Entis, for an exhibition that presents a survey (or overview) of artificial intelligence, and its use in and impact on creative visual culture.
I’ve already enthused over the history (specifically Turing, Lovelace, Ovid), admitted to being mesmerized by Scott Eaton’s sculpture/AI videos, and confessed to a fascination (and mild repulsion) regarding Oxman’s honeycombs.
It’s hard to remember all of the ‘objects’ as the curators have offered a jumble of work, almost all of them on screens. Already noted, there’s Norbert Wiener’s The Moth (1949) and there are also a number of other computer-based artworks from the 1960s and 1970s. Plus, you’ll find works utilizing a GAN (generative adversarial network), an AI agent that is explained in the exhibit.
It’s worth going more than once to the show as there is so much to experience.
Why did they do that?
Dear friend, I’ve already commented on the poor flow through the show and it’s hard to tell whether the curators intended the experience to be disorienting, but it is, to the point of chaos, especially when the exhibition is crowded.
I’ve seen Grenville’s shows before. In particular there was “MashUp: The Birth of Modern Culture, a massive survey documenting the emergence of a mode of creativity that materialized in the late 1800s and has grown to become the dominant model of cultural production in the 21st century” and there was “KRAZY! The Delirious World of Anime + Manga + Video Games + Art.” As you can see from the description, he pulls together disparate works and ideas into a show for you to ‘make sense’ of them.
One of the differences between those shows and “The Imitation Game: …” is that most of us have some familiarity, whether we like it or not, with modern art/culture and anime/manga/etc. and can try to ‘make sense’ of it.
By contrast, artificial intelligence (which even experts have difficulty defining) occupies an entirely different set of categories, all of them associated with science/technology. This makes for a different kind of show, so the curators cannot rely on the audience’s understanding of basics. It’s effectively an art/sci or art/tech show and, I believe, the first of its kind at the Vancouver Art Gallery. Unfortunately, the curators don’t seem to have changed their approach to accommodate that difference.
AI is also at the centre of a current panic over job loss, loss of personal agency, automated racism and sexism, etc. which makes the experience of viewing the show a little tense. In this context, their decision to commission and use ‘Creepers’ seems odd.
Where were Ai-Da and DALL-E 2 and the others?
Oh friend, I was hoping for a robot. Those roomba paintbots didn’t do much for me. All they did was lie there on the floor.
To be blunt, I wanted some fun and perhaps a bit of wonder and maybe a little vitality. I wasn’t necessarily expecting Ai-Da, an artistic robot, but something three-dimensional and fun in this very flat, screen-oriented show would have been nice.
Ai-Da was first featured here in a December 17, 2021 posting about her performing poetry that she had written in honour of the 700th anniversary of poet Dante Alighieri’s death.
Named in honour of Ada Lovelace, Ai-Da visited the 2022 Venice Biennale as Leah Henrickson and Simone Natale describe in their May 12, 2022 article for Fast Company (Note: Links have been removed),
Ai-Da sits behind a desk, paintbrush in hand. She looks up at the person posing for her, and then back down as she dabs another blob of paint onto the canvas. A lifelike portrait is taking shape. If you didn’t know a robot produced it, this portrait could pass as the work of a human artist.
Ai-Da is touted as the “first robot to paint like an artist,” and an exhibition of her work, called Leaping into the Metaverse, opened at the Venice Biennale.
Ai-Da produces portraits of sitting subjects using a robotic hand attached to her lifelike feminine figure. She’s also able to talk, giving detailed answers to questions about her artistic process and attitudes toward technology. She even gave a TEDx talk about “The Intersection of Art and AI” in Oxford a few years ago. While the words she speaks are programmed, Ai-Da’s creators have also been experimenting with having her write and perform her own poetry.
DALL-E 2 is a new neural network [AI] algorithm that creates a picture from a short phrase or sentence that you provide. The program, which was announced by the artificial intelligence research laboratory OpenAI in April 2022, hasn’t been released to the public. But a small and growing number of people – myself included – have been given access to experiment with it.
As a researcher studying the nexus of technology and art, I was keen to see how well the program worked. After hours of experimentation, it’s clear that DALL-E – while not without shortcomings – is leaps and bounds ahead of existing image generation technology. It raises immediate questions about how these technologies will change how art is made and consumed. It also raises questions about what it means to be creative when DALL-E 2 seems to automate so much of the creative process itself.
…
A July 4, 2022 article “DALL-E, Make Me Another Picasso, Please” by Laura Lane for The New Yorker has a rebuttal to Ada Lovelace’s contention that creativity is uniquely human (Note: A link has been removed),
…
“There was this belief that creativity is this deeply special, only-human thing,” Sam Altman, OpenAI’s C.E.O., explained the other day. Maybe not so true anymore, he said. Altman, who wore a gray sweater and had tousled brown hair, was videoconferencing from the company’s headquarters, in San Francisco. DALL-E is still in a testing phase. So far, OpenAI has granted access to a select group of people—researchers, artists, developers—who have used it to produce a wide array of images: photorealistic animals, bizarre mashups, punny collages. Asked by a user to generate “a plate of various alien fruits from another planet photograph,” DALL-E returned something kind of like rambutans. “The rest of mona lisa” is, according to DALL-E, mostly just one big cliff. Altman described DALL-E as “an extension of your own creativity.”
AI artists first hit my radar in August 2018 when Christie’s Auction House advertised an art auction of a ‘painting’ by an algorithm (artificial intelligence). There’s more in my August 31, 2018 posting but, briefly, a French art collective, Obvious, submitted a painting, “Portrait of Edmond de Belamy,” created by an artificial intelligence agent, to be sold for an estimated $7,000 – $10,000. They weren’t even close. According to Ian Bogost’s March 6, 2019 article for The Atlantic, the painting sold for $432,500 in October 2018.
…
That posting also included AI artist, AICAN. Both artist-AI agents (Obvious and AICAN) are based on GANs (generative adversarial networks) for learning and eventual output. Both artist-AI agents work independently or with human collaborators on art works that are available for purchase.
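For readers who, like me, want a concrete sense of what a GAN actually does, here is a minimal sketch in Python (PyTorch). It only illustrates the generator-versus-discriminator idea on toy data; the architectures, data and settings are invented stand-ins, not anything used by Obvious or AICAN.

```python
# A toy generator-versus-discriminator loop, the core mechanic of a GAN.
# The "real" data here is a 2-D Gaussian blob standing in for a dataset of
# paintings; the architectures and settings are invented for illustration
# and are not those of Obvious or AICAN.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 2) * 0.5 + torch.tensor([2.0, -1.0])  # stand-in for real artworks
    fake = generator(torch.randn(64, 8))                          # samples "imagined" from noise

    # Discriminator learns to label real samples 1 and generated samples 0
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator learns to make the discriminator call its output "real"
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

The tug-of-war between the two networks is the ‘learning’ part; the ‘eventual output’ is whatever the generator produces once the discriminator can no longer tell the difference.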
As might be expected not everyone is excited about AI and visual art. Sonja Drimmer, Professor of Medieval Art, University of Massachusetts at Amherst, provides another perspective on AI, visual art, and, her specialty, art history in her November 1, 2021 essay for The Conversation (Note: Links have been removed),
Over the past year alone, I’ve come across articles highlighting how artificial intelligence recovered a “secret” painting of a “lost lover” of Italian painter Modigliani, “brought to life” a “hidden Picasso nude”, “resurrected” Austrian painter Gustav Klimt’s destroyed works and “restored” portions of Rembrandt’s 1642 painting “The Night Watch.” The list goes on.
As an art historian, I’ve become increasingly concerned about the coverage and circulation of these projects.
They have not, in actuality, revealed one secret or solved a single mystery.
What they have done is generate feel-good stories about AI.
…
Take the reports about the Modigliani and Picasso paintings.
These were projects executed by the same company, Oxia Palus, which was founded not by art historians but by doctoral students in machine learning.
In both cases, Oxia Palus relied upon traditional X-rays, X-ray fluorescence and infrared imaging that had already been carried out and published years prior – work that had revealed preliminary paintings beneath the visible layer on the artists’ canvases.
The company edited these X-rays and reconstituted them as new works of art by applying a technique called “neural style transfer.” This is a sophisticated-sounding term for a program that breaks works of art down into extremely small units, extrapolates a style from them and then promises to recreate images of other content in that same style.
…
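Since “neural style transfer” is doing the heavy lifting in Drimmer’s description, here is a minimal sketch of the idea (the Gatys et al. approach) for the technically curious. It assumes a recent PyTorch/torchvision install; the image paths, layer choices and loss weights are placeholders of my own, not the Oxia Palus pipeline.

```python
# Minimal neural style transfer sketch (the Gatys et al. approach): a style is
# summarized as Gram matrices of convolutional features, and a new image is
# optimized to match them. Assumes torchvision >= 0.13; the image paths, layer
# choices and loss weights are placeholders, not Oxia Palus's pipeline.
import torch
import torch.nn.functional as F
from torchvision.io import read_image
from torchvision.models import vgg19, VGG19_Weights

weights = VGG19_Weights.DEFAULT
features = vgg19(weights=weights).features.eval()
for p in features.parameters():
    p.requires_grad_(False)
preprocess = weights.transforms()

def load(path):
    return preprocess(read_image(path)).unsqueeze(0)   # 1 x 3 x 224 x 224, normalized

def layer_outputs(img, layers=(1, 6, 11, 20, 29)):     # early-to-late VGG19 ReLU layers
    outs, x = [], img
    for i, block in enumerate(features):
        x = block(x)
        if i in layers:
            outs.append(x)
    return outs

def gram(feat):                                        # style = channel-to-channel correlations
    _, c, h, w = feat.shape
    f = feat.view(c, h * w)
    return f @ f.t() / (c * h * w)

content, style = load("content.jpg"), load("style.jpg")
target = content.clone().requires_grad_(True)
opt = torch.optim.Adam([target], lr=0.02)
style_grams = [gram(f).detach() for f in layer_outputs(style)]
content_feat = layer_outputs(content)[3].detach()

for step in range(300):
    feats = layer_outputs(target)
    content_loss = F.mse_loss(feats[3], content_feat)
    style_loss = sum(F.mse_loss(gram(f), g) for f, g in zip(feats, style_grams))
    loss = content_loss + 1e4 * style_loss
    opt.zero_grad(); loss.backward(); opt.step()
# `target` now holds the stylized image (still in VGG-normalized colour space).
```

The Gram matrices are the “extremely small units” extrapolated into a style; the optimization loop is the promise to “recreate images of other content in that same style.”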
As you can ‘see’ my friend, the topic of AI and visual art is a juicy one. In fact, I have another example in my June 27, 2022 posting, which is titled, “Art appraised by algorithm.” So, Grenville’s and Entis’ decision to focus on AI and its impact on visual culture is quite timely.
Visual culture: seeing into the future
The VAG Imitation Game webpage lists these categories of visual culture “animation, architecture, art, fashion, graphic design, urban design and video games …” as being represented in the show. Movies and visual art, not mentioned in the write-up, are represented, while theatre and other performing arts are neither mentioned nor represented. That’s not a surprise.
In addition to an area of science/technology that’s not well understood even by experts, the curators took on the truly amorphous (and overwhelming) topic of visual culture. Given that even writing this commentary has been a challenge, I imagine pulling the show together was quite the task.
Grenville often grounds his shows in a history of the subject and, this time, it seems especially striking. You’re in a building that is effectively a 19th century construct and in galleries that reflect a 20th century ‘white cube’ aesthetic, while looking for clues into the 21st century future of visual culture employing technology that has its roots in the 19th century and, to some extent, began to flower in the mid-20th century.
Chung’s collaboration is one of the only ‘optimistic’ notes about the future and, as noted earlier, it bears a resemblance to Wiener’s 1949 ‘Moth’.
Overall, it seems we are being cautioned about the future. For example, Oxman’s work seems bleak (bees with no flowers to pollinate and living in an eternal spring). Adding in ‘Creepers’ and surveillance along with issues of bias and social injustice reflects hesitation and concern about what we will see, who sees it, and how it will be represented visually.
Learning about robots, automatons, artificial intelligence, and more
I wish the Vancouver Art Gallery (and Vancouver’s other art galleries) would invest a little more in audience education. A couple of tours during the week, by someone who may or may not know what they’re talking about, do not suffice. The extra material about Stephanie Dinkins and her work (“Conversations with Bina48,” 2014–present) came from the de Young Museum’s website. In my July 26, 2021 commentary on North Vancouver’s Polygon Gallery 2021 show “Interior Infinite,” I found background information for artist Zanele Muholi on the Tate Modern’s website. There is nothing on the VAG website that helps you to gain some perspective on the artists’ works.
It seems to me that if the VAG wants to be considered world class, it should conduct itself accordingly; beefing up its website with background information about its current shows would be a good place to start.
Robots, automata, and artificial intelligence
Prior to 1921, robots were known exclusively as automatons. These days, the word ‘automaton’ (or ‘automata’ in the plural) seems to be used to describe purely mechanical representations of humans from over 100 years ago whereas the word ‘robot’ can be either ‘humanlike’ or purely machine, e.g. a mechanical arm that performs the same function over and over. I have a good February 24, 2017 essay on automatons by Miguel Barral for OpenMind BBVA*, which provides some insight into the matter,
The concept of robot is relatively recent. The idea was introduced in 1921 by the Czech writer Karel Capek in his work R.U.R to designate a machine that performs tasks in place of man. But their predecessors, the automatons (from the Greek automata, or “mechanical device that works by itself”), have been the object of desire and fascination since antiquity. Some of the greatest inventors in history, such as Leonardo Da Vinci, have contributed to our fascination with these fabulous creations:
The Al-Jazari automatons
The earliest examples of known automatons appeared in the Islamic world in the 12th and 13th centuries. In 1206, the Arab polymath Al-Jazari, whose creations were known for their sophistication, described some of his most notable automatons: an automatic wine dispenser, a soap and towels dispenser and an orchestra-automaton that operated by the force of water. This latter invention was meant to liven up parties and banquets with music while floating on a pond, lake or fountain.
As the water flowed, it started a rotating drum with pegs that, in turn, moved levers whose movement produced different sounds and movements. As the pegs responsible for the musical notes could be exchanged for different ones in order to interpret another melody, it is considered one of the first programmable machines in history.
…
If you’re curious about automata, my friend, I found this Sept. 26, 2016 ABC news radio news item about singer Roger Daltrey and his wife Heather’s auction of their collection of 19th century French automata (there’s an embedded video showcasing these extraordinary works of art). For more about automata, robots, and androids, there’s an excellent May 4, 2022 article by James Vincent, ‘A visit to the human factory; How to build the world’s most realistic robot‘, for The Verge; Vincent’s article is about Engineered Arts, the UK-based company that built Ai-Da.
AI is often used interchangeably with ‘robot’ but they aren’t the same. Not all robots have AI integrated into their processes. At its simplest, AI is an algorithm or set of algorithms, which may ‘live’ in a CPU and be effectively invisible, or ‘live’ in, or make use of, some kind of machine and/or humanlike body. As the experts have noted, artificial intelligence is a slippery concept.
*BBVA (Banco Bilbao Vizcaya Argentaria) is a Spanish multinational financial services company, which runs the non-profit project OpenMind (About us page) to disseminate information on robotics and so much more.*
You can’t always get what you want
My friend,
I expect many of the show’s shortcomings (as perceived by me) are due to money and/or scheduling issues. For example, Ai-Da was at the Venice Biennale and if there was a choice between the VAG and Biennale, I know where I’d be.
Even with those caveats in mind, it is a bit surprising that there were no examples of wearable technology. For example, Toronto’s Tapestry Opera recently performed R.U.R. A Torrent of Light (based on the word ‘robot’ from Karel Čapek’s play, R.U.R., ‘Rossumovi Univerzální Roboti’), from my May 24, 2022 posting,
I have more about tickets prices, dates, and location later in this post but first, here’s more about the opera and the people who’ve created it from the Tapestry Opera’s ‘R.U.R. A Torrent of Light’ performance webpage,
“This stunning new opera combines dance, beautiful multimedia design, a chamber orchestra including 100 instruments creating a unique electronica-classical sound, and wearable technology [emphasis mine] created with OCAD University’s Social Body Lab, to create an immersive and unforgettable science-fiction experience.”
And, from later in my posting,
“Despite current stereotypes, opera was historically a launchpad for all kinds of applied design technologies. [emphasis mine] Having the opportunity to collaborate with OCAD U faculty is an invigorating way to reconnect to that tradition and foster connections between art, music and design, [emphasis mine]” comments the production’s Director Michael Hidetoshi Mori, who is also Tapestry Opera’s Artistic Director.
That last quote brings me back to my comment about theatre and performing arts not being part of the show. Of course, the curators couldn’t do it all but a website with my hoped-for background and additional information could have helped to solve the problem.
The absence of the theatrical and performing arts in the VAG’s ‘Imitation Game’ is a bit surprising as the Council of Canadian Academies (CCA) in their third assessment, “Competing in a Global Innovation Economy: The Current State of R&D in Canada” released in 2018 noted this (from my April 12, 2018 posting),
Canada, relative to the world, specializes in subjects generally referred to as the humanities and social sciences (plus health and the environment), and does not specialize as much as others in areas traditionally referred to as the physical sciences and engineering. Specifically, Canada has comparatively high levels of research output in Psychology and Cognitive Sciences, Public Health and Health Services, Philosophy and Theology, Earth and Environmental Sciences, and Visual and Performing Arts. [emphasis mine] It accounts for more than 5% of world research in these fields. Conversely, Canada has lower research output than expected in Chemistry, Physics and Astronomy, Enabling and Strategic Technologies, Engineering, and Mathematics and Statistics. The comparatively low research output in core areas of the natural sciences and engineering is concerning, and could impair the flexibility of Canada’s research base, preventing research institutions and researchers from being able to pivot to tomorrow’s emerging research areas. [p. xix Print; p. 21 PDF]
US-centric
My friend,
I was a little surprised that the show was so centered on work from the US given that Grenville has curated at least one show where there was significant input from artists based in Asia. Both Japan and Korea are very active with regard to artificial intelligence and it’s hard to believe that their artists haven’t kept pace. (I’m not as familiar with China and its AI efforts, other than in the field of facial recognition, but it’s hard to believe their artists aren’t experimenting.)
The Americans, of course, are very important developers in the field of AI but they are not alone and it would have been nice to have seen something from Asia and/or Africa and/or something from one of the other Americas. In fact, anything that takes us out of the same old, same old. (Luba Elliott wrote this (2019/2020/2021?) essay, “Artificial Intelligence Art from Africa and Black Communities Worldwide,” on Aya Data if you want to get a sense of some of the activity on the African continent. Elliott does seem to conflate Africa and Black Communities; for some clarity, you may want to check out the Wikipedia entry on Africanfuturism, which contrasts with this August 12, 2020 essay by Donald Maloba, “What is Afrofuturism? A Beginner’s Guide.” Maloba also conflates the two.)
As it turns out, Luba Elliott presented at the 2019 Montréal Digital Spring event, which brings me to Canada’s artificial intelligence and arts scene.
I promise I haven’t turned into a flag-waving zealot, my friend. It’s just odd there isn’t a bit more given that machine learning was pioneered at the University of Toronto. Here’s more about that (from the Wikipedia entry for Geoffrey Hinton),
Geoffrey Everest Hinton CC FRS FRSC[11] (born 6 December 1947) is a British-Canadian cognitive psychologist and computer scientist, most noted for his work on artificial neural networks.
…
Hinton received the 2018 Turing Award, together with Yoshua Bengio [Canadian scientist] and Yann LeCun, for their work on deep learning.[24] They are sometimes referred to as the “Godfathers of AI” and “Godfathers of Deep Learning“,[25][26] and have continued to give public talks together.[27][28]
…
Some of Hinton’s work was started in the US but since 1987, he has pursued his interests at the University of Toronto. He wasn’t proven right until 2012. Katrina Onstad’s 2018 article (Mr. Robot) for Toronto Life is a gripping read about Hinton and his work on neural networks. BTW, Yoshua Bengio (co-Godfather) is a Canadian scientist at the Université de Montréal and Yann LeCun (co-Godfather) is a French scientist at New York University.
Then, there’s another contribution: our government was the first in the world to develop a national artificial intelligence strategy. Adding those developments to the CCA ‘State of Science’ report findings about visual arts and performing arts, is there another word besides ‘odd’ to describe the lack of Canadian voices?
You’re going to point out the installation by Ben Bogart (a member of Simon Fraser University’s Metacreation Lab for Creative AI and instructor at the Emily Carr University of Art + Design (ECU)) but it’s based on the iconic US scifi film, 2001: A Space Odyssey. As for the other Canadian, Sougwen Chung, she left Canada pretty quickly to get her undergraduate degree in the US and has since moved to the UK. (You could describe hers as the quintessential success story, i.e., moving from Canada only to get noticed here after success elsewhere.)
There are also the student teams from Vancouver’s Centre for Digital Media; from the CDM webpage mentioned earlier,
In 2019, Bruce Grenville, Senior Curator at Vancouver Art Gallery, approached [the] Centre for Digital Media to collaborate on several industry projects for the forthcoming exhibition. Four student teams tackled the project briefs over the course of the next two years and produced award-winning installations that are on display until October 23 [2022].
…
Basically, my friend, it would have been nice to see other voices or, at the least, an attempt at representing other voices and visual cultures informed by AI. As for Canadian contributions, maybe put something on the VAG website?
Playing well with others
It’s always a mystery to me why the Vancouver cultural scene seems to be composed of a set of silos or closely guarded kingdoms. Reaching out to the public library and other institutions such as Science World might have cost time but could have enhanced the show.
For example, one of the branches of the New York Public Library ran a programme called, “We are AI” in March 2022 (see my March 23, 2022 posting about the five-week course, which was run as a learning circle). The course materials are available for free (We are AI webpage) and I imagine that adding a ‘visual culture module’ wouldn’t be that difficult.
There is one (rare) example of some Vancouver cultural institutions getting together to offer an art/science programme and that was in 2017 when the Morris and Helen Belkin Gallery (at the University of British Columbia; UBC) hosted an exhibition of Santiago Ramón y Cajal’s work (see my Sept. 11, 2017 posting about the gallery show). Alongside that show, the folks at Café Scientifique held an ancillary event at Science World featuring a panel of professionals from UBC’s Faculty of Medicine and Dept. of Psychology discussing Cajal’s work.
In fact, where were the science and technology communities for this show?
On a related note, the 2022 ACM SIGGRAPH conference (August 7 – 11, 2022) is being held in Vancouver. (ACM is the Association for Computing Machinery; SIGGRAPH is for Special Interest Group on Computer Graphics and Interactive Techniques.) SIGGRAPH has been holding conferences in Vancouver every few years since at least 2011.
This is both an international conference and an exhibition (of art) and the whole thing seems to have kicked off on July 25, 2022. If you’re interested, the programme can be found here and registration here.
Last time SIGGRAPH was here the organizers seemed interested in outreach and they offered some free events.
In the end
It was good to see the show. The curators brought together some exciting material. As is always the case, there were some missed opportunities and a few blind spots. But all is not lost.
July 27, 2022, the VAG held a virtual event with an artist,
… Gwenyth Chao to learn more about what happened to the honeybees and hives in Oxman’s Synthetic Apiary project. As a transdisciplinary artist herself, Chao will also discuss the relationship between art, science, technology and design. She will then guide participants to create a space (of any scale, from insect to human) inspired by patterns found in nature.
Hopefully there will be more events inspired by specific ‘objects’. Meanwhile, on August 12, 2022, the VAG is hosting,
… in partnership with the Canadian Music Centre BC, New Music at the Gallery is a live concert series hosted by the Vancouver Art Gallery that features an array of musicians and composers who draw on contemporary art themes.
Highlighting a selection of twentieth- and twenty-first-century music compositions, this second concert, inspired by the exhibition The Imitation Game: Visual Culture in the Age of Artificial Intelligence, will spotlight The Illiac Suite (1957), the first piece ever written using only a computer, and Kaija Saariaho’s Terra Memoria (2006), which is in a large part dependent on a computer-generated musical process.
…
It would be lovely if they could include an Ada Lovelace Day event. This is an international celebration held on October 11, 2022.
I wonder if this means the end to leaf blowers. That is almost certainly wishful thinking as the researchers don’t seem to be concerned with how the leaves are gathered.
A KAIST [Korea Advanced Institute of Science and Technology] research team has developed graphene-inorganic-hybrid micro-supercapacitors made of fallen leaves using femtosecond direct laser writing (Advanced Functional Materials, “Green Flexible Graphene-Inorganic-Hybrid Micro-Supercapacitors Made of Fallen Leaves Enabled by Ultrafast Laser Pulses”).
The rapid development of wearable electronics requires breakthrough innovations in flexible energy storage devices in which micro-supercapacitors have drawn a great deal of interest due to their high power density, long lifetimes, and short charging times. Recently, there has been an enormous increase in waste batteries owing to the growing demand and the shortened replacement cycle in consumer electronics. The safety and environmental issues involved in the collection, recycling, and processing of such waste batteries are creating a number of challenges.
Forests cover about 30 percent of the Earth’s surface and produce a huge amount of fallen leaves. This naturally occurring biomass comes in large quantities and is completely biodegradable, which makes it an attractive sustainable resource. Nevertheless, if the fallen leaves are left neglected instead of being used efficiently, they can contribute to fire hazards, air pollution, and global warming.
To solve both problems at once, a research team led by Professor Young-Jin Kim from the Department of Mechanical Engineering and Dr. Hana Yoon from the Korea Institute of Energy Research developed a novel technology that can create 3D porous graphene microelectrodes with high electrical conductivity by irradiating femtosecond laser pulses on the leaves in ambient air. This one-step fabrication does not require any additional materials or pre-treatment.
They showed that this technique could quickly and easily produce porous graphene electrodes at a low price, and demonstrated potential applications by fabricating graphene micro-supercapacitors to power an LED and an electronic watch. These results open up a new possibility for the mass production of flexible and green graphene-based electronic devices.
Professor Young-Jin Kim said, “Leaves create forest biomass that comes in unmanageable quantities, so using them for next-generation energy storage devices makes it possible for us to reuse waste resources, thereby establishing a virtuous cycle.”
This research was published in Advanced Functional Materials last month and was sponsored by the Ministry of Agriculture Food and Rural Affairs, the Korea Forest Service, and the Korea Institute of Energy Research.
This is the first time I’ve seen wearable tech based on biological material, in this case, fungi. In diving further into this material (wordplay intended), I discovered some previous work on using fungi for building materials, which you’ll find later in this posting.
Fungi are among the world’s oldest and most tenacious organisms. They are now showing great promise to become one of the most useful materials for producing textiles, gadgets and other construction materials. The joint research venture undertaken by the University of the West of England, Bristol, the U.K. (UWE Bristol) and collaborators from Mogu S.r.l., Italy, Istituto Italiano di Tecnologia, Torino, Italy and the Faculty of Computer Science, Multimedia and Telecommunications of the Universitat Oberta de Catalunya (UOC) has demonstrated that fungi possess incredible properties that allow them to sense and process a range of external stimuli, such as light, stretching, temperature, the presence of chemical substances and even electrical signals. [emphasis mine]
This could help pave the way for the emergence of new fungal materials with a host of interesting traits, including sustainability, durability, repairability and adaptability. Through exploring the potential of fungi as components in wearable devices, the study has verified the possibility of using these biomaterials as efficient sensors with endless possible applications.
People are unlikely to think of fungi as a suitable material for producing gadgets, especially smart devices such as pedometers or mobile phones. Wearable devices require sophisticated circuits that connect to sensors and have at least some computing power, which is accomplished through complex procedures and special materials. This, roughly speaking, is what makes them “smart”. The collaboration of Prof. Andrew Adamatzky and Dr. Anna Nikolaidou from UWE Bristol’s Unconventional Computing Laboratory, Antoni Gandia, Chief Technology Officer at Mogu S.r.l., Prof. Alessandro Chiolerio from Istituto Italiano di Tecnologia, Torino, Italy and Dr. Mohammad Mahdi Dehshibi, researcher with the UOC’s Scene Understanding and Artificial Intelligence Lab (SUNAI) have demonstrated that fungi can be added to the list of these materials.
Indeed, the recent study, entitled “Reactive fungal wearable” and featured in Biosystems, analyses the ability of oyster fungus Pleurotus ostreatus to sense environmental stimuli that could come, for example, from the human body. In order to test the fungus’s response capabilities as a biomaterial, the study analyses and describes its role as a biosensor with the ability to discern between chemical, mechanical and electrical stimuli.
“Fungi make up the largest, most widely distributed and oldest group of living organisms on the planet,” said Dehshibi, who added, “They grow extremely fast and bind to the substrate you combine them with”. According to the UOC researcher, fungi are even able to process information in a way that resembles computers.
“We can reprogramme a geometry and graph-theoretical structure of the mycelium networks and then use the fungi’s electrical activity to realize computing circuits,” said Dehshibi, adding that, “Fungi do not only respond to stimuli and trigger signals accordingly, but also allow us to manipulate them to carry out computational tasks, in other words, to process information”. As a result, the possibility of creating real computer components with fungal material is no longer pure science fiction. In fact, these components would be capable of capturing and reacting to external signals in a way that has never been seen before.
Why use fungi?
These fungi have less to do with diseases and other issues caused by their kin when grown indoors. What’s more, according to Dehshibi, mycelium-based products are already used commercially in construction. He said: “You can mould them into different shapes like you would with cement, but to develop a geometric space you only need between five days and two weeks. They also have a small ecological footprint. In fact, given that they feed on waste to grow, they can be considered environmentally friendly”.
The world is no stranger to so-called “fungal architectures” [emphasis mine], built using biomaterials made from fungi. Existing strategies in this field involve growing the organism into the desired shape using small modules such as bricks, blocks or sheets. These are then dried to kill off the organism, leaving behind a sustainable and odourless compound.
But this can be taken one step further, said the expert, if the mycelia are kept alive and integrated into nanoparticles and polymers to develop electronic components. He said: “This computer substrate is grown in a textile mould to give it shape and provide additional structure. Over the last decade, Professor Adamatzky has produced several prototypes of sensing and computing devices using the slime mould Physarum polycephalum, including various computational geometry processors and hybrid electronic devices.”
The upcoming stretch
Although Professor Adamatzky found that this slime mould is a convenient substrate for unconventional computing, the fact that it is continuously changing prevents the manufacture of long-living devices, and slime mould computing devices are thus confined to experimental laboratory set-ups.
However, according to Dehshibi, thanks to their development and behaviour, basidiomycetes are more readily available, less susceptible to infections, larger in size and more convenient to manipulate than slime mould. In addition, Pleurotus ostreatus, as verified in their most recent paper, can be easily experimented on outdoors, thus opening up the possibility for new applications. This makes fungi an ideal target for the creation of future living computer devices.
The UOC researcher said: “In my opinion, we still have to address two major challenges. The first consists in really implementing [fungal system] computation with a purpose; in other words, computation that makes sense. The second would be to characterize the properties of the fungal substrates via Boolean mapping, in order to uncover the true computing potential of the mycelium networks.” To word it another way, although we know that there is potential for this type of application, we still have to figure out how far this potential goes and how we can tap into it for practical purposes.
We may not have to wait too long for the answers, though. The initial prototype developed by the team, which forms part of the study, will streamline the future design and construction of buildings with unique capabilities, thanks to their fungal biomaterials. The researcher said: “This innovative approach promotes the use of a living organism as a building material that is also fashioned to compute.” When the project wraps up in December 2022, the FUNGAR project will construct a large-scale fungal building in Denmark and Italy, as well as a smaller version on UWE Bristol’s Frenchay Campus.
Dehshibi said: “To date, only small modules such as bricks and sheets have been manufactured. However, NASA [US National Aeronautics Space Administration] is also interested in the idea and is looking for ways to build bases on the Moon and Mars to send inactive spores to other planets.” To conclude, he said: “Living inside a fungus may strike you as odd, but why is it so strange to think that we could live inside something living? It would mark a very interesting ecological shift that would allow us to do away with concrete, glass and wood. Just imagine schools, offices and hospitals that continuously grow, regenerate and die; it’s the pinnacle of sustainable life.”
For the authors of the paper, the point of fungal computers is not to replace silicon chips. Fungal reactions are too slow for that. Rather, they think humans could use mycelium growing in an ecosystem as a “large-scale environmental sensor.” Fungal networks, they reason, are monitoring a large number of data streams as part of their everyday existence. If we could plug into mycelial networks and interpret the signals they use to process information, we could learn more about what was happening in an ecosystem.
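For the technically curious, here is a toy sketch of what ‘plugging into’ a mycelial network might look like at the signal level: record a slow voltage trace and count spike-like events. Everything in it (the synthetic trace, the threshold, the timings) is invented for illustration; it is not code from the “Reactive fungal wearable” paper.

```python
# Toy sketch of the "mycelium as environmental sensor" idea: read a slow
# voltage trace from an electrode pair and flag spike-like events. The
# synthetic trace, threshold and timings are invented for illustration; they
# are not taken from the "Reactive fungal wearable" paper.
import numpy as np

rng = np.random.default_rng(0)
fs = 1.0                                    # one sample per second; fungal spikes are slow
t = np.arange(0, 3600, 1 / fs)              # one hour of recording
trace = 0.05 * rng.standard_normal(t.size)  # mV of electrode noise
for s in rng.choice(t.size - 60, size=12, replace=False):
    trace[s:s + 30] += 0.5 * np.exp(-np.arange(30) / 10)   # add ~0.5 mV slow spikes

def detect_spikes(v, threshold_mv=0.25, refractory_s=60, fs=1.0):
    """Indices where the trace crosses the threshold, with a refractory gap."""
    events, last = [], -np.inf
    for i in np.flatnonzero(v > threshold_mv):
        if i - last >= refractory_s * fs:
            events.append(i)
            last = i
    return np.array(events)

events = detect_spikes(trace)
print(f"{events.size} spike-like events detected in one hour of recording")
```

How a change in spiking rate maps onto a particular chemical, mechanical or electrical stimulus is exactly the interpretation problem the researchers say still needs to be solved.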
Here’s a link to and a citation for the paper,
Reactive fungal wearable by Andrew Adamatzky, Anna Nikolaidou, Antoni Gandia, Alessandro Chiolerio, Mohammad Mahdi Dehshibi. Biosystems Volume 199, January 2021, 104304 DOI: https://doi.org/10.1016/j.biosystems.2020.104304
This paper is behind a paywall.
Fungal architecture and building materials
Here’s a video, which shows the work that inspired the fungal architecture Dr. Dehshibi mentioned in the press release about wearable tech,
The video shows a 2014 Hy-Fi installation by The Living for MoMA (Museum of Modern Art) PS1 in New York City. Here’s more about Hy-Fi and what it inspired from a January 15, 2021 article by Caleb Davies for the EU (European Union) Research and Innovation Magazine and republished on phys.org (Note: Links have been removed),
In the summer of 2014 a strange building began to take shape just outside MoMA PS1, a contemporary art centre in New York City. It looked like someone had started building an igloo and then got carried away, so that the ice-white bricks rose into huge towers. It was a captivating sight, but the truly impressive thing about this building was not so much its looks but the fact that it had been grown.
The installation, called Hy-Fi, was designed and created by The Living, an architectural design studio in New York. Each of the 10,000 bricks had been made by packing agricultural waste and mycelium, the fungus that makes mushrooms, into a mould and letting them grow into a solid mass.
This mushroom monument gave architectural researcher Phil Ayres an idea. “It was impressive,” said Ayres, who is based at the Centre for Information Technology and Architecture in Copenhagen, Denmark. But this project and others like it were using fungus as a component in buildings such as bricks without necessarily thinking about what new types of building we could make from fungi.
That’s why he and three colleagues have begun the FUNGAR project—to explore what kinds of new buildings we might construct out of mushrooms.
First, there was a dress that reflected your emotions. Now, apparently, there’s a dress that reflects your thoughts. Frankly, I don’t understand why anyone would want clothing that performed either function. However, I’m sure there’s an extrovert out there who’s equally puzzled about my take on this matter.
Emotion-reading dress
Before getting to this latest piece of wearable technology, the mind-reading dress, you might find this emotional sensing dress not only interesting but eerily similar,
Philips Design has developed a series of dynamic garments as part of the ongoing SKIN exploration research into the area known as emotional sensing. The garments, which are intended for demonstration purposes only, demonstrate how electronics can be incorporated into fabrics and garments in order to express the emotions and personality of the wearer. The marvelously intricate wearable prototypes include Bubelle, a dress surrounded by a delicate bubble illuminated by patterns that changed dependent on skin contact, and Frison, a body suit that reacts to being blown on by igniting a private constellation of tiny LEDs.

Sensitive rather than intelligent

These garments were developed as part of the SKIN research project, which challenges the notion that our lives are automatically better because they are more digital. It looks at more analog phenomena like emotional sensing and explores technologies that are sensitive rather than intelligent. SKIN belongs to the ongoing, far-future research program carried out at Philips Design. The aim of this program is to identify emerging trends and likely societal shifts and then carry out probes that explore whether there is potential for Philips in some of the more promising areas.

Rethinking our interaction with products and content

According to Clive van Heerden, Senior Director of design-led innovation at Philips Design, the SKIN probe has a much wider context than just garments. As our media becomes progressively more virtual, it is quite possible in long term future that we will no longer have objects like DVD players, or music contained on disks, or books that are actually printed. An opportunity is therefore emerging for us to completely rethink our interaction with products and content. More info: http://www.design.philips.com/about/d…
I first heard about the dress at the 2009 International Symposium of Electronic Arts (2009 ISEA, held in Belfast, Northern Ireland and Dublin, Ireland). Clive van Heerden, who was then working for Philips Design (part of the Dutch multinational originally known widely for its light bulbs and called Royal Philips Electronics), opened vHM Design Futures with Jack Mama in London (UK) in 2011. Should you be curious as to how the project is featured on vHM, check out 2006 SKIN: DRESSES.
Mind-reading dress
Moving on from emotion-sensing clothes to mind-reading clothes,
Mark Wilson’s August 31, 2020 article for Fast Company reflects a sanguine approach to clothing that broadcasts your ‘thoughts’ (Note: Links have been removed),
…
… what if your clothing were a direct reflection on yourself? What if it could literally visualize what you were thinking? That’s the idea of the Pangolin Scales Project, a new brain-reading dress by Dutch fashion designer Anouk Wipprecht [of Anouk Wipprecht FashionTech], with support from the Institute for Integrated Circuits at JKU [Johannes Kepler University Linz] and G.tec medical engineering.
…
… A total of 1,024 brain-reading EEG sensors are placed on someone’s head to measure the electrical activity inside their brain. These sensors have a faceted design that resembles the keratin scales of a pangolin.
… It’s not a message that you can understand just by looking at it. You won’t suddenly know if someone is hungry or thinking of their favorite book just because they’re wearing this dress. But it’s still a captivating visualization of the innermost working of someone’s mind, as well as a proof point: Maybe one day, you really will be able to judge a book by its cover, because that cover will say it all.
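For anyone wondering how 1,024 EEG channels might become something visible on a garment, here is a toy sketch: estimate band power from the signals and map it to an LED colour. The sampling rate, the synthetic data and the colour mapping are my own assumptions, not details from the Pangolin Scales Project.

```python
# Toy sketch of turning EEG activity into a colour for a wearable display:
# estimate band power per frequency band and map it to an RGB value. The
# channel count is the 1,024 mentioned above, but the sampling rate, synthetic
# data and colour mapping are my own assumptions, not Pangolin Scales details.
import numpy as np

fs = 250                                        # Hz, assumed sampling rate
eeg = np.random.randn(1024, fs * 2)             # 2 s of synthetic data, 1,024 electrodes

def band_power(x, fs, lo, hi):
    freqs = np.fft.rfftfreq(x.shape[-1], 1 / fs)
    power = np.abs(np.fft.rfft(x, axis=-1)) ** 2
    return power[..., (freqs >= lo) & (freqs < hi)].mean()

alpha = band_power(eeg, fs, 8, 13)              # relaxed / idling rhythms
beta = band_power(eeg, fs, 13, 30)              # active concentration

ratio = alpha / (alpha + beta)                  # 0 = all beta, 1 = all alpha
led_rgb = (int(255 * (1 - ratio)), 0, int(255 * ratio))   # red when "busy", blue when relaxed
print("LED colour (R, G, B):", led_rgb)
```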
Whether you consider the projects to be analog or digital, they raise interesting questions about privacy.
It’s not ready for the COVID-19 pandemic but if I understand it properly, wearing this clothing will be a little like wearing a thermometer and that could be very useful. A March 4, 2020 news item on Nanowerk announces the research (Note: A link has been removed),
Researchers have reported a new material, pliable enough to be woven into fabric but imbued with sensing capabilities that can serve as an early warning system for injury or illness.
The material, described in a paper published by ACS Applied Nano Materials (“Poly(octadecyl acrylate)-Grafted Multiwalled Carbon Nanotube Composites for Wearable Temperature Sensors”), involves the use of carbon nanotubes and is capable of sensing slight changes in body temperature while maintaining a pliable disordered structure – as opposed to a rigid crystalline structure – making it a good candidate for reusable or disposable wearable human body temperature sensors. Changes in body heat change the electrical resistance, alerting someone monitoring that change to the potential need for intervention.
I think this is an artistic rendering of the research,
“Your body can tell you something is wrong before it becomes obvious,” said Seamus Curran, a physics professor at the University of Houston and co-author on the paper. Possible applications range from detecting dehydration in an ultra-marathoner to the beginnings of a pressure sore in a nursing home patient.
The researchers said it is also cost-effective because the raw materials required are used in relatively low concentrations.
The discovery builds on work Curran and fellow researchers Kang-Shyang Liao and Alexander J. Wang began nearly a decade ago, when they developed a hydrophobic nanocoating for cloth, which they envisioned as a protective coating for clothing, carpeting and other fiber-based materials.
Wang is now a Ph.D. student at Technological University Dublin, currently working with Curran at UH, and is corresponding author for the paper. In addition to Curran and Liao, other researchers involved include Surendra Maharjan, Brian P. McElhenny, Ram Neupane, Zhuan Zhu, Shuo Chen, Oomman K. Varghese and Jiming Bao, all of UH; Kourtney D. Wright and Andrew R. Barron of Rice University, and Eoghan P. Dillon of Analysis Instruments in Santa Barbara.
The material, created using poly(octadecyl acrylate)-grafted multiwalled carbon nanotubes, is technically known as a nanocarbon-based disordered, conductive, polymeric nanocomposite, or DCPN, a class of materials increasingly used in materials science. But most DCPN materials are poor electroconductors, making them unsuitable for use in wearable technologies that require the material to detect slight changes in temperature.
The new material was produced using a technique called RAFT-polymerization, Wang said, a critical step that allows the attached polymer to be electronically and phononically coupled with the multiwalled carbon nanotube through covalent bonding. As such, subtle structural arrangements associated with the glass transition temperature of the system are electronically amplified to produce the exceptionally large electronic responses reported in the paper, without the negatives associated with solid-liquid phase transitions. The subtle structural changes associated with glass transition processes are ordinarily too small to produce large enough electronic responses.
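If you’re wondering how a change in electrical resistance becomes an ‘early warning’, here is a minimal sketch of the monitoring side: convert resistance readings to temperature with a calibration table and raise an alert past a threshold. The calibration values and the 38 °C threshold are invented for illustration; the material’s actual response is what the paper characterizes.

```python
# Sketch of the monitoring side of a fabric temperature sensor: convert
# resistance readings to temperature via a calibration table and raise an
# early-warning alert past a threshold. The calibration points and the 38 degC
# threshold are invented; the material's actual response is what the ACS
# Applied Nano Materials paper characterizes.
import numpy as np

# Hypothetical calibration: resistance (ohms) measured at known temperatures (degC)
cal_temp = np.array([30.0, 34.0, 37.0, 40.0])
cal_res = np.array([1200.0, 1150.0, 1100.0, 1040.0])   # assumed: resistance falls as temperature rises

def temperature_from_resistance(r_ohm):
    # np.interp needs ascending x, so interpolate over resistance sorted ascending
    return float(np.interp(r_ohm, cal_res[::-1], cal_temp[::-1]))

def check_reading(r_ohm, alert_above_c=38.0):
    t = temperature_from_resistance(r_ohm)
    return t, t >= alert_above_c

for reading in (1120.0, 1085.0, 1050.0):        # simulated readings from the fabric
    temp, alert = check_reading(reading)
    print(f"{reading:.0f} ohm -> {temp:.1f} degC", "ALERT" if alert else "ok")
```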
This waffled, greyish thing may not look like much but scientists are hopeful that it can be useful as a health sensor in athletic shoes and elsewhere. A March 6, 2020 news item on Nanowerk describes the work in more detail (Note: Links have been removed),
Researchers have utilized 3D printing and nanotechnology to create a durable, flexible sensor for wearable devices to monitor everything from vital signs to athletic performance (ACS Nano, “3D-Printed Ultra-Robust Surface-Doped Porous Silicone Sensors for Wearable Biomonitoring”).
The new technology, developed by engineers at the University of Waterloo [Ontario, Canada], combines silicone rubber with ultra-thin layers of graphene in a material ideal for making wristbands or insoles in running shoes.
When that rubber material bends or moves, electrical signals are created by the highly conductive, nanoscale graphene embedded within its engineered honeycomb structure.
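As a rough illustration of how a resistance change in the flexed material becomes a usable electrical signal, here’s a small sketch of my own (not from the Waterloo team) of reading a piezoresistive element through a simple voltage divider; the supply voltage, resistor value and sensor resistances are assumptions,

```python
# Illustrative only: a piezoresistive element read through a voltage divider.
# When the sensor's resistance changes as the material bends, the output
# voltage shifts, which a microcontroller ADC can digitize.

V_SUPPLY = 3.3       # supply voltage in volts (assumed)
R_FIXED = 10_000.0   # fixed divider resistor in ohms (assumed)

def divider_output(r_sensor_ohms: float) -> float:
    """Output voltage of the divider: Vout = Vs * R_sensor / (R_sensor + R_fixed)."""
    return V_SUPPLY * r_sensor_ohms / (r_sensor_ohms + R_FIXED)

# Example: the sensor relaxed vs. flexed (resistance values are made up).
for label, r in (("relaxed", 10_000.0), ("flexed", 12_000.0)):
    print(f"{label}: R = {r:.0f} ohm -> Vout = {divider_output(r):.3f} V")
```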
“Silicone gives us the flexibility and durability required for biomonitoring applications, and the added, embedded graphene makes it an effective sensor,” said Ehsan Toyserkani, research director at the Multi-Scale Additive Manufacturing (MSAM) Lab at Waterloo. “It’s all together in a single part.”
Fabricating a silicone rubber structure with such complex internal features is only possible using state-of-the-art 3D printing – also known as additive manufacturing – equipment and processes.
The rubber-graphene material is extremely flexible and durable in addition to being highly conductive.
“It can be used in the harshest environments, in extreme temperatures and humidity,” said Elham Davoodi, an engineering PhD student at Waterloo who led the project. “It could even withstand being washed with your laundry.”
The material and the 3D printing process enable custom-made devices that precisely fit the body shapes of users, while also improving comfort compared to existing wearable devices and reducing manufacturing costs because the process is simpler.
Toyserkani, a professor of mechanical and mechatronics engineering, said the rubber-graphene sensor can be paired with electronic components to make wearable devices that record heart and breathing rates, register the forces exerted when athletes run, allow doctors to remotely monitor patients and numerous other potential applications.
Researchers from the University of California, Los Angeles and the University of British Columbia collaborated on the project.
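Since the press release mentions recording heart and breathing rates, here’s a toy example of my own showing how a periodic signal from such a sensor could be turned into a rate estimate by counting peaks; the signal below is synthetic and the sampling parameters are assumptions, not data from the Waterloo device,

```python
# Sketch of how a periodic strain signal (e.g., chest expansion during breathing)
# could be turned into a rate estimate. The signal here is synthetic; a real
# device would stream ADC samples from the sensor.

import math

SAMPLE_RATE_HZ = 10.0
DURATION_S = 60.0
BREATHS_PER_MIN = 15.0   # frequency of the synthetic signal (assumed)

# Synthetic "sensor" trace: a sine wave at the breathing frequency.
n = int(SAMPLE_RATE_HZ * DURATION_S)
signal = [math.sin(2 * math.pi * (BREATHS_PER_MIN / 60.0) * (i / SAMPLE_RATE_HZ))
          for i in range(n)]

# Count peaks: samples larger than both neighbours and above a threshold.
peaks = sum(
    1 for i in range(1, n - 1)
    if signal[i] > signal[i - 1] and signal[i] > signal[i + 1] and signal[i] > 0.5
)

print(f"Estimated breathing rate: {peaks / (DURATION_S / 60.0):.0f} breaths per minute")
```

A real implementation would, of course, have to filter out motion artifacts and noise before counting anything, but the basic signal-to-rate idea is this simple.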
There’s been a lot of talk about wearable electronics, specifically e-textiles, but nothing seems to have entered the marketplace. Scaling up your lab discoveries for industrial production can be quite problematic. From an October 10, 2019 news item on ScienceDaily,
Producing functional fabrics that perform all the functions we want, while retaining the characteristics of fabric we’re accustomed to is no easy task.
Two groups of researchers at Drexel University – one leading the development of industrial functional fabric production techniques, the other a pioneer in the study and application of one of the strongest, most electrically conductive super materials in use today – believe they have a solution.
They’ve improved a basic element of textiles: yarn. By adding technical capabilities to the fibers that give textiles their character, fit and feel, the team has shown that it can knit new functionality into fabrics without limiting their wearability.
In a paper recently published in the journal Advanced Functional Materials, the researchers, led by Yury Gogotsi, PhD, Distinguished University and Bach professor in Drexel’s College of Engineering, and Genevieve Dion, an associate professor in Westphal College of Media Arts & Design and director of Drexel’s Center for Functional Fabrics, showed that they can create a highly conductive, durable yarn by coating standard cellulose-based yarns with a type of conductive two-dimensional material called MXene.
Hitting snags
“Current wearables utilize conventional batteries, which are bulky and uncomfortable, and can impose design limitations to the final product,” they write. “Therefore, the development of flexible, electrochemically and electromechanically active yarns, which can be engineered and knitted into full fabrics provide new and practical insights for the scalable production of textile-based devices.”
The team reported that its conductive yarn packs more conductive material into the fibers and can be knitted by a standard industrial knitting machine to produce a textile with top-notch electrical performance capabilities. This combination of ability and durability stands apart from the rest of the functional fabric field today.
Most attempts to turn textiles into wearable technology use stiff metallic fibers that alter the texture and physical behavior of the fabric. Other attempts to make conductive textiles using silver nanoparticles and graphene and other carbon materials raise environmental concerns and come up short on performance requirements. And the coating methods that are successfully able to apply enough material to a textile substrate to make it highly conductive also tend to make the yarns and fabrics too brittle to withstand normal wear and tear.
“Some of the biggest challenges in our field are developing innovative functional yarns at scale that are robust enough to be integrated into the textile manufacturing process and withstand washing,” Dion said. “We believe that demonstrating the manufacturability of any new conductive yarn during experimental stages is crucial. High electrical conductivity and electrochemical performance are important, but so are conductive yarns that can be produced by a simple and scalable process with suitable mechanical properties for textile integration. All must be taken into consideration for the successful development of the next-generation devices that can be worn like everyday garments.”
The winning combination
Dion has been a pioneer in the field of wearable technology, drawing on her background in fashion and industrial design to produce new processes for creating fabrics with new technological capabilities. Her work has been recognized by the Department of Defense, which included Drexel and Dion in its Advanced Functional Fabrics of America effort to make the country a leader in the field.
She teamed with Gogotsi, who is a leading researcher in the area of two-dimensional conductive materials, to approach the challenge of making a conductive yarn that would hold up to knitting, wearing and washing.
Gogotsi’s group was part of the Drexel team that discovered highly conductive two-dimensional materials, called MXenes, in 2011 and have been exploring their exceptional properties and applications for them ever since. His group has shown that it can synthesize MXenes that mix with water to create inks and spray coatings without any additives or surfactants – a revelation that made them a natural candidate for making conductive yarn that could be used in functional fabrics. [Gogotsi’s work was featured here in a May 6, 2019 posting]
“Researchers have explored adding graphene and carbon nanotube coatings to yarn, our group has also looked at a number of carbon coatings in the past,” Gogotsi said. “But achieving the level of conductivity that we demonstrate with MXenes has not been possible until now. It is approaching the conductivity of silver nanowire-coated yarns, but the use of silver in the textile industry is severely limited due to its dissolution and harmful effect on the environment. Moreover, MXenes could be used to add electrical energy storage capability, sensing, electromagnetic interference shielding and many other useful properties to textiles.”
In its basic form, titanium carbide MXene looks like a black powder. But it is actually composed of flakes that are just a few atoms thick, which can be produced at various sizes. Larger flakes mean more surface area and greater conductivity, so the team found that it was possible to boost the performance of the yarn by infiltrating the individual fibers with smaller flakes and then coating the yarn itself with a layer of larger-flake MXene.
Putting it to the test
The team created the conductive yarns from three common, cellulose-based yarns: cotton, bamboo and linen. They applied the MXene material via dip-coating, which is a standard dyeing method, before testing them by knitting full fabrics on an industrial knitting machine – the kind used to make most of the sweaters and scarves you’ll see this fall.
Each type of yarn was knit into three different fabric swatches using three different stitch patterns – single jersey, half gauge and interlock – to ensure that they are durable enough to hold up in any textile from a tightly knit sweater to a loose-knit scarf.
“The ability to knit MXene-coated cellulose-based yarns with different stitch patterns allowed us to control the fabric properties, such as porosity and thickness for various applications,” the researchers write.
To put the new threads to the test in a technological application, the team knitted some touch-sensitive textiles – the sort that are being explored by Levi’s and Yves Saint Laurent as part of Google’s Project Jacquard.
Not only did the MXene-based conductive yarns hold up against the wear and tear of the industrial knitting machines, but the fabrics produced also survived a battery of tests to prove their durability. Tugging, twisting, bending and – most importantly – washing did not diminish the touch-sensing abilities of the yarn, the team reported, even after dozens of trips through the spin cycle.
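For the curious, here’s a toy sketch of my own of how touch sensing on a conductive textile is often handled in software: a touch shows up as a jump in measured capacitance above a slowly drifting baseline. The capacitance values, smoothing factor and threshold are invented for illustration and are not taken from the Drexel work,

```python
# Toy example of touch detection on a capacitive textile: a touch shows up as a
# jump in measured capacitance relative to a slowly-updated baseline. The
# capacitance values (in picofarads) are invented for illustration.

readings_pf = [10.1, 10.0, 10.2, 10.1, 14.8, 15.2, 10.3, 10.1]  # hypothetical samples

baseline = readings_pf[0]
ALPHA = 0.1          # baseline smoothing factor (assumed)
THRESHOLD_PF = 2.0   # minimum jump treated as a touch (assumed)

for c in readings_pf:
    touched = (c - baseline) > THRESHOLD_PF
    print(f"C = {c:4.1f} pF  baseline = {baseline:4.1f} pF  touch = {touched}")
    if not touched:
        # Only let the baseline drift when the patch is not being touched.
        baseline = (1 - ALPHA) * baseline + ALPHA * c
```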
Pushing forward
But the researchers suggest that the ultimate advantage of using MXene-coated conductive yarns to produce these special textiles is that all of the functionality can be seamlessly integrated into the textiles. So instead of having to add an external battery to power the wearable device, or wirelessly connect it to your smartphone, these energy storage devices and antennas would be made of fabric as well – an integration that, though literally seamed, is a much smoother way to incorporate the technology.
“Electrically conducting yarns are quintessential for wearable applications because they can be engineered to perform specific functions in a wide array of technologies,” they write.
Using conductive yarns also means that a wider variety of technological customizations and innovations is possible via the knitting process. For example, “the performance of the knitted pressure sensor can be further improved in the future by changing the yarn type, stitch pattern, active material loading and the dielectric layer to result in higher capacitance changes,” according to the authors.
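Because the knitted pressure sensor is described as a capacitive device, a quick back-of-envelope calculation helps show why thinning the dielectric layer under pressure raises the capacitance. The geometry and material numbers below are my own placeholders, not values from the Drexel paper,

```python
# Back-of-envelope parallel-plate model for a knitted capacitive pressure sensor:
# pressing the fabric thins the dielectric layer between the two conductive
# layers, which raises the capacitance. All dimensions are hypothetical.

EPS0 = 8.854e-12   # vacuum permittivity, F/m
EPS_R = 3.0        # assumed relative permittivity of the dielectric layer
AREA_M2 = 1e-4     # 1 cm^2 sensing patch (assumed)

def capacitance_f(gap_m: float) -> float:
    """C = eps0 * eps_r * A / d for an idealized parallel-plate geometry."""
    return EPS0 * EPS_R * AREA_M2 / gap_m

c_rest = capacitance_f(200e-6)      # 200 micrometre gap at rest (assumed)
c_pressed = capacitance_f(120e-6)   # gap compressed to 120 micrometres (assumed)
print(f"At rest: {c_rest * 1e12:.1f} pF")
print(f"Pressed: {c_pressed * 1e12:.1f} pF  (+{100 * (c_pressed / c_rest - 1):.0f}%)")
```

Real knitted sensors won’t behave like ideal parallel plates, but the scaling is the reason a softer, thinner dielectric gives “higher capacitance changes.”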
Dion’s team at the Center for Functional Fabrics is already putting this development to the test in a number of projects, including a collaboration with textile manufacturer Apex Mills – one of the leading producers of material for car seats and interiors. And Gogotsi suggests the next step for this work will be tuning the coating process to add just the right amount of conductive MXene material to the yarn for specific uses.
“With this MXene yarn, so many applications are possible,” Gogotsi said. “You can think about making car seats with it so the car knows the size and weight of the passenger to optimize safety settings; textile pressure sensors could be in sports apparel to monitor performance, or woven into carpets to help connected houses discern how many people are home – your imagination is the limit.”
Researchers have produced a video about their work.
Here’s a link to and a citation for the paper,
Knittable and Washable Multifunctional MXene‐Coated Cellulose Yarns by Simge Uzun, Shayan Seyedin, Amy L. Stoltzfus, Ariana S. Levitt, Mohamed Alhabeb, Mark Anayee, Christina J. Strobel, Joselito M. Razal, Genevieve Dion, Yury Gogotsi. Advanced Functional Materials DOI: https://doi.org/10.1002/adfm.201905015 First published: 05 September 2019
A team of scientists are seeking to kick-start a wearable technology revolution by creating flexible fibres and adding acids from red wine.
Extracting tannic acid from red wine, coffee or black tea led a team of scientists at The University of Manchester to develop much more durable and flexible wearable devices. The addition of tannins improved the mechanical properties of materials such as cotton used in wearable sensors for rehabilitation monitoring, drastically increasing the devices’ lifespan.
The team has developed wearable devices such as capacitive breath sensors and artificial hands for extreme conditions by improving the durability of flexible sensors. Previously, wearable technology has been prone to failure after repeated bending and folding, which can interrupt a device’s conductivity as tiny microcracks form. Improving this could open the door to longer-lasting integrated technology.
Dr Xuqing Liu who led the research team said: “We are using this method to develop new flexible, breathable, wearable devices. The main research objective of our group is to develop comfortable wearable devices for flexible human-machine interface.
“Traditional conductive material suffers from weak bonding to the fibers which can result in low conductivity. When red wine, or coffee, or black tea, is spilled on a dress, it’s difficult to get rid of these stains. The main reason is that they all contain tannic acid, which can firmly adsorb the material on the surface of the fiber. This good adhesion is exactly what we need for durable wearable, conductive devices.”
The new research, published in the journal Small, demonstrated that without this layer of tannic acid the conductivity is several hundred, or even several thousand, times lower than that of traditional conductive material samples, as the conductive coating becomes easily detached from the textile surface through repeated bending and flexing.
It seems wearable electronic textiles may be getting nearer to the marketplace. I have three research items (two teams working with graphene and one working with carbon nanotubes) that appeared on my various feeds within two days of each other.
UK/China
This research study is the result of a collaboration between UK and Chinese scientists. From a May 15, 2019 news item on phys.org (Note: Links have been removed),
Wearable electronic components incorporated directly into fabrics have been developed by researchers at the University of Cambridge. The devices could be used for flexible circuits, healthcare monitoring, energy conversion, and other applications.
The Cambridge researchers, working in collaboration with colleagues at Jiangnan University in China, have shown how graphene – a two-dimensional form of carbon – and other related materials can be directly incorporated into fabrics to produce charge storage elements such as capacitors, paving the way to textile-based power supplies which are washable, flexible and comfortable to wear.
The research, published in the journal Nanoscale, demonstrates that graphene inks can be used in textiles able to store electrical charge and release it when required. The new textile electronic devices are based on low-cost, sustainable and scalable dyeing of polyester fabric. The inks are produced by standard solution processing techniques.
Building on previous work by the same team, the researchers designed inks which can be directly coated onto a polyester fabric in a simple dyeing process. The versatility of the process allows various types of electronic components to be incorporated into the fabric.
Most other wearable electronics rely on rigid electronic components mounted on plastic or textiles. These offer limited compatibility with the skin in many circumstances, are damaged when washed and are uncomfortable to wear because they are not breathable.
“Other techniques to incorporate electronic components directly into textiles are expensive to produce and usually require toxic solvents, which makes them unsuitable to be worn,” said Dr Felice Torrisi from the Cambridge Graphene Centre, and the paper’s corresponding author. “Our inks are cheap, safe and environmentally-friendly, and can be combined to create electronic circuits by simply overlaying different fabrics made of two-dimensional materials on the fabric.”
The researchers suspended individual graphene sheets in a low boiling point solvent, which is easily removed after deposition on the fabric, resulting in a thin and uniform conducting network made up of multiple graphene sheets. The subsequent overlay of several graphene and hexagonal boron nitride (h-BN) fabrics creates an active region, which enables charge storage. This sort of ‘battery’ on fabric is bendable and can withstand washing cycles in a normal washing machine.
“Textile dyeing has been around for centuries using simple pigments, but our result demonstrates for the first time that inks based on graphene and related materials can be used to produce textiles that could store and release energy,” said co-author Professor Chaoxia Wang from Jiangnan University in China. “Our process is scalable and there are no fundamental obstacles to the technological development of wearable electronic devices both in terms of their complexity and performance.”
The work done by the Cambridge researchers opens a number of commercial opportunities for ink based on two-dimensional materials, ranging from personal health and well-being technology, to wearable energy and data storage, military garments, wearable computing and fashion.
“Turning textiles into functional energy storage elements can open up an entirely new set of applications, from body-energy harvesting and storage to the Internet of Things,” said Torrisi. “In the future our clothes could incorporate these textile-based charge storage elements and power wearable textile devices.”
Prior to graphene’s reign as the ‘it’ carbon material, carbon nanotubes (CNTs) ruled. It’s been quieter on the CNT front since graphene took over, but a May 15, 2019 Nanowerk Spotlight article by Michael Berger highlights some of the latest CNT research coming out of India,
…
The most important technical challenge is to reconcile the chemical nature of the raw materials with fabrication techniques and processability – requirements that pull in opposite directions for textiles and for conventional energy storage devices. A team from the Indian Institute of Technology Bombay has come out with a comprehensive approach involving simple and facile steps to fabricate a wearable energy storage device. Several scientific and technological challenges were overcome in the process.
First, to achieve user comfort and compatibility with clothing, the scaffold employed was the same as what regular fabric is made of – cellulose fibers. However, cotton yarns are electrical insulators and therefore practically useless for any electronics, so the yarns are coated with single-wall carbon nanotubes (SWNTs).
SWNTs are hollow, cylindrical allotropes of carbon and combine excellent mechanical strength with electrical conductivity and surface area. Such a coating converts the electrical insulating cotton yarn to a metallic conductor with high specific surface area. At the same time, using carbon-based materials ensures that the final material remains light-weight and does not cause user discomfort that can arise from metallic wires such as copper and gold. This CNT-coated cotton yarn (CNT-wires) forms the electrode for the energy storage device.
Next, the electrolyte is composed of solid-state electrolyte sheets, since liquid electrolytes cannot be used for this purpose. However, solid-state electrolytes suffer from poor ionic conductivity – a major disadvantage for energy storage applications – so a steam-based infiltration approach that enhances the ionic conductivity of the electrolyte is adopted. This added humidity significantly increases the energy storage capacity of the device.
…
The integration of the CNT-wire electrode with the electrolyte sheet was carried out by a simple and elegant approach of interweaving the CNT-wire through the electrolyte (see Figure 1). This resulted in cross-intersections which are actually junctions where the electrical energy can be stored. Each such junction is now an energy storage unit, referred to as sewcap.
The advantage of this process is that hundreds or thousands of sewcaps can be made in a small area and integrated to increase the total amount of energy stored in the system. This scalability is a unique and critical aspect of the work and stems from the interweaving approach.
Further, the process is completely compatible with current practices in the textile industry. Hence, a proportionately large energy storage capacity is achieved by creating sewcap junctions in various combinations.
All components of the final sewcap device are flexible. However, they need to be protected from environmental effects such as temperature, humidity and sweat while retaining their mechanical flexibility. This is achieved by laminating the entire device between polymer sheets, much like the lamination used to protect documents and ID cards.
The laminated sewcap can be integrated easily onto clothing and fabrics while retaining its flexibility and sturdiness. This is demonstrated by the unchanged performance of the device during extreme and harsh mechanical testing, such as striking it repeatedly with a hammer, complete flexing, bending and rolling, and washing in a laundry machine.
In fact, this is the first device that has been proven to be stable under rigorous washing conditions in the presence of hot water, detergents and high torque (spinning action of washing machine). This provides the device with comprehensive mechanical stability.
…
CNTs have high surface area and electrical conductivity. The CNT-wire combines these properties of CNTs with the stability and porosity of cellulose yarns. The junction created by interweaving essentially consists of two such CNT-wires sandwiching an electrolyte. Applying a potential difference polarizes the electrolyte, enabling energy storage much as a conventional capacitor does.
“We use the advantage of the interweaving process and create several such junctions. So, with each junction being able to store a certain amount of electrical energy, all the junctions synchronized are able to store a large amount of energy. This provides high energy density to the device,” points out Prof. C. Subramaniam, Department of Chemistry, IIT Bombay, and corresponding author of the paper.
The device has also been used to light up an LED [light-emitting diode]. This can potentially be scaled to provide the electrical energy demanded by the application.
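To give a feel for how the numbers add up, here’s a quick back-of-envelope sketch of my own: if each junction is treated as a small capacitor and the junctions are connected in parallel, the capacitance – and so the energy stored at a given voltage – scales with the number of junctions. The per-junction capacitance and operating voltage below are made-up illustrative values, not figures from the IIT Bombay paper,

```python
# Rough arithmetic behind the scaling argument: if each interwoven "sewcap"
# junction behaves like a small capacitor and the junctions are wired in
# parallel, capacitances (and hence stored energy at a given voltage) add.
# The per-junction capacitance and operating voltage are made-up numbers.

C_PER_JUNCTION_F = 1e-3   # 1 mF per junction (assumed)
VOLTAGE_V = 1.0           # operating voltage (assumed)

for n_junctions in (1, 100, 1000):
    c_total = n_junctions * C_PER_JUNCTION_F      # parallel capacitors add
    energy_j = 0.5 * c_total * VOLTAGE_V ** 2     # E = 1/2 * C * V^2
    print(f"{n_junctions:5d} junctions -> {c_total:7.3f} F, {energy_j:7.3f} J stored")
```

Papers in this area usually report energy per unit area or mass rather than raw joules; the point here is only the linear scaling with junction count that the interweaving approach makes possible.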
…
This image accompanies the paper written by Prof. C. Subramaniam and his team.
A research team from the University of British Columbia (UBC at the Okanagan Campus) joined the pack with a May 16, 2019 news item on ScienceDaily,
Forget the smart watch. Bring on the smart shirt.
Researchers at UBC Okanagan’s School of Engineering have developed a low-cost sensor that can be interlaced into textiles and composite materials. While the research is still new, the sensor may pave the way for smart clothing that can monitor human movement.
“Microscopic sensors are changing the way we monitor machines and humans,” says Mina Hoorfar, lead researcher at the Advanced Thermo-Fluidic Lab at UBC’s Okanagan campus. “Combining the shrinking of technology along with improved accuracy, the future is very bright in this area.”
This ‘shrinking technology’ uses a phenomenon called piezo-resistivity – an electromechanical response of a material when it is under strain. These tiny sensors have shown great promise in detecting human movements and can be used for heart rate monitoring or temperature control, explains Hoorfar.
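For anyone unfamiliar with the term, the usual figure of merit for a piezo-resistive sensor is the gauge factor: the fractional change in resistance divided by the applied strain. Here’s a one-line calculation of my own with made-up numbers, not measurements from the UBC sensor,

```python
# Piezo-resistivity in one line of arithmetic: the fractional resistance change
# divided by the applied strain gives the gauge factor, GF = (dR/R0) / strain.
# The numbers below are illustrative, not measurements from the UBC device.

R0 = 500.0           # unstrained resistance in ohms (assumed)
R_STRETCHED = 530.0  # resistance at 2% strain (assumed)
STRAIN = 0.02

gauge_factor = ((R_STRETCHED - R0) / R0) / STRAIN
print(f"Gauge factor ~ {gauge_factor:.1f}")  # (0.06) / (0.02) = 3.0
```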
Her research, conducted in partnership with UBC Okanagan’s Materials and Manufacturing Research Institute, shows the potential of a low-cost, sensitive and stretchable yarn sensor. The sensor can be woven into spandex material and then wrapped into a stretchable silicone sheath. This sheath protects the conductive layer against harsh conditions and allows for the creation of washable wearable sensors.
While the idea of smart clothing—fabrics that can tell the user when to hydrate, or when to rest—may change the athletics industry, UBC Professor Abbas Milani says the sensor has other uses. It can monitor deformations in fibre-reinforced composite fabrics currently used in advanced industries such as automotive, aerospace and marine manufacturing.
The low-cost stretchable composite sensor has also shown a high sensitivity and can detect small deformations such as yarn stretching as well as out-of-plane deformations at inaccessible places within composite laminates, says Milani, director of the UBC Materials and Manufacturing Research Institute.
The testing indicates that further improvements in its accuracy could be achieved by fine-tuning the sensor’s material blend and improving its electrical conductivity and sensitivity. This could eventually make it able to capture major flaws like “fibre wrinkling” during the manufacturing of advanced composite structures such as those currently used in airplanes or car bodies.
“Advanced textile composite materials make the most of combining the strengths of different reinforcement materials and patterns with different resin options,” he says. “Integrating sensor technologies like piezo-resistive sensors made of flexible materials compatible with the host textile reinforcement is becoming a real game-changer in the emerging era of smart manufacturing and current automated industry trends.”
Will there be one winner, or will CNTs prove better for one type of wearable tech textile while graphene excels for another?