Category Archives: intellectual property

Canadian copyright quietly extended

As of December 30, 2022, the term of Canadian copyright (copyright is one of the three main elements of intellectual property; the other two are patents and trademarks) will be extended by another 20 years.

Mike Masnick, in his November 29, 2022 posting on Techdirt, explains why this is contrary to the purpose of establishing copyright in the first place, Note: Links have been removed,

… it cannot make sense to extend copyright terms retroactively. The entire point of copyright law is to provide a limited monopoly on making copies of the work as an incentive to get the work produced. Assuming the work was produced, that says that the bargain that was struck was clearly enough of an incentive for the creator. They were told they’d receive that period of exclusivity and thus they created the work.

Going back and retroactively extending copyright then serves no purpose. Creators need no incentive for works already created. The only thing it does is steal from the public. That’s because the “deal” setup by governments creating copyright terms is between the public (who is temporarily stripped of their right to share knowledge freely) and the creator. But if we extend copyright term retroactively, the public then has their end of the bargain (“you will be free to share these works freely after such-and-such a date”) changed, with no recourse or compensation.

Canada has quietly done it: extending copyrights on literary, dramatic or musical works and engravings from life of the author plus 50 years to life of the author plus 70 years. [emphasis mine]

Masnick pointed to a November 23, 2022 posting by Andrea on the Internet Archive Canada blog for how this will affect the Canadian public,

… we now know that this date has been fixed as December 30, 2022, meaning that no new works will enter the Canadian public domain for the next 20 years.

A whole generation of creative works will remain under copyright. This might seem like a win for the estates of popular, internationally known authors, but what about more obscure Canadian works and creators? With circulation over time often being the indicator of ‘value’, many 20th century works are being deselected from physical library collections. …

Edward A. McCourt (1907-1972) is an example of just one of these Canadian creators. Raised in Alberta and a graduate of the University of Alberta, Edward went on to be a Rhodes Scholar in 1932. In 1980, Winnifred Bogaards wrote that:

“[H]e recorded over a period of thirty years his particular vision of the prairies, the region of Canada which had irrevocably shaped his own life. In that time he published five novels and forty-three short stories set (with some exceptions among the earliest stories) in Western Canada, three juvenile works based on the Riel Rebellion, a travel book on Saskatchewan, several radio plays adapted from his western stories, The Canadian West in Fiction (the first critical study of the literature of the prairies), and a biography of the 19th century English soldier and adventurer, Sir William F. Butler… “

In Bogaards’ analysis of his work, “Edward McCourt: A Reassessment” published in the journal Studies in Canadian Literature, she notes that while McCourt has suffered in obscurity, he is often cited along with his contemporaries Hugh MacLennan, Robertson Davies and Irving Layton, all Canadian literary stars. Incidentally, we will also wait an additional 20 years for their works to enter the public domain. The work of Rebecca Giblin, Jacob Flynn, and Francois Petitjean, looking at ‘What Happens When Books Enter the Public Domain?’ is relevant here. Their study shows concretely and empirically that extending copyright has no benefit to the public at all, and only benefits a very few wealthy, well known estates and companies. This term extension will not encourage the publishers of McCourt’s works to invest in making his writing available to a new generation of readers.

This 20 year extension can trace its roots to the trade agreement between the US, Mexico, and Canada (USMCA) that replaced the previous North American Free Trade Agreement (NAFTA), as of July 1, 2020. This is made clear in Michael Geist’s May 2, 2022 Law Bytes podcast where he discusses with Lucie Guibault the (then proposed) Canadian extension in the context of international standards,

Lucie Guibault is an internationally renowned expert on international copyright law, a Professor of Law and Associate Dean at Schulich School of Law at Dalhousie University, and the Associate Director of the school’s Law and Technology Institute.

It’s always good to get some context and in that spirit, here’s more from Michael Geist’s May 2, 2022 Law Bytes podcast,

… Despite recommendations from its own copyright review, students, teachers, librarians, and copyright experts to include a registration requirement [emphasis mine] for the additional 20 years of protection, the government chose to extend term without including protection to mitigate against the harms.

Geist’s podcast discussion with Guibault, where she explains what a ‘registration requirement’ is and how it would work, plus more, runs for almost 27 mins. (May 2, 2022 Law Bytes podcast). One final comment: visual artists and musicians are also affected by copyright rules.

FrogHeart’s 2022 comes to an end as 2023 comes into view

I look forward to 2023 and hope it will be as stimulating as 2022 proved to be. Here’s an overview of the year that was on this blog:

Sounds of science

It seems 2022 was the year that science discovered the importance of sound and the possibilities of data sonification. Neither is new, but this year seemed to signal a surge of interest, or maybe I just happened to stumble onto more of the stories than usual.

This is not an exhaustive list; you can check out my ‘Music’ category for more here. I have tried to include audio files with the postings but it all depends on how accessible the researchers have made them.

Aliens on earth: machinic biology and/or biological machinery?

When I first started following stories in 2008 (?) about technology or machinery being integrated with the human body, it was mostly about assistive technologies such as neuroprosthetics. You’ll find most of this year’s material in the ‘Human Enhancement’ category or you can search the tag ‘machine/flesh’.

However, the line between biology and machine became a bit more blurry for me this year. You can see what’s happening in the titles listed below (you may recognize the xenobot story; there was an earlier version of xenobots featured here in 2021):

This was the story that shook me,

Are the aliens going to come from outer space or are we becoming the aliens?

Brains (biological and otherwise), AI, & our latest age of anxiety

As we integrate machines into our bodies, including our brains, there are new issues to consider:

  • Going blind when your neural implant company flirts with bankruptcy (long read) April 5, 2022 posting
  • US National Academies Sept. 22-23, 2022 workshop on techno, legal & ethical issues of brain-machine interfaces (BMIs) September 21, 2022 posting

I hope the US National Academies issue a report on their “Brain-Machine and Related Neural Interface Technologies: Scientific, Technical, Ethical, and Regulatory Issues – A Workshop” in 2023.

Meanwhile, the race to create brainlike computers continues; I have a number of posts, which can be found under the ‘neuromorphic engineering’ category, or you can use the search terms ‘brainlike computing’ and ‘memristors’.

On the artificial intelligence (AI) side of things, I finally broke down and added an ‘artificial intelligence (AI)’ category to this blog sometime between May and August 2021. Previously, I had used the ‘robots’ category as a catchall. There are other stories, but these ones feature public engagement and policy (btw, it’s a Canadian Science Policy Centre event), respectively,

  • “The “We are AI” series gives citizens a primer on AI” March 23, 2022 posting
  • “Age of AI and Big Data – Impact on Justice, Human Rights and Privacy Zoom event on September 28, 2022 at 12 – 1:30 pm EDT” September 16, 2022 posting

These stories feature problems, which aren’t new but seem to be getting more attention,

While there have been issues over AI, the arts, and creativity previously, this year they sprang into high relief. The list starts with my two-part review of the Vancouver Art Gallery’s AI show; I share most of my concerns in part two. The third post covers intellectual property issues (mostly visual arts but literary arts get a nod too). The fourth post upends the discussion,

  • “Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (1 of 2): The Objects” July 28, 2022 posting
  • “Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (2 of 2): Meditations” July 28, 2022 posting
  • “AI (artificial intelligence) and art ethics: a debate + a Botto (AI artist) October 2022 exhibition in the UK” October 24, 2022 posting
  • Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms? August 30, 2022 posting

Interestingly, most of the concerns seem to be coming from the visual and literary arts communities; I haven’t come across major concerns from the music community. (The curious can check out Vancouver’s Metacreation Lab for Artificial Intelligence [located on a Simon Fraser University campus]. I haven’t seen any cautionary or warning essays there; it’s run by an AI and creativity enthusiast [professor Philippe Pasquier]. The dominant, but not sole, focus is art, specifically music and AI.)

There is a ‘new kid on the block’ which has been attracting a lot of attention this month. If you’re curious about the latest and greatest AI anxiety,

  • Peter Csathy’s December 21, 2022 Yahoo News article (originally published in The WRAP) makes this proclamation in the headline “Chat GPT Proves That AI Could Be a Major Threat to Hollywood Creatives – and Not Just Below the Line | PRO Insight”
  • Mouhamad Rachini’s December 15, 2022 article for the Canadian Broadcasting Corporation’s (CBC) online news offers a more generalized overview of the ‘new kid’ along with an embedded CBC Radio file which runs approximately 19 mins. 30 secs. It’s titled “ChatGPT a ‘landmark event’ for AI, but what does it mean for the future of human labour and disinformation?” The chatbot’s developer, OpenAI, has been mentioned here many times including the previously listed July 28, 2022 posting (part two of the VAG review) and the October 24, 2022 posting.

Opposite world (quantum physics in Canada)

Quantum computing made more of an impact here (my blog) than usual. It started in 2021 with the announcement of a National Quantum Strategy in the Canadian federal government budget for that year and gained some momentum in 2022:

  • “Quantum Mechanics & Gravity conference (August 15 – 19, 2022) launches Vancouver (Canada)-based Quantum Gravity Institute and more” July 26, 2022 posting Note: This turned into one of my ‘in depth’ pieces where I comment on the ‘Canadian quantum scene’ and highlight the appointment of an expert panel for the Council of Canadian Academies’ report on Quantum Technologies.
  • “Bank of Canada and Multiverse Computing model complex networks & cryptocurrencies with quantum computing” July 25, 2022 posting
  • “Canada, quantum technology, and a public relations campaign?” December 29, 2022 posting

This one was a bit of a puzzle with regard to placement in this end-of-year review; it’s quantum but it’s also about brainlike computing.

It’s getting hot in here

Fusion energy made some news this year.

There’s a Vancouver area company, General Fusion, highlighted in both postings and the October posting includes an embedded video of Canadian-born rapper Baba Brinkman’s “You Must LENR” [Low Energy Nuclear Reactions, sometimes Lattice Enabled Nanoscale Reactions, Cold Fusion, or CANR (Chemically Assisted Nuclear Reactions)].

BTW, fusion energy can generate temperatures up to 150 million degrees Celsius.

Ukraine, science, war, and unintended consequences

Here’s what you might expect,

These are the unintended consequences (from the December 26, 2022 essay on The Conversation by Rachel Kyte, Dean of the Fletcher School, Tufts University [h/t December 27, 2022 news item on phys.org]), Note: Links have been removed,

Russian President Vladimir Putin’s war on Ukraine has reverberated through Europe and spread to other countries that have long been dependent on the region for natural gas. But while oil-producing countries and gas lobbyists are arguing for more drilling, global energy investments reflect a quickening transition to cleaner energy. [emphasis mine]

Call it the Putin effect – Russia’s war is speeding up the global shift away from fossil fuels.

In December [2022?], the International Energy Agency [IEA] published two important reports that point to the future of renewable energy.

First, the IEA revised its projection of renewable energy growth upward by 30%. It now expects the world to install as much solar and wind power in the next five years as it installed in the past 50 years.

The second report showed that energy use is becoming more efficient globally, with efficiency increasing by about 2% per year. As energy analyst Kingsmill Bond at the energy research group RMI noted, the two reports together suggest that fossil fuel demand may have peaked. While some low-income countries have been eager for deals to tap their fossil fuel resources, the IEA warns that new fossil fuel production risks becoming stranded, or uneconomic, in the next 20 years.

Kyte’s essay is not all ‘sweetness and light’ but it does provide a little optimism.

Kudos, nanotechnology, culture (pop & otherwise), fun, and a farewell in 2022

This one was a surprise for me,

Sometimes I like to know where the money comes from and I was delighted to learn of the Ărramăt Project funded through the federal government’s New Frontiers in Research Fund (NFRF). Here’s more about the Ărramăt Project from the February 14, 2022 posting,

“The Ărramăt Project is about respecting the inherent dignity and interconnectedness of peoples and Mother Earth, life and livelihood, identity and expression, biodiversity and sustainability, and stewardship and well-being. Arramăt is a word from the Tamasheq language spoken by the Tuareg people of the Sahel and Sahara regions which reflects this holistic worldview.” (Mariam Wallet Aboubakrine)

Over 150 Indigenous organizations, universities, and other partners will work together to highlight the complex problems of biodiversity loss and its implications for health and well-being. The project Team will take a broad approach and be inclusive of many different worldviews and methods for research (i.e., intersectionality, interdisciplinary, transdisciplinary). Activities will occur in 70 different kinds of ecosystems that are also spiritually, culturally, and economically important to Indigenous Peoples.

The project is led by Indigenous scholars and activists …

Kudos to the federal government and all those involved in the Salmon science camps, the Ărramăt Project, and other NFRF projects.

There are many other nanotechnology posts here but this one appeals to my need for something lighter at this point,

  • “Say goodbye to crunchy (ice crystal-laden) in ice cream thanks to cellulose nanocrystals (CNC)” August 22, 2022 posting

The following posts tend to be culture-related, high and/or low but always with a science/nanotechnology edge,

Sadly, it looks like 2022 is the last year that Ada Lovelace Day is to be celebrated.

… this year’s Ada Lovelace Day is the final such event due to lack of financial backing. Suw Charman-Anderson told the BBC [British Broadcasting Corporation] the reason it was now coming to an end was:

You can read more about it here:

In the rearview mirror

A few things that didn’t fit under the previous headings but stood out for me this year. Science podcasts, which were a big feature in 2021, also proliferated in 2022. I think they might have peaked and now (in 2023) we’ll see what survives.

Nanotechnology, the main subject on this blog, continues to be investigated and increasingly integrated into products. You can search the ‘nanotechnology’ category here for posts of interest, something I just tried. It surprises even me (I should know better) how broadly nanotechnology is researched and applied.

If you want a nice tidy list, Hamish Johnston in a December 29, 2022 posting on the Physics World Materials blog has this “Materials and nanotechnology: our favourite research in 2022,” Note: Links have been removed,

“Inherited nanobionics” makes its debut

The integration of nanomaterials with living organisms is a hot topic, which is why this research on “inherited nanobionics” is on our list. Ardemis Boghossian at EPFL [École polytechnique fédérale de Lausanne] in Switzerland and colleagues have shown that certain bacteria will take up single-walled carbon nanotubes (SWCNTs). What is more, when the bacteria cells split, the SWCNTs are distributed amongst the daughter cells. The team also found that bacteria containing SWCNTs produce significantly more electricity when illuminated with light than do bacteria without nanotubes. As a result, the technique could be used to grow living solar cells, which, as well as generating clean energy, also have a negative carbon footprint when it comes to manufacturing.

Getting back to Canada, I’m finding Saskatchewan featured more prominently here. They do a good job of promoting their science, especially the folks at the Canadian Light Source (CLS), Canada’s synchrotron, in Saskatoon. Canadian live science outreach events seem to be coming back (slowly). Cautious organizers (who have a few dollars to spare) are also enthusiastic about hybrid events, which combine online and live outreach.

After what seems like a long pause, I’m stumbling across more international news, e.g. “Nigeria and its nanotechnology research” published December 19, 2022 and “China and nanotechnology” published September 6, 2022. I think there’s also an Iran piece here somewhere.

With that …

Making resolutions in the dark

Hopefully this year I will catch up with the Council of Canadian Academies (CCA) output and finally review a few of their 2021 reports, such as Leaps and Boundaries, a report on artificial intelligence applied to science inquiry, and, perhaps, Powering Discovery, a report on research funding and the Natural Sciences and Engineering Research Council of Canada.

Given what appears to be a renewed campaign to have germline editing (gene editing which affects all of your descendants) approved in Canada, I might even reach back to a late 2020 CCA report, Research to Reality, on somatic gene and engineered cell therapies. It’s not the same as germline editing but gene editing exists on a continuum.

For anyone who wants to see the CCA reports for themselves, they can be found here (both in progress and completed).

I’m also going to be paying more attention to how public relations and special interests influence what science is covered and how it’s covered. In doing this 2022 roundup, I noticed that I featured an overview of fusion energy not long before the breakthrough. Indirect influence on this blog?

My post was precipitated by an article by Alex Pasternack in Fast Company. I’m wondering what prompted Pasternack’s interest in fusion energy since his self-description on the Huffington Post website states this: “… focus on the intersections of science, technology, media, politics, and culture. My writing about those and other topics—transportation, design, media, architecture, environment, psychology, art, music … .”

He might simply have received a press release that stimulated his imagination and/or been approached by a communications specialist or publicist with an idea. There’s a reason why there are so many public relations/media relations jobs and agencies.

Que sera, sera (Whatever will be, will be)

I can confidently predict that 2023 has some surprises in store. I can also confidently predict that the European Union’s big research projects (1B Euros each in funding for the Graphene Flagship and the Human Brain Project over a ten-year period) will sunset in 2023, ten years after they were first announced in 2013, unless the powers that be extend the funding past 2023.

I expect the Canadian quantum community to provide more fodder for me, if nothing else in the form of a 2023 report on Quantum Technologies from the Council of Canadian Academies.

I’ve already featured these 2023 science events but just in case you missed them,

  • 2023 Preview: Bill Nye the Science Guy’s live show and Marvel Avengers S.T.A.T.I.O.N. (Scientific Training And Tactical Intelligence Operative Network) coming to Vancouver (Canada) November 24, 2022 posting
  • September 2023: Auckland, Aotearoa New Zealand set to welcome women in STEM (science, technology, engineering, and mathematics) November 15, 2022 posting

Getting back to this blog, it may not seem like a new year during the first few weeks of 2023 as I have quite the stockpile of draft posts. At this point I have drafts dated from June 2022 and expect to be burning through them so as not to fall further behind, but I will be interspersing them, occasionally, with more current posts.

Most importantly: a big thank you to everyone who drops by to read (and sometimes even comment on) my posts!!! It’s very much appreciated and on that note: I wish you all the best for 2023.

A spray-on dress with nanoparticles as the base?

Even a month after the fact, this is still fascinating. The magic is in watching the paint/textile get sprayed onto the model’s (Bella Hadid’s) body and watching the liquid transform into a textile. (Note: Ms. Hadid has a minimal amount of clothing at the start),

Fashion designer/scientist Manel Torres developed the technology, Fabrican, about 20 years ago, according to an October 14, 2022 article by Gooseed for complex.com,

Coperni, the Parisian ready-to-wear brand founded by Sébastien Meyer and Arnaud Vaillant, has always focused on tailored minimalism since it launched in 2013. Yet it also strives to take an innovative approach to design that connects its collections with the current fashion moment and pay homage to the past.  

The finale of their Spring/Summer 2023 presentation for Paris Fashion Week, where model Bella Hadid walked onto stage half-naked to get sprayed with a white substance, gave the brand a viral moment. At first glance, most of us thought it was a performance. But after a few minutes, the white shell that appeared on Bella’s body looked like a dress solidified into a texture that almost resembled latex. It wasn’t a body painting, but an actual dress. Charlotte Raymond, Coperni’s Head of Design, even helped style the dress by cutting a slit into the garment and altering the straps to make it an off the shoulder silhouette. The rest is history. Videos of the dress blew up on social media and are now anchored in the digital ether.

The truth is that this magic behind the dress is not new. It has been around for almost two decades.

The innovative technology behind Hadid’s Coperni dress was created by Manel Torres, a Spanish fashion designer turned scientist. Torres has been nicknamed “The Chemist Tailor” because of Fabrican, a liquid tissue made up of polymers, additives, and fiber that turns into a solid nonwoven material when it comes into contact with air. That’s why Fabrican can come out of a spray can to instantly create something like Bella’s Coperni dress. It can also be used to create protective covering for furniture or car interiors. Torres founded his business in 2003 and has been researching the possibility of creating clothes, chairs, and medical patches with just one spray for over 20 years and counting.

His journey started first at the La Escuela de Artes y Técnicas de la Moda in Barcelona, where Torres studied arts with a specialty in fashion design. He then enrolled at the Royal College of Art in London where he graduated with an MA in womenswear. He went on to graduate with a PhD from the Royal College of Art in 2001 by publishing a thesis centered on spray-on fabrics from an aerosol can. It was a collaborative thesis between his school’s fashion department and the chemical engineering department at the Imperial College of London. Torres then started creating his own collections with the first versions of Fabrican fabric. Before Coperni, he presented Fabrican at several runway shows like Science in Style in 2010 and during Moscow Fashion Week in 2011.

Despite Torres’ fashion background, he mostly works with clients within the automobile, medical, and sportswear industry. “I’m a fashion guy so my wish is that this industry starts to invest more in technology and not rely so much on branding,” says Torres when sharing his views on the fashion industry a couple days after the Coperni moment.

Torres’ drive to push Fabrican into the fashion business has also garnered the interest of other industries outside of apparel. He says it has made him realize that there are possibilities for new production models in all aspects of design. “This is completely a new idea so it requires a completely new approach. That in an industry like fashion, and in any industry in general, is going to take some time,” says Torres. He is patient and persistent about achieving his number one goal, which is to make Fabrican available for everybody.

Additionally, since Fabrican is plant-based and composed of natural fibers, it can be used as an alternative to animal-derived leathers. The fabric can also be washed and reused and sprayed on to again to extend the garment. Torres hopes to grow Fabrican to an industrial scale with the help of a robotic arm spray system that could quickly create complex forms in a very precise way and operate 24 hours a day, which could significantly reduce human labor and product costs associated with garment production. The durability of the fabric is also something that Torres assures to be “very similar to the clothes we use daily but needs to be improved.” He reveals that he’s currently working with the German government to apply Fabrican technology to produce uniforms.

…  

For the curious, there are more images and videos embedded, as well as the links I have eliminated from the excerpts, in Gooseed’s October 14, 2022 article.

Eglė Radžiūtė’s October 3 (?), 2022 article for boredpanda.com fills out the fashion commentary with a bit more detail about the science, Note: Links have been removed,

In about 9 minutes, Bella’s body was engulfed in a light layer of fabric. Once the fabric had a second to settle, Coperni’s Head of Design Charlotte Raymond came up to wipe off the excess and shape the dress into its final form. Lowering the shoulder straps, cutting the bottom to mid-calf length, and adding a slit on Bella’s left leg, Charlotte completed something that was out of this world.

The segment was not previously rehearsed with Bella due to her Paris Fashion Week schedule, adding to the magic, as well as showing off the professionalism of the dress’s engineers, the designers, and Bella herself. The night before the show, a model stood in for Bella, but she couldn’t control her shivering on the chilly runway as the cold material hit her skin.

“I was so nervous,” Bella said backstage, as it would have been her first experience being sprayed. But she didn’t let it show. She was steely and delicate, occasionally raising her arms above her head with an elegant flair, or offering a little smile at the people working on her. “I kind of just became the character, whoever she is.”

Wasn’t it cold up there? “Honey, cold is an understatement,” Bella said, as reported by the NYTimes. “I really blacked out.” Yet as soon as she left the runway, she felt like the performance had been a “pinnacle moment” in her career.

Let’s dive into the science behind the dress. Partnering with Doctor Manel Torres, Founder and Managing Director of Fabrican Ltd, they utilized a spray-on fabric that, once sprayed, dries to create a wearable, non-woven textile. It can be made using different types of fibers: from natural to synthetic, including wool, cotton, nylon, cellulose, and carbon nanofibers. [emphasis mine]

Based in London [UK], at the London Bioscience Innovation Center, Doctor Torres has been working on this multifaceted piece of technology since 2003. A liquid suspension—a finely distributed solid in a liquid, which is not dissolved—is applied via spray gun or aerosol to a surface, creating a fabric. The cross-linking of fibers, which adhere to one another, creates an instant non-woven fabric.

The future-forward invention may be used for more than just creating intricate fashion; they believe it can revolutionize multiple industries. As stated on BBC’s The Imagineers, the fabric is sterile and thus can be made into bandages. It can be made to set hard and, thus, could be used as a cast for broken bones. But perhaps most crucially, the fabric absorbs oil, and so it could be used to clean up after oil tanker disasters.

Whilst in pictures the dress looked to be made of a kind of silk or cotton, those who got close enough to touch it discovered that it felt soft but elastic, bumpy like a sponge. According to Arnaud, the dress was taken off like any other tight, slightly stretchy one: a process of peeling off and shimmying out. It can be hung and washed, or put back into the bottle of its original solution to regenerate.

Coperni is an ultra-modern Parisian ready-to-wear and accessories brand designed by Sébastien Meyer and Arnaud Vaillant. Established in 2013, the pair have been on a mission to find the intersection between fashion and technology, “marrying exhaustive origami-like technique with a neat, ‘sportif’ silhouette.”

You can better see the dress’s texture in this image,

Image credits: bellahadid [downloaded from https://www.boredpanda.com/bella-hadid-coperni-spray-on-dress/?utm_source=duckduckgo&utm_medium=referral&utm_campaign=organic]

Health concerns

Do read the comments at the end of Eglė Radžiūtė’s October 3 (?), 2022 article. Most are admiring but there is a cautionary note from a construction painter noting that no one wore any “respiratory protective devices.” An ‘industrial hygienist’ seconded the painter’s concern (“that stuff is in their lungs”), as would anyone concerned with lung health.

The science of a spray-on textile

You can glean some information from his patent filings (where you’ll find mention of nanosilica but not of the carbon nanofibers mentioned in Radžiūtė’s article), Non-woven fabric Patent number: 8124549; Non-woven fabric Patent number: 8088315; Non-Woven Fabric Publication number: 20100286583; Non-Woven Fabric Publication number: 20090036014; and Non-woven fabric Publication number: 20050222320 on justia.com. The full list of Torres’ patents is here.

I’m guessing there’s more than one kind of engineered nanomaterial to be found in Torres’ mixtures but he’s pretty careful not to spill too much information. Charlotte Hu, in her October 4, 2022 article for Popular Science, helps to further decode the information in the patents (Note: Links have been removed),

This instantaneously materialized dress is not a magic trick, but a testament to innovations in material science more than two decades in the making. The man behind the creation is Manel Torres, who in 2003 created the substance used on Hadid, Fabrican (presumably a portmanteau of the phrase “fabric in a can”). His inspiration? Silly string and spiderwebs. His idea was to elevate the coarse cords of the silly string into a finer fabric that could be dispersed through a mist. Torres explained in a 2013 Ted Talk that when this spray-on fabric comes in contact with air, it turns into a solid material that’s stretchy and feels like suede. 

What exactly is in Fabrican? According to the patents granted to the company, the liquid fabric is made up of a suspension of liquid polymers (large molecules bonded together), additives, binders like natural latex, cross-linked natural and synthetic fibers, and a fast-evaporating solvent like acetone. The fibers can be polyester, polypropylene, cotton, linen, or wool. 

Torres added that they can easily form the material around 3D molds or patterns and tweak the textures, so they can get something that’s fleece-like, paper-like, lace-like, or rubber-like. He imagined that people could go into a booth, customize their dress, and instantly have it 3D printed onto their bodies. The spray could even be used for spot repairs on existing clothing.  

… Fabrican states on its website that it uses “fibres recycled from discarded clothes and other fabrics. The technology can also utilise biodegradable fibres and binders in place of fossil-based polymers to reduce the carbon footprint of material and manufacturing.” Additionally, the company said that “at the end of their useful life, sprayed fabrics can be re-dissolved and sprayed anew.”  

For the curious, here’s the Fabrican Ltd. website, the Coperni website, and a Wikipedia entry for Silly String.

I have another story about producing something in midair in a May 17, 2016 posting titled: Printing in midair. That was about 3D printing metallic devices in midair.

H/t to the Celebrity Social Media October 3, 2022 posting on Laineygossip.com (keep scrolling, about 75% of the way down) and to Rosemary Hurst, whose comments about the dress led me to Charlotte Hu’s article. *ETA: November 4, 2022 at 1550 PT: Rosemary compared it to a process for handmaking paper.*

Keeping your hands cool and your coffee hot with a cup cozy inspired by squid skin

Researchers in the Department of Chemical and Biomolecular Engineering at the University of California, Irvine have invented a squid-skin inspired material that can wrap around a coffee cup to shield sensitive fingers from heat. They have also created a method for economically mass producing the adaptive fabric, making possible a wide range of uses. Credit: Melissa Sung Courtesy: University of California Irvine

I love that image. Melissa Sung, thank you. Sadly, squid-inspired cup cozies aren’t available yet, according to a March 28, 2022 news item on phys.org, but researchers are working on it, Note: Links have been removed,

In the future, you may have a squid to thank for your coffee staying hot on a cold day. Drawing inspiration from cephalopod skin, engineers at the University of California, Irvine invented an adaptive composite material that can insulate beverage cups, restaurant to-go bags, parcel boxes and even shipping containers.

The innovation is an infrared-reflecting metallized polymer film developed in the laboratory of Alon Gorodetsky, UCI associate professor of chemical and biomolecular engineering. In a paper published today [March 28, 2022] in Nature Sustainability, Gorodetsky and his team members describe a large-area composite material that regulates heat by means of reconfigurable metal structures that can reversibly separate from one another and come back together under different strain levels.

“The metal islands in our composite material are next to one another when the material is relaxed and become separated when the material is stretched, allowing for control of the reflection and transmission of infrared light or heat dissipation,” said Gorodetsky. “The mechanism is analogous to chromatophore expansion and contraction in a squid’s skin, which alters the reflection and transmission of visible light.”

Chromatophore size changes help squids communicate and camouflage their bodies to evade predators and hide from prey. Gorodetsky said by mimicking this approach, his team has enabled “tunable thermoregulation” in their material, which can lead to improved energy efficiency and protect sensitive fingers from hot surfaces.

A March 28, 2022 University of California at Irvine (UCI) news release (also on EurekAlert), which originated the news item, delves further into this squid-inspired research and its commercialization,

A key breakthrough of this project was the UCI researchers’ development of a cost-effective production method of their composite material at application-relevant quantities. The copper and rubber raw materials start at about a dime per square meter with the costs reduced further by economies of scale, according to the paper. The team’s fabrication technique involves depositing a copper film onto a reusable substrate such as aluminum foil and then spraying multiple polymer layers onto the copper film, all of which can be done in nearly any batch size imaginable.

“The combined manufacturing strategy that we have now perfected in our lab is a real game changer,” said Gorodetsky. “We have been working with cephalopod-inspired adaptive materials and systems for years but previously have only been able to fabricate them over relatively small areas. Now there is finally a path to making this stuff roll-by-roll in a factory.”

The developed strategy and economies of scale should make it possible for the composite material to be used in a wide range of applications, from the coffee cup cozy up to tents, or in any container in which tunable temperature regulation is desired.

The invention will go easy on the environment due to its environmental sustainability, said lead author Mohsin Badshah, a former UCI postdoctoral scholar in chemical and biomolecular engineering. “The composite material can be recycled in bulk by removing the copper with vinegar and using established commercial methods to repurpose the remaining stretchable polymer,” he said.

The team conducted universally relatable coffee cup testing in their laboratory on the UCI campus, where they proved they could control the cooling of the coffee. They were able to accurately and theoretically predict and then experimentally confirm the changes in temperature for the beverage-filled cups. The team was also able to achieve a 20-fold modulation of infrared radiation transmittance and a 30-fold regulation of thermal fluxes under standardized testing conditions. The stable material even worked well for high levels of mechanical deformation and after repeated mechanical cycling.

“There is an enormous array of applications for this material,” said Gorodetsky. “Think of all the perishable goods that have been delivered to people’s homes during the pandemic. Any package that Amazon or another company sends that needs to be temperature-controlled can use a lining made from our squid-inspired adaptive composite material. Now that we can make large sheets of it at a time, we have something that can benefit many aspects of our lives.”

Joining Gorodetsky and Badshah on this project were Erica Leung, who recently graduated UCI with a Ph.D. in chemical and biomolecular engineering, and Aleksandra Strzelecka and Panyiming Liu, who are current UCI graduate students. The research was funded by the Defense Advanced Research Projects Agency, the Advanced Research Projects Agency – Energy and the Air Force Office of Scientific Research. A provisional patent for the technology and manufacturing process has been applied for.

Here’s a link to and a citation for the paper,

Scalable manufacturing of sustainable packaging materials with tunable thermoregulability by Mohsin Ali Badshah, Erica M. Leung, Panyiming Liu, Aleksandra Anna Strzelecka & Alon A. Gorodetsky. Nature Sustainability (2022) DOI: https://doi.org/10.1038/s41893-022-00847-2 Published: 28 March 2022

This paper is behind a paywall.

AI (artificial intelligence) and art ethics: a debate + a Botto (AI artist) October 2022 exhibition in the UK

Who is an artist? What is an artist? Can everyone be an artist? These are the kinds of questions you can expect with the rise of artificially intelligent artists/collaborators. Of course, these same questions have been asked many times before the rise of AI (artificial intelligence) agents/programs in the field of visual art. Each time the questions are raised is an opportunity to examine our beliefs from a different perspective. And, not to be forgotten, there are questions about money.

The shock

First, the ‘art’,

The winning work. Colorado State Fair 2022. Screengrab from Discord [downloaded from https://www.artnews.com/art-news/news/colorado-state-fair-ai-generated-artwork-controversy-1234638022/]

Shanti Escalante-De Mattei’s September 1, 2022 article for ArtNews.com provides an overview of the latest AI art controversy (Note: A link has been removed),

The debate around AI art went viral once again when a man won first place at the Colorado State Fair’s art competition in the digital category with a work he made using text-to-image AI generator Midjourney.

Twitter user and digital artist Genel Jumalon tweeted out a screenshot from a Discord channel in which user Sincarnate, aka game designer Jason Allen, celebrated his win at the fair. Jumalon wrote, “Someone entered an art competition with an AI-generated piece and won the first prize. Yeah that’s pretty fucking shitty.”

The comments on the post range from despair and anger as artists, both digital and traditional, worry that their livelihoods might be at stake after years of believing that creative work would be safe from AI-driven automation. [emphasis mine]

Rachel Metz’s September 3, 2022 article for CNN provides more details about how the work was generated (Note: Links have been removed),

Jason M. Allen was almost too nervous to enter his first art competition. Now, his award-winning image is sparking controversy about whether art can be generated by a computer, and what, exactly, it means to be an artist.

In August [2022], Allen, a game designer who lives in Pueblo West, Colorado, won first place in the emerging artist division’s “digital arts/digitally-manipulated photography” category at the Colorado State Fair Fine Arts Competition. His winning image, titled “Théâtre D’opéra Spatial” (French for “Space Opera Theater”), was made with Midjourney — an artificial intelligence system that can produce detailed images when fed written prompts. A $300 prize accompanied his win.

Allen’s winning image looks like a bright, surreal cross between a Renaissance and steampunk painting. It’s one of three such images he entered in the competition. In total, 11 people entered 18 pieces of art in the same category in the emerging artist division.

The definition for the category in which Allen competed states that digital art refers to works that use “digital technology as part of the creative or presentation process.” Allen stated that Midjourney was used to create his image when he entered the contest.

The newness of these tools, how they’re used to produce images, and, in some cases, the gatekeeping for access to some of the most powerful ones has led to debates about whether they can truly make art or assist humans in making art.

This came into sharp focus for Allen not long after his win. Allen had posted excitedly about his win on Midjourney’s Discord server on August 25 [2022], along with pictures of his three entries; it went viral on Twitter days later, with many artists angered by Allen’s win because of his use of AI to create the image, as a story by Vice’s Motherboard reported earlier this week.

“This sucks for the exact same reason we don’t let robots participate in the Olympics,” one Twitter user wrote.

“This is the literal definition of ‘pressed a few buttons to make a digital art piece’,” another Tweeted. “AI artwork is the ‘banana taped to the wall’ of the digital world now.”

Yet while Allen didn’t use a paintbrush to create his winning piece, there was plenty of work involved, he said.

“It’s not like you’re just smashing words together and winning competitions,” he said.

You can feed a phrase like “an oil painting of an angry strawberry” to Midjourney and receive several images from the AI system within seconds, but Allen’s process wasn’t that simple. To get the final three images he entered in the competition, he said, took more than 80 hours.

First, he said, he played around with phrasing that led Midjourney to generate images of women in frilly dresses and space helmets — he was trying to mash up Victorian-style costuming with space themes, he said. Over time, with many slight tweaks to his written prompt (such as to adjust lighting and color harmony), he created 900 iterations of what led to his final three images. He cleaned up those three images in Photoshop, such as by giving one of the female figures in his winning image a head with wavy, dark hair after Midjourney had rendered her headless. Then he ran the images through another software program called Gigapixel AI that can improve resolution and had the images printed on canvas at a local print shop.
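Out of curiosity about what this kind of iterative prompting looks like in code, here is a minimal, hypothetical sketch. It uses the open-source Stable Diffusion model (which comes up again below) through Hugging Face’s diffusers library rather than Midjourney, which is operated through Discord and has no public Python interface; the model identifier, prompt wording, and settings are my own illustrative guesses, not Allen’s.

# A rough, hypothetical sketch of an "iterate on a prompt" workflow, using the
# open-source Stable Diffusion model via Hugging Face's diffusers library
# (not Midjourney); the model ID, prompt, and settings are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = ("a woman in a frilly Victorian dress and space helmet, "
          "opera house interior, dramatic lighting, highly detailed")

# Generate a handful of variations of the same prompt; an artist would tweak
# the wording between batches, accumulating hundreds of iterations over time.
for seed in range(4):
    generator = torch.Generator("cuda").manual_seed(seed)
    image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5,
                 generator=generator).images[0]
    image.save(f"iteration_{seed:03d}.png")

# A chosen image would then be retouched (e.g., in Photoshop) and upscaled
# with a separate tool before printing.

The point of the sketch is the loop: most of the labour Allen describes is in revising the prompt and selecting among many generated candidates, not in any single button press.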

Ars Technica has run a number of articles on the subject of art and AI; Benj Edwards, in an August 31, 2022 article, seems to have been one of the first to comment on Jason Allen’s win (Note 1: Links have been removed; Note 2: Look at how Edwards identifies Jason Allen as an artist),

A synthetic media artist named Jason Allen entered AI-generated artwork into the Colorado State Fair fine arts competition and announced last week that he won first place in the Digital Arts/Digitally Manipulated Photography category, Vice reported Wednesday [August 31, 2022?] based on a viral tweet.

Allen’s victory prompted lively discussions on Twitter, Reddit, and the Midjourney Discord server about the nature of art and what it means to be an artist. Some commenters think human artistry is doomed thanks to AI and that all artists are destined to be replaced by machines. Others think art will evolve and adapt with new technologies that come along, citing synthesizers in music. It’s a hot debate that Wired covered in July [2022].

It’s worth noting that the invention of the camera in the 1800s prompted similar criticism related to the medium of photography, since the camera seemingly did all the work compared to an artist that labored to craft an artwork by hand with a brush or pencil. Some feared that painters would forever become obsolete with the advent of color photography. In some applications, photography replaced more laborious illustration methods (such as engraving), but human fine art painters are still around today.

Benj Edwards in a September 12, 2022 article for Ars Technica examines how some art communities are responding (Note: Links have been removed),

Confronted with an overwhelming amount of artificial-intelligence-generated artwork flooding in, some online art communities have taken dramatic steps to ban or curb its presence on their sites, including Newgrounds, Inkblot Art, and Fur Affinity, according to Andy Baio of Waxy.org.

Baio, who has been following AI art ethics closely on his blog, first noticed the bans and reported about them on Friday [Sept. 9, 2022?]. …

The arrival of widely available image synthesis models such as Midjourney and Stable Diffusion has provoked an intense online battle between artists who view AI-assisted artwork as a form of theft (more on that below) and artists who enthusiastically embrace the new creative tools.

… a quickly evolving debate about how art communities (and art professionals) can adapt to software that can potentially produce unlimited works of beautiful art at a rate that no human working without the tools could match.

A few weeks ago, some artists began discovering their artwork in the Stable Diffusion data set, and they weren’t happy about it. Charlie Warzel wrote a detailed report about these reactions for The Atlantic last week [September 7, 2022]. With battle lines being drawn firmly in the sand and new AI creativity tools coming out steadily, this debate will likely continue for some time to come.

Filthy lucre becomes more prominent in the conversation

Lizzie O’Leary, in a September 12, 2022 article for Fast Company, presents a transcript of an interview (from the TBD podcast) she conducted with Drew Harwell (tech reporter covering A.I. for the Washington Post) about the ‘Jason Allen’ win,

I’m struck by how quickly these art A.I.s are advancing. DALL-E was released in January of last year and there were some pretty basic images. And then, a year later, DALL-E 2 is using complex, faster methods. Midjourney, the one Jason Allen used, has a feature that allows you to upscale and downscale images. Where is this sudden supply and demand for A.I. art coming from?

You could look back to five years ago when they had these text-to-image generators and the output would be really crude. You could sort of see what the A.I. was trying to get at, but we’ve only really been able to cross that photorealistic uncanny valley in the last year or so. And I think the things that have contributed to that are, one, better data. You’re seeing people invest a lot of money and brainpower and resources into adding more stuff into bigger data sets. We have whole groups that are taking every image they can get on the internet. Billions, billions of images from Pinterest and Amazon and Facebook. You have bigger data sets, so the A.I. is learning more. You also have better computing power, and those are the two ingredients to any good piece of A.I. So now you have A.I. that is not only trained to understand the world a little bit better, but it can now really quickly spit out a very finely detailed generated image.

Is there any way to know, when you look at a piece of A.I. art, what images it referenced to create what it’s doing? Or is it just so vast that you can’t kind of unspool it backward?

When you’re doing an image that’s totally generated out of nowhere, it’s taking bits of information from billions of images. It’s creating it in a much more sophisticated way so that it’s really hard to unspool.

Art generated by A.I. isn’t just a gee-whiz phenomenon, something that wins prizes, or even a fascinating subject for debate—it has valuable commercial uses, too. Some that are a little frightening if you’re, say, a graphic designer.

You’re already starting to see some of these images illustrating news articles, being used as logos for companies, being used in the form of stock art for small businesses and websites. Anything where somebody would’ve gone and paid an illustrator or graphic designer or artist to make something, they can now go to this A.I. and create something in a few seconds that is maybe not perfect, maybe would be beaten by a human in a head-to-head, but is good enough. From a commercial perspective, that’s scary, because we have an industry of people whose whole job is to create images, now running up against A.I.

And the A.I., again, in the last five years, the A.I. has gotten better and better. It’s still not perfect. I don’t think it’ll ever be perfect, whatever that looks like. It processes information in a different, maybe more literal, way than a human. I think human artists will still sort of have the upper hand in being able to imagine things a little more outside of the box. And yet, if you’re just looking for three people in a classroom or a pretty simple logo, you’re going to go to A.I. and you’re going to take potentially a job away from a freelancer whom you would’ve given it to 10 years ago.

I can see a use case here in marketing, in advertising. The A.I. doesn’t need health insurance, it doesn’t need paid vacation days, and I really do wonder about this idea that the A.I. could replace the jobs of visual artists. Do you think that is a legitimate fear, or is that overwrought at this moment?

I think it is a legitimate fear. When something can mirror your skill set, not 100 percent of the way, but enough of the way that it could replace you, that’s an issue. Do these A.I. creators have any kind of moral responsibility to not create it because it could put people out of jobs? I think that’s a debate, but I don’t think they see it that way. They see it like they’re just creating the new generation of digital camera, the new generation of Photoshop. But I think it is worth worrying about because even compared with cameras and Photoshop, the A.I. is a little bit more of the full package and it is so accessible and so hard to match in terms. It’s really going to be up to human artists to find some way to differentiate themselves from the A.I.

This is making me wonder about the humans underneath the data sets that the A.I. is trained on. The criticism is, of course, that these businesses are making money off thousands of artists’ work without their consent or knowledge and it undermines their work. Some people looked at the Stable Diffusion and they didn’t have access to its whole data set, but they found that Thomas Kinkade, the landscape painter, was the most referenced artist in the data set. Is the A.I. just piggybacking? And if it’s not Thomas Kinkade, if it’s someone who’s alive, are they piggybacking on that person’s work without that person getting paid?

Here’s a bit more on the topic of money and art in a September 19, 2022 article by John Herrman for New York Magazine. He starts with the literary arts, Note: Links have been removed,

Artificial-intelligence experts are excited about the progress of the past few years. You can tell! They’ve been telling reporters things like “Everything’s in bloom,” “Billions of lives will be affected,” and “I know a person when I talk to it — it doesn’t matter whether they have a brain made of meat in their head.”

We don’t have to take their word for it, though. Recently, AI-powered tools have been making themselves known directly to the public, flooding our social feeds with bizarre and shocking and often very funny machine-generated content. OpenAI’s GPT-3 took simple text prompts — to write a news article about AI or to imagine a rose ceremony from The Bachelor in Middle English — and produced convincing results.

Deepfakes graduated from a looming threat to something an enterprising teenager can put together for a TikTok, and chatbots are occasionally sending their creators into crisis.

More widespread, and probably most evocative of a creative artificial intelligence, is the new crop of image-creation tools, including DALL-E, Imagen, Craiyon, and Midjourney, which all do versions of the same thing. You ask them to render something. Then, with models trained on vast sets of images gathered from around the web and elsewhere, they try — “Bart Simpson in the style of Soviet statuary”; “goldendoodle megafauna in the streets of Chelsea”; “a spaghetti dinner in hell”; “a logo for a carpet-cleaning company, blue and red, round”; “the meaning of life.”

This flood of machine-generated media has already altered the discourse around AI for the better, probably, though it couldn’t have been much worse. In contrast with the glib intra-VC debate about avoiding human enslavement by a future superintelligence, discussions about image-generation technology have been driven by users and artists and focus on labor, intellectual property, AI bias, and the ethics of artistic borrowing and reproduction [emphasis mine]. Early controversies have cut to the chase: Is the guy who entered generated art into a fine-art contest in Colorado (and won!) an asshole? Artists and designers who already feel underappreciated or exploited in their industries — from concept artists in gaming and film and TV to freelance logo designers — are understandably concerned about automation. Some art communities and marketplaces have banned AI-generated images entirely.

Requests are effectively thrown into “a giant swirling whirlpool” of “10,000 graphics cards,” Holz [David Holz, Midjourney founder] said, after which users gradually watch them take shape, gaining sharpness but also changing form as Midjourney refines its work.

This hints at an externality beyond the worlds of art and design. “Almost all the money goes to paying for those machines,” Holz said. New users are given a small number of free image generations before they’re cut off and asked to pay; each request initiates a massive computational task, which means using a lot of electricity.

High compute costs [emphasis mine] — which are largely energy costs — are why other services have been cautious about adding new users. …

Another Midjourney user, Gila von Meissner, is a graphic designer and children’s-book author-illustrator from “the boondocks in north Germany.” Her agent is currently shopping around a book that combines generated images with her own art and characters. Like Pluckebaum [Brian Pluckebaum who works in automotive-semiconductor marketing and designs board games], she brought up the balance of power with publishers. “Picture books pay peanuts,” she said. “Most illustrators struggle financially.” Why not make the work easier and faster? “It’s my character, my edits on the AI backgrounds, my voice, and my story.” A process that took months now takes a week, she said. “Does that make it less original?”

User MoeHong, a graphic designer and typographer for the state of California, has been using Midjourney to make what he called generic illustrations (“backgrounds, people at work, kids at school, etc.”) for government websites, pamphlets, and literature: “I get some of the benefits of using custom art — not that we have a budget for commissions! — without the paying-an-artist part.” He said he has mostly replaced stock art, but he’s not entirely comfortable with the situation. “I have a number of friends who are commercial illustrators, and I’ve been very careful not to show them what I’ve made,” he said. He’s convinced that tools like this could eventually put people in his trade out of work. “But I’m already in my 50s,” he said, “and I hope I’ll be gone by the time that happens.”

Fan club

The last article I’m featuring here is a September 15, 2021 piece by Agnieszka Cichocka for DailyArt, which provides good, brief descriptions of algorithms, generative creative networks, machine learning, artificial neural networks, and more. She is an enthusiast (Note: Links have been removed),

I keep wondering if Leonardo da Vinci, who, in my opinion, was the most forward thinking artist of all time, would have ever imagined that art would one day be created by AI. He worked on numerous ideas and was constantly experimenting, and, although some were failures, he persistently tried new products, helping to move our world forward. Without such people, progress would not be possible. 

Machine Learning

As humans, we learn by acquiring knowledge through observations, senses, experiences, etc. This is similar to computers. Machine learning is a process in which a computer system learns how to perform a task better in two ways—either through exposure to environments that provide punishments and rewards (reinforcement learning) or by training with specific data sets (the system learns automatically and improves from previous experiences). Both methods help the systems improve their accuracy. Machines then use patterns and attempt to make an accurate analysis of things they have not seen before. To give an example, let’s say we feed the computer with thousands of photos of a dog. Consequently, it can learn what a dog looks like based on those. Later, even when faced with a picture it has never seen before, it can tell that the photo shows a dog.

If you want to see some creative machine learning experiments in art, check out ML x ART. This is a website with hundreds of artworks created using AI tools.
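To make the dog-photo example in the quoted passage a little more concrete, here's a minimal sketch of the 'training with specific data sets' route using the scikit-learn library. The tiny feature vectors and labels are invented for illustration only; a real system would learn from thousands of actual photographs rather than three made-up numbers per 'photo',

```python
from sklearn.linear_model import LogisticRegression

# Invented toy features standing in for photos: [ear length, snout length, tail wag rate]
training_photos = [
    [0.9, 0.8, 0.9],   # dog
    [0.8, 0.7, 0.8],   # dog
    [0.3, 0.2, 0.1],   # not a dog
    [0.2, 0.3, 0.2],   # not a dog
]
labels = ["dog", "dog", "not dog", "not dog"]

# "Training with specific data sets": the model learns patterns from labelled examples
model = LogisticRegression()
model.fit(training_photos, labels)

# Later, faced with a picture it has never seen before, it makes a prediction
new_photo = [0.85, 0.75, 0.7]
print(model.predict([new_photo])[0])   # prints the model's best guess
```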

Some thoughts

As the saying goes, "a picture is worth a thousand words" and, now, it seems that pictures will be made from words, or so the example of Jason M. Allen feeding prompts to the AI system Midjourney suggests.

I suspect (as others have suggested) that, in the end, artists who use AI systems will be absorbed into the art world in much the same way as artists who use photography or video, or who are considered performance and/or conceptual artists, have been absorbed. There will be some displacement and discomfort as the questions I opened this posting with (Who is an artist? What is an artist? Can everyone be an artist?) are passionately discussed and considered. Underlying many of these questions is the issue of money.

The impact on people’s livelihoods is cheering or concerning depending on how the AI system is being used. Herrman’s September 19, 2022 article highlights two examples involving graphic designers: Gila von Meissner, the illustrator and designer, who uses an AI system to illustrate her children’s books with her own art in a faster, more cost-effective way; and MoeHong, a graphic designer for the state of California, who uses an AI system to make ‘customized generic art’ for which the state government doesn’t have to pay.

So far, the focus has been on Midjourney and other AI agents that have been created by developers for use by visual artists and writers. What happens when the visual artist or the writer is the developer? A September 12, 2022 article by Brandon Scott Roye for Cool Hunting approaches the question (Note: Links have been removed),

Mario Klingemann and Sasha Stiles on Semi-Autonomous AI Artists

An artist and engineer at the forefront of generating AI artwork, Mario Klingemann and first-generation Kalmyk-American poet, artist and researcher Sasha Stiles both approach AI from a more human, personal angle. Creators of semi-autonomous systems, both Klingemann and Stiles are the minds behind Botto and Technelegy, respectively. They are both artists in their own right, but their creations are too. Within web3, the identity of the “artist” who creates with visuals and the “writer” who creates with words is enjoying a foundational shift and expansion. Many have fashioned themselves a new title as “engineer.”

Based on their primary identities as an artist and poet, Klingemann and Stiles face the conundrum of becoming engineers who design the tools, rather than artists responsible for the final piece. They now have the ability to remove themselves from influencing inputs and outputs.

If you have time, I suggest reading Roye’s September 12, 2022 article as it provides some very interesting ideas although I don’t necessarily agree with them, e.g., “They now have the ability to remove themselves from influencing inputs and outputs.” Anyone who’s following the ethics discussion around AI knows that biases are built into the algorithms whether we like it or not. As for artists and writers calling themselves ‘engineers’, they may get a little resistance from the engineering community.

As users of open source software, Klingemann and Stiles should not have to worry too much about intellectual property. However, it seems copyright for the actual works and patents for the software could raise some interesting issues especially since money is involved.

In a March 10, 2022 article by Shraddha Nair for Stir World, Klingemann claims to have made over $1M from auctions of Botto’s artworks. It’s not clear to me where Botto obtains its library of images for future use (which may signal a potential problem); Stiles’ Technelegy creates poems from prompts using its library of her poems. (For the curious, I have an August 30, 2022 post “Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms?” which explores some of the issues around patents.)

Who gets the patent and/or the copyright? Assuming you and I are employing machine learning to train our AI agents separately, could there be an argument that, if my version of the AI is different from yours and proves more popular with other content creators/artists, I should own or share the patent to the software and the rights to whatever the software produces?

Getting back to Herrman’s comment about high compute costs and energy, we seem to have an insatiable appetite for energy, and that carries a high cost both financially and environmentally.
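Herrman’s “giant swirling whirlpool” of graphics cards hints at why: each request kicks off an iterative refinement process in which the system starts from noise and repeatedly nudges the image toward something that matches the prompt, with every pass running a large model on expensive hardware. Here’s a toy sketch of that general diffusion-style idea; it is only an illustration of why the work is step-by-step and compute-hungry, not Midjourney’s actual pipeline, and the step count and ‘guidance’ are invented,

```python
import numpy as np

def refine_step(image, step, total_steps, rng):
    """Toy stand-in for one refinement pass. In a real diffusion-style model
    this is where a large neural network runs -- the part that eats GPU time
    and electricity."""
    progress = step / total_steps                    # later passes change less
    guidance = rng.normal(0.5, 0.1, image.shape)     # pretend 'prompt guidance'
    mix = 0.1 * (1.0 - progress)
    return (1.0 - mix) * image + mix * guidance

def generate(width=64, height=64, steps=50, seed=0):
    """Start from pure noise and gradually let the image 'take shape'."""
    rng = np.random.default_rng(seed)
    image = rng.random((height, width))              # initial noise
    for step in range(steps):
        image = refine_step(image, step, steps, rng)
    return image

picture = generate()
print(picture.shape, round(float(picture.mean()), 3))
```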

Botto exhibition

Here’s more about Klingemann’s artist exhibition by Botto (from an October 6, 2022 announcement received via email),

Mario Klingemann is a pioneering figurehead in the field of AI art, working deep in the field of Machine Learning. Governed by a community of 5,000 people, Klingemann developed Botto around an idea of creating an autonomous entity that is able to be creative and co-creative. Inspired by Goethe’s artificial man in Faust, Botto is a genderless AI entity that is guided by an international community and art historical trends. Botto creates 350 art pieces per week that are presented to its community. Members of the community give feedback on these art fragments by voting, expressing their individual preferences on what is aesthetically pleasing to them. Then collectively the votes are used as feedback for Botto’s generative algorithm, dictating what direction Botto should take in its next series of art pieces.

The creative capacity of its algorithm is far beyond the capacities of an individual to combine and find relationships within all the information available to the AI. Botto faces similar issues as a human artist, and it is programmed to self-reflect and ask, “I’ve created this type of work before. What can I show them that’s different this week?”

Once a week, Botto auctions the art fragment with the most votes on SuperRare. All proceeds from the auction go back to the community. The AI artist auctioned its first three pieces, Asymmetrical Liberation, Scene Precede, and Trickery Contagion, for more than $900,000, the most successful AI artist premiere. Today, Botto has produced upwards of 22 artworks and current sales have generated over $2 million in total [emphasis mine].
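Stripped of the art-world framing, the process described in the announcement is a feedback loop: generate a batch of fragments, collect community votes, and use the tallies to steer the next batch. Here’s a minimal sketch of that loop; the scoring and ‘steering’ below are invented placeholders, not Botto’s actual algorithm,

```python
import random

def generate_fragments(style_weights, n=350):
    """Stand-in for Botto's generative model: each 'fragment' is just a
    style label drawn according to the current steering weights."""
    styles = list(style_weights)
    weights = [style_weights[s] for s in styles]
    return [random.choices(styles, weights=weights)[0] for _ in range(n)]

def collect_votes(fragments, community_taste):
    """Stand-in for community voting: tally weighted votes per style."""
    tallies = {}
    for frag in fragments:
        tallies[frag] = tallies.get(frag, 0) + community_taste.get(frag, 1)
    return tallies

def update_weights(style_weights, tallies, learning_rate=0.1):
    """Votes become feedback that nudges what gets generated next week."""
    total = sum(tallies.values()) or 1
    return {s: (1 - learning_rate) * w + learning_rate * (tallies.get(s, 0) / total)
            for s, w in style_weights.items()}

weights = {"abstract": 1.0, "figurative": 1.0, "glitch": 1.0}
for week in range(3):
    fragments = generate_fragments(weights)
    votes = collect_votes(fragments, community_taste={"glitch": 3})
    weights = update_weights(weights, votes)
    print(f"week {week + 1}: {weights}")
```

In Botto’s case the ‘generate’ step is a sophisticated image model and the votes come from roughly 5,000 community members, but the shape of the loop is the same.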

Botto went from $1M in sales as of March 2022 to over $2M as of October 2022. It seems Botto is a very financially successful artist.

Botto: A Whole Year of Co-Creation

This exhibition (October 26 – 30, 2022) is being held in London, England at this location:

The Department Store, Brixton, 248 Ferndale Road, London SW9 8FR, United Kingdom

Enjoy!

Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms?

A couple of Australian academics have written a comment for the journal Nature, which bears the intriguing subtitle: “The patent system assumes that inventors are human. Inventions devised by machines require their own intellectual property law and an international treaty.” (For the curious, I’ve linked to a few of my previous posts touching on intellectual property [IP], specifically the patent’s fraternal twin, copyright at the end of this piece.)

Before linking to the comment, here’s the May 27, 2022 University of New South Wales (UNSW) press release (also on EurekAlert but published May 30, 2022), which provides an overview of their thinking on the subject, Note: Links have been removed,

It’s not surprising these days to see new inventions that either incorporate or have benefitted from artificial intelligence (AI) in some way, but what about inventions dreamt up by AI – do we award a patent to a machine?

This is the quandary facing lawmakers around the world with a live test case in the works that its supporters say is the first true example of an AI system named as the sole inventor.

In commentary published in the journal Nature, two leading academics from UNSW Sydney examine the implications of patents being awarded to an AI entity.

Intellectual Property (IP) law specialist Associate Professor Alexandra George and AI expert, Laureate Fellow and Scientia Professor Toby Walsh argue that patent law as it stands is inadequate to deal with such cases and requires legislators to amend laws around IP and patents – laws that have been operating under the same assumptions for hundreds of years.

The case in question revolves around a machine called DABUS (Device for the Autonomous Bootstrapping of Unified Sentience) created by Dr Stephen Thaler, who is president and chief executive of US-based AI firm Imagination Engines. Dr Thaler has named DABUS as the inventor of two products – a food container with a fractal surface that helps with insulation and stacking, and a flashing light for attracting attention in emergencies.

For a short time in Australia, DABUS looked like it might be recognised as the inventor because, in late July 2021, a trial judge accepted Dr Thaler’s appeal against IP Australia’s rejection of the patent application five months earlier. But after the Commissioner of Patents appealed the decision to the Full Court of the Federal Court of Australia, the five-judge panel upheld the appeal, agreeing with the Commissioner that an AI system couldn’t be named the inventor.

A/Prof. George says the attempt to have DABUS awarded a patent for the two inventions instantly creates challenges for existing laws, which have only ever considered humans or entities comprised of humans as inventors and patent-holders.

“Even if we do accept that an AI system is the true inventor, the first big problem is ownership. How do you work out who the owner is? An owner needs to be a legal person, and an AI is not recognised as a legal person,” she says.

Ownership is crucial to IP law. Without it there would be little incentive for others to invest in the new inventions to make them a reality.

“Another problem with ownership when it comes to AI-conceived inventions, is even if you could transfer ownership from the AI inventor to a person: is it the original software writer of the AI? Is it a person who has bought the AI and trained it for their own purposes? Or is it the people whose copyrighted material has been fed into the AI to give it all that information?” asks A/Prof. George.

For obvious reasons

Prof. Walsh says what makes AI systems so different to humans is their capacity to learn and store so much more information than an expert ever could. One of the requirements of inventions and patents is that the product or idea is novel, not obvious and is useful.

“There are certain assumptions built into the law that an invention should not be obvious to a knowledgeable person in the field,” Prof. Walsh says.

“Well, what might be obvious to an AI won’t be obvious to a human because AI might have ingested all the human knowledge on this topic, way more than a human could, so the nature of what is obvious changes.”

Prof. Walsh says this isn’t the first time that AI has been instrumental in coming up with new inventions. In the area of drug development, a new antibiotic was created in 2019 – Halicin – that used deep learning to find a chemical compound that was effective against drug-resistant strains of bacteria.

“Halicin was originally meant to treat diabetes, but its effectiveness as an antibiotic was only discovered by AI that was directed to examine a vast catalogue of drugs that could be repurposed as antibiotics. So there’s a mixture of human and machine coming into this discovery.”

Prof. Walsh says in the case of DABUS, it’s not entirely clear whether the system is truly responsible for the inventions.

“There’s lots of involvement of Dr Thaler in these inventions, first in setting up the problem, then guiding the search for the solution to the problem, and then interpreting the result,” Prof. Walsh says.

“But it’s certainly the case that without the system, you wouldn’t have come up with the inventions.”

Change the laws

Either way, both authors argue that governing bodies around the world will need to modernise the legal structures that determine whether or not AI systems can be awarded IP protection. They recommend the introduction of a new ‘sui generis’ form of IP law – which they’ve dubbed ‘AI-IP’ – that would be specifically tailored to the circumstances of AI-generated inventiveness. This, they argue, would be more effective than trying to retrofit and shoehorn AI-inventiveness into existing patent laws.

Looking forward, after examining the legal questions around AI and patent law, the authors are currently working on answering the technical question of how AI is going to be inventing in the future.

Dr Thaler has sought ‘special leave to appeal’ the case concerning DABUS to the High Court of Australia. It remains to be seen whether the High Court will agree to hear it. Meanwhile, the case continues to be fought in multiple other jurisdictions around the world.

Here’s a link to and a citation for the paper,

Artificial intelligence is breaking patent law by Alexandra George & Toby Walsh. Nature (Comment) Vol 605, pp. 616-618, 26 May 2022. DOI: 10.1038/d41586-022-01391-x

This paper appears to be open access.

The Journey

DABUS has gotten a patent in one jurisdiction, from an August 8, 2021 article on brandedequity.com,

The patent application listing DABUS as the inventor was filed in patent offices around the world, including the US, Europe, Australia, and South Africa. But only South Africa granted the patent (Australia followed suit a few days later after a court judgment gave the go-ahead [and rejected it several months later]).

Natural person?

This September 27, 2021 article by Miguel Bibe for Inventa covers some of the same ground, adding some discussion of the ‘natural person’ problem,

The patent is for “a food container based on fractal geometry”, and was accepted by the CIPC [Companies and Intellectual Property Commission] on June 24, 2021. The notice of issuance was published in the July 2021 “Patent Journal”.  

South Africa does not have a substantive patent examination system and, instead, requires applicants to merely complete a filing for their inventions. This means that South Africa patent laws do not provide a definition for “inventor” and the office only proceeds with a formal examination in order to confirm if the paperwork was filled correctly.

… according to a press release issued by the University of Surrey: “While patent law in many jurisdictions is very specific in how it defines an inventor, the DABUS team is arguing that the status quo is not fit for purpose in the Fourth Industrial Revolution.”

On the other hand, this may not be considered as a victory for the DABUS team since several doubts and questions remain as to who should be considered the inventor of the patent. Current IP laws in many jurisdictions follow the traditional term of “inventor” as being a “natural person”, and there is no legal precedent in the world for inventions created by a machine.

August 2022 update

Mike Masnick in an August 15, 2022 posting on Techdirt provides the latest information on Stephen Thaler’s efforts to have patents and copyrights awarded to his AI entity, DABUS,

Stephen Thaler is a man on a mission. It’s not a very good mission, but it’s a mission. He created something called DABUS (Device for the Autonomous Bootstrapping of Unified Sentience) and claims that it’s creating things, for which he has tried to file for patents and copyrights around the globe, with his mission being to have DABUS named as the inventor or author. This is dumb for many reasons. The purpose of copyright and patents are to incentivize the creation of these things, by providing to the inventor or author a limited time monopoly, allowing them to, in theory, use that monopoly to make some money, thereby making the entire inventing/authoring process worthwhile. An AI doesn’t need such an incentive. And this is why patents and copyright only are given to persons and not animals or AI.

… Thaler’s somewhat quixotic quest continues to fail. The EU Patent Office rejected his application. The Australian patent office similarly rejected his request. In that case, a court sided with Thaler after he sued the Australian patent office, and said that his AI could be named as an inventor, but thankfully an appeals court set aside that ruling a few months ago. In the US, Thaler/DABUS keeps on losing as well. Last fall, he lost in court as he tried to overturn the USPTO ruling, and then earlier this year, the US Copyright Office also rejected his copyright attempt (something it has done a few times before). In June, he sued the Copyright Office over this, which seems like a long shot.

And now, he’s also lost his appeal of the ruling in the patent case. CAFC, the Court of Appeals for the Federal Circuit — the appeals court that handles all patent appeals — has rejected Thaler’s request just like basically every other patent and copyright office, and nearly all courts.

If you have the time, the August 15, 2022 posting is an interesting read.

Consciousness and ethical AI

Just to make things more fraught, an engineer at Google has claimed that one of their AI chatbots has consciousness. From a June 16, 2022 article (in Canada’s National Post [previewed on epaper]) by Patrick McGee,

Google has ignited a social media firestorm on the nature of consciousness after placing an engineer on paid leave after he went public with his belief that the tech group’s chatbot has become “sentient.”

Blake Lemoine, a senior software engineer in Google’s Responsible AI unit, did not receive much attention when he wrote a Medium post saying he “may be fired soon for doing AI ethics work.”

But a Saturday [June 11, 2022] profile in the Washington Post characterized Lemoine as “the Google engineer who thinks the company’s AI has come to life.”

This is not the first time that Google has run into a problem with ethics and AI. Famously, Timnit Gebru, who co-led (with Margaret Mitchell) Google’s ethics and AI unit, departed in 2020. Gebru said (and maintains to this day) that she was fired; Google said she had resigned and never did make a definitive final statement, although after an investigation Gebru did receive an apology. You can read more about Gebru and the issues she brought to light in her Wikipedia entry. Coincidentally (or not), Margaret Mitchell was terminated/fired from Google in February 2021 after criticizing the company over Gebru’s ‘firing’. See a February 19, 2021 article by Megan Rose Dickey for TechCrunch for details about Margaret Mitchell’s termination, which the company has admitted was a firing.

Getting back to intellectual property and AI.

What about copyright?

There is no mention of copyright in the earliest material I have here about the ‘creative’ arts and artificial intelligence, “Writing and AI or is a robot writing this blog?” posted July 16, 2014. More recently, there’s “Beer and wine reviews, the American Chemical Society’s (ACS) AI editors, and the Turing Test” posted May 20, 2022. The type of writing featured is not literary or typically considered creative writing.

On the more creative front, there’s “True love with AI (artificial intelligence): The Nature of Things explores emotional and creative AI (long read)” posted on December 3, 2021. The literary/creative portion of the post can be found under the ‘AI and creativity’ subhead approximately 30% of the way down and where I mention Douglas Coupland. Again, there’s no mention of copyright.

It’s with the visual arts that copyright gets mentioned. The first one I can find here is “Robot artists—should they get copyright protection” posted on July 10, 2017.

Fun fact: Andres Guadamuz who was mentioned in my posting took to his own blog where he gave my blog a shout out while implying that I wasn’t thoughtful. The gist of his August 8, 2017 posting was that he was misunderstood by many people, which led to the title for his post, “Should academics try to engage the public?” Thankfully, he soldiers on trying to educate us with his TechnoLama blog.

Lastly, there’s this August 16, 2019 posting “AI (artificial intelligence) artist got a show at a New York City art gallery” where you can scroll down to the ‘What about intellectual property?’ subhead about 80% of the way.

You look like a thing …

I am recommending a book for anyone who’d like to learn a little more about how artificial intelligence (AI) works, “You look like a thing and I love you; How Artificial Intelligence Works and Why It’s Making the World a Weirder Place” by Janelle Shane (2019).

It does not require an understanding of programming/coding/algorithms/etc.; Shane makes the subject as accessible as possible and gives you insight into why the term ‘artificial stupidity’ is more applicable than you might think. You can find Shane’s website here and you can find her 10 minute TED talk here.

Windows and roofs ‘self-adapt’ to heating and cooling conditions

I have two items about thermochromic coatings. It’s a little confusing since the American Association for the Advancement of Science (AAAS), which publishes the journal featuring both papers, has issued a news release that seemingly refers to both papers as a single piece of research.

On to the press/news releases from the research institutions, to be followed by the AAAS news release.

Nanyang Technological University (NTU) does windows

A December 16, 2021 news item on Nanowerk announced work on energy-saving glass,

An international research team led by scientists from Nanyang Technological University, Singapore (NTU Singapore) has developed a material that, when coated on a glass window panel, can effectively self-adapt to heat or cool rooms across different climate zones in the world, helping to cut energy usage.

Developed by NTU researchers and reported in the journal Science (“Scalable thermochromic smart windows with passive radiative cooling regulation”), the first-of-its-kind glass automatically responds to changing temperatures by switching between heating and cooling.

The self-adaptive glass is developed using layers of vanadium dioxide nanoparticles composite, Poly(methyl methacrylate) (PMMA), and low-emissivity coating to form a unique structure which could modulate heating and cooling simultaneously.

A December 17, 2021 NTU press release (PDF), also on EurekAlert but published December 16, 2021, which originated the news item, delves further into the research (Note: A link has been removed),

The newly developed glass, which has no electrical components, works by exploiting the spectrums of light responsible for heating and cooling.

During summer, the glass suppresses solar heating (near infrared light), while boosting radiative cooling (long-wave infrared) – a natural phenomenon where heat emits through surfaces towards the cold universe – to cool the room. In the winter, it does the opposite to warm up the room.

In lab tests using an infrared camera to visualise results, the glass allowed a controlled amount of heat to emit in various conditions (room temperature – above 70°C), proving its ability to react dynamically to changing weather conditions.

New glass regulates both heating and cooling

Windows are one of the key components in a building’s design, but they are also the least energy-efficient and most complicated part. In the United States alone, window-associated energy consumption (heating and cooling) in buildings accounts for approximately four per cent of their total primary energy usage each year according to an estimation based on data available from the Department of Energy in US.[1]

While scientists elsewhere have developed sustainable innovations to ease this energy demand – such as using low emissivity coatings to prevent heat transfer and electrochromic glass that regulate solar transmission from entering the room by becoming tinted – none of the solutions have been able to modulate both heating and cooling at the same time, until now.

The principal investigator of the study, Dr Long Yi of the NTU School of Materials Science and Engineering (MSE) said, “Most energy-saving windows today tackle the part of solar heat gain caused by visible and near infrared sunlight. However, researchers often overlook the radiative cooling in the long wavelength infrared. While innovations focusing on radiative cooling have been used on walls and roofs, this function becomes undesirable during winter. Our team has demonstrated for the first time a glass that can respond favourably to both wavelengths, meaning that it can continuously self-tune to react to a changing temperature across all seasons.”

As a result of these features, the NTU research team believes their innovation offers a convenient way to conserve energy in buildings since it does not rely on any moving components, electrical mechanisms, or blocking views, to function.

To improve the performance of windows, the simultaneous modulation of both solar transmission and radiative cooling are crucial, said co-authors Professor Gang Tan from The University of Wyoming, USA, and Professor Ronggui Yang from the Huazhong University of Science and Technology, Wuhan, China, who led the building energy saving simulation.

“This innovation fills the missing gap between traditional smart windows and radiative cooling by paving a new research direction to minimise energy consumption,” said Prof Gang Tan.

The study is an example of groundbreaking research that supports the NTU 2025 strategic plan, which seeks to address humanity’s grand challenges on sustainability, and accelerate the translation of research discoveries into innovations that mitigate human impact on the environment.

Innovation useful for a wide range of climate types

As a proof of concept, the scientists tested the energy-saving performance of their invention using simulations of climate data covering all populated parts of the globe (seven climate zones).

The team found the glass they developed showed energy savings in both warm and cool seasons, with an overall energy saving performance of up to 9.5%, or ~330,000 kWh per year (estimated energy required to power 60 households in Singapore for a year) less than commercially available low emissivity glass in a simulated medium sized office building.

First author of the study Wang Shancheng, who is Research Fellow and former PhD student of Dr Long Yi, said, “The results prove the viability of applying our glass in all types of climates as it is able to help cut energy use regardless of hot and cold seasonal temperature fluctuations. This sets our invention apart from current energy-saving windows which tend to find limited use in regions with less seasonal variations.”

Moreover, the heating and cooling performance of their glass can be customised to suit the needs of the market and region for which it is intended.

“We can do so by simply adjusting the structure and composition of special nanocomposite coating layered onto the glass panel, allowing our innovation to be potentially used across a wide range of heat regulating applications, and not limited to windows,” Dr Long Yi said.

Providing an independent view, Professor Liangbing Hu, Herbert Rabin Distinguished Professor, Director of the Center for Materials Innovation at the University of Maryland, USA, said, “Long and co-workers made the original development of smart windows that can regulate the near-infrared sunlight and the long-wave infrared heat. The use of this smart window could be highly important for building energy-saving and decarbonization.”  

A Singapore patent has been filed for the innovation. As the next steps, the research team is aiming to achieve even higher energy-saving performance by working on the design of their nanocomposite coating.

The international research team also includes scientists from Nanjing Tech University, China. The study is supported by the Singapore-HUJ Alliance for Research and Enterprise (SHARE), under the Campus for Research Excellence and Technological Enterprise (CREATE) programme, Ministry of Education Research Fund Tier 1, and the Sino-Singapore International Joint Research Institute.
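Before the citation, here’s a toy illustration of the ‘self-tuning’ behaviour described above: below a switching temperature the window admits solar heat and emits little long-wave infrared, and above it the behaviour flips. The threshold and optical values are invented round numbers for illustration, not the measured properties of the NTU glass,

```python
def window_state(outdoor_temp_c, switch_temp_c=30.0):
    """Toy model of a thermochromic window: returns near-IR solar
    transmittance and long-wave IR emissivity for a given outdoor temperature.
    All values are illustrative placeholders, not the paper's measurements."""
    if outdoor_temp_c < switch_temp_c:
        # 'Winter' mode: admit solar heat, suppress radiative cooling
        return {"solar_nir_transmittance": 0.80, "lwir_emissivity": 0.20}
    # 'Summer' mode: block solar heat, boost radiative cooling
    return {"solar_nir_transmittance": 0.30, "lwir_emissivity": 0.90}

for temp in (5, 20, 35):
    print(temp, "°C ->", window_state(temp))
```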

Here’s a link to and a citation for the paper,

Scalable thermochromic smart windows with passive radiative cooling regulation by Shancheng Wang, Tengyao Jiang, Yun Meng, Ronggui Yang, Gang Tan, and Yi Long. Science • 16 Dec 2021 • Vol 374, Issue 6574 • pp. 1501-1504 • DOI: 10.1126/science.abg0291

This paper is behind a paywall.

Lawrence Berkeley National Laboratory (Berkeley Lab; LBNL) does roofs

A December 16, 2021 Lawrence Berkeley National Laboratory news release (also on EurekAlert) announces an energy-saving coating for roofs (Note: Links have been removed),

Scientists have developed an all-season smart-roof coating that keeps homes warm during the winter and cool during the summer without consuming natural gas or electricity. Research findings reported in the journal Science point to a groundbreaking technology that outperforms commercial cool-roof systems in energy savings.

“Our all-season roof coating automatically switches from keeping you cool to warm, depending on outdoor air temperature. This is energy-free, emission-free air conditioning and heating, all in one device,” said Junqiao Wu, a faculty scientist in Berkeley Lab’s Materials Sciences Division and a UC Berkeley professor of materials science and engineering who led the study.

Today’s cool roof systems, such as reflective coatings, membranes, shingles, or tiles, have light-colored or darker “cool-colored” surfaces that cool homes by reflecting sunlight. These systems also emit some of the absorbed solar heat as thermal-infrared radiation; in this natural process known as radiative cooling, thermal-infrared light is radiated away from the surface.

The problem with many cool-roof systems currently on the market is that they continue to radiate heat in the winter, which drives up heating costs, Wu explained.

“Our new material – called a temperature-adaptive radiative coating or TARC – can enable energy savings by automatically turning off the radiative cooling in the winter, overcoming the problem of overcooling,” he said.

A roof for all seasons

Metals are typically good conductors of electricity and heat. In 2017, Wu and his research team discovered that electrons in vanadium dioxide behave like a metal to electricity but an insulator to heat – in other words, they conduct electricity well without conducting much heat. “This behavior contrasts with most other metals where electrons conduct heat and electricity proportionally,” Wu explained.

Vanadium dioxide below about 67 degrees Celsius (153 degrees Fahrenheit) is also transparent to (and hence not absorptive of) thermal-infrared light. But once vanadium dioxide reaches 67 degrees Celsius, it switches to a metal state, becoming absorptive of thermal-infrared light. This ability to switch from one phase to another – in this case, from an insulator to a metal – is characteristic of what’s known as a phase-change material.

To see how vanadium dioxide would perform in a roof system, Wu and his team engineered a 2-centimeter-by-2-centimeter TARC thin-film device.

TARC “looks like Scotch tape, and can be affixed to a solid surface like a rooftop,” Wu said.

In a key experiment, co-lead author Kechao Tang set up a rooftop experiment at Wu’s East Bay home last summer to demonstrate the technology’s viability in a real-world environment.

A wireless measurement device set up on Wu’s balcony continuously recorded responses to changes in direct sunlight and outdoor temperature from a TARC sample, a commercial dark roof sample, and a commercial white roof sample over multiple days.

How TARC outperforms in energy savings

The researchers then used data from the experiment to simulate how TARC would perform year-round in cities representing 15 different climate zones across the continental U.S.

Wu enlisted Ronnen Levinson, a co-author on the study who is a staff scientist and leader of the Heat Island Group in Berkeley Lab’s Energy Technologies Area, to help them refine their model of roof surface temperature. Levinson developed a method to estimate TARC energy savings from a set of more than 100,000 building energy simulations that the Heat Island Group previously performed to evaluate the benefits of cool roofs and cool walls across the United States.

Finnegan Reichertz, a 12th grade student at the East Bay Innovation Academy in Oakland who worked remotely as a summer intern for Wu last year, helped to simulate how TARC and the other roof materials would perform at specific times and on specific days throughout the year for each of the 15 cities or climate zones the researchers studied for the paper.

The researchers found that TARC outperforms existing roof coatings for energy saving in 12 of the 15 climate zones, particularly in regions with wide temperature variations between day and night, such as the San Francisco Bay Area, or between winter and summer, such as New York City.

“With TARC installed, the average household in the U.S. could save up to 10% electricity,” said Tang, who was a postdoctoral researcher in the Wu lab at the time of the study. He is now an assistant professor at Peking University in Beijing, China.

Standard cool roofs have high solar reflectance and high thermal emittance (the ability to release heat by emitting thermal-infrared radiation) even in cool weather.

According to the researchers’ measurements, TARC reflects around 75% of sunlight year-round, but its thermal emittance is high (about 90%) when the ambient temperature is warm (above 25 degrees Celsius or 77 degrees Fahrenheit), promoting heat loss to the sky. In cooler weather, TARC’s thermal emittance automatically switches to low, helping to retain heat from solar absorption and indoor heating, Levinson said.

Findings from infrared spectroscopy experiments using advanced tools at Berkeley Lab’s Molecular Foundry validated the simulations.

“Simple physics predicted TARC would work, but we were surprised it would work so well,” said Wu. “We originally thought the switch from warming to cooling wouldn’t be so dramatic. Our simulations, outdoor experiments, and lab experiments proved otherwise – it’s really exciting.”

The researchers plan to develop TARC prototypes on a larger scale to further test its performance as a practical roof coating. Wu said that TARC may also have potential as a thermally protective coating to prolong battery life in smartphones and laptops, and shield satellites and cars from extremely high or low temperatures. It could also be used to make temperature-regulating fabric for tents, greenhouse coverings, and even hats and jackets.

Co-lead authors on the study were Kaichen Dong and Jiachen Li.

The Molecular Foundry is a nanoscience user facility at Berkeley Lab.

This work was primarily supported by the DOE Office of Science and a Bakar Fellowship.

The technology is available for licensing and collaboration. If interested, please contact Berkeley Lab’s Intellectual Property Office, ipo@lbl.gov.
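The figures quoted above (roughly 75% solar reflectance year-round, with thermal emittance around 0.9 in warm weather and switching low in cool weather) lend themselves to a rough back-of-envelope check of how the emittance switch throttles winter heat loss. The sketch below uses the Stefan-Boltzmann law with my own assumed numbers for the ‘low’ emittance and the effective sky temperature; it is not the researchers’ building-energy model,

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)

def radiated_power(surface_temp_k, emittance, sky_temp_k=260.0):
    """Net thermal-infrared power a 1 m^2 surface sends toward a cold sky,
    treating both crudely as grey/black bodies."""
    return emittance * SIGMA * (surface_temp_k**4 - sky_temp_k**4)

# Warm weather: surface around 35 C, emittance ~0.9 -> strong radiative cooling
warm_loss = radiated_power(308.0, emittance=0.9)
# Cool weather: surface around 5 C, emittance switched low (assumed 0.2) -> loss throttled
cool_loss = radiated_power(278.0, emittance=0.2)
print(round(warm_loss, 1), "W/m^2 shed in warm weather;",
      round(cool_loss, 1), "W/m^2 lost in cool weather")
```

Order-of-magnitude only, but it shows why turning the emittance down in winter matters for heating bills.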

Here’s a link to and a citation for the paper,

Temperature-adaptive radiative coating for all-season household thermal regulation by Kechao Tang, Kaichen Dong, Jiachen Li, Madeleine P. Gordon, Finnegan G. Reichertz, Hyungjin Kim, Yoonsoo Rho, Qingjun Wang, Chang-Yu Lin, Costas P. Grigoropoulos, Ali Javey, Jeffrey J. Urban, Jie Yao, Ronnen Levinson, Junqiao Wu. Science • 16 Dec 2021 • Vol 374, Issue 6574 • pp. 1504-1509 • DOI: 10.1126/science.abf7136

This paper is behind a paywall.

An interesting news release from the AAAS

While it’s a little confusing as it cites only the ‘window’ research from NTU, the body of this news release offers some additional information about the usefulness of thermochromic materials and seemingly refers to both papers, from a December 16, 2021 AAAS news release,

Temperature-adaptive passive radiative cooling for roofs and windows

When it’s cold out, window glass and roof coatings that use passive radiative cooling to keep buildings cool can be designed to passively turn off radiative cooling to avoid heat loss, two new studies show.  Their proof-of-concept analyses demonstrate that passive radiative cooling can be expanded to warm and cold climate applications and regions, potentially providing all-season energy savings worldwide. Buildings consume roughly 40% of global energy, a large proportion of which is used to keep them cool in warmer climates. However, most temperature regulation systems commonly employed are not very energy efficient and require external power or resources. In contrast, passive radiative cooling technologies, which use outer space as a near-limitless natural heat sink, have been extensively examined as a means of energy-efficient cooling for buildings. This technology uses materials designed to selectively emit narrow-band radiation through the infrared atmospheric window to disperse heat energy into the coldness of space. However, while this approach has proven effective in cooling buildings to below ambient temperatures, it is only helpful during the warmer months or in regions that are perpetually hot. Furthermore, the inability to “turn off” passive cooling in cooler climes or in regions with large seasonal temperature variations means that continuous cooling during colder periods would exacerbate the energy costs of heating. In two different studies, by Shancheng Wang and colleagues and Kechao Tang and colleagues, researchers approach passive radiative cooling from an all-season perspective and present a new, scalable temperature-adaptive radiative technology that passively turns off radiative cooling at lower temperatures. Wang et al. and Tang et al. achieve this using a tungsten-doped vanadium dioxide and show how it can be applied to create both window glass and a flexible roof coating, respectively. Model simulations of the self-adapting materials suggest they could provide year-round energy savings across most climate zones, especially those with substantial seasonal temperature variations. 

I wish them all good luck with getting these materials to market.

Combine a nonwoven nanotextile and unique compounds to treat skin infections

A September 30, 2021 news item on Nanowerk describes a new material for treating skin infections (Note: A link has been removed),

Researchers at the Institute of Organic Chemistry and Biochemistry of the CAS (IOCB Prague) and the Technical University of Liberec in collaboration with researchers from the Institute of Microbiology of the CAS, the Department of Burns Medicine of the Third Faculty of Medicine at Charles University (Czech Republic), and P. J. Šafárik University in Košice (Slovakia) have developed a novel antibacterial material combining nonwoven nanotextile and unique compounds with antibacterial properties (Scientific Reports, “Novel lipophosphonoxin-loaded polycaprolactone electrospun nanofiber dressing reduces Staphylococcus aureus induced wound infection in mice”).

A September 30, 2021 Institute of Organic Chemistry and Biochemistry of the Czech Academy of Sciences (IOCB Prague) press release (also on EurekAlert), which originated the news item, describes the work in more detail,

Because the number of bacterial strains resistant to common antibiotics is steadily increasing, there is a growing need for new substances with antibacterial properties. A very promising class of substances are the so-called lipophosphonoxins (LPPO) developed by the team of Dominik Rejman of IOCB Prague in collaboration with Libor Krásný of the Institute of Microbiology of the CAS.

“Lipophosphonoxins hold considerable promise as a new generation of antibiotics. They don’t have to penetrate the bacteria but instead act on the surface, where they disrupt the bacterial cell membrane. That makes them very efficient at destroying bacteria,” says Rejman.

“A big advantage of LPPO is the limited ability of bacteria to develop resistance to them. In an experiment lasting several weeks, we failed to find a bacteria resistant to these substances, while resistance to well-known antibiotics developed relatively easily,” explains Krásný.

The potential of LPPO is especially evident in situations requiring immediate targeted intervention, such as skin infections. Here, however, the substances must be combined with a suitable material that ensures their topical efficacy without the need to enter the circulatory system. This reduces the burden to the body and facilitates use.

One such suitable material is a polymer nanofiber developed by the team of David Lukáš of the Faculty of Science, Humanities and Education at the Technical University of Liberec. The researchers combined it with LPPO to prepare a new type of dressing material for bacteria-infected skin wounds. The material’s main benefit is that the antibacterial LPPO are released from it gradually and in relation to the presence and extent of infection.

“The research and development of the material NANO-LPPO is a continuation of the work carried out in a clinical trial on the NANOTARDIS medical device, which we recently successfully completed in collaboration with Regional Hospital Liberec, University Hospital Královské Vinohrady, and Bulovka University Hospital. With its morphological and physical-chemical properties, the device promotes the healing of clean acute wounds,” says Lukáš. “This collaboration with colleagues from IOCB Prague is really advancing the possibilities for use of functionalized nanofiber materials in the areas of chronic and infected wounds.”

“Enzymes decompose the nanomaterial into harmless molecules. The LPPO are an integral component of the material and are primarily released from it during this decomposition. Moreover, the process is greatly accelerated by the presence of bacteria, which produce lytic enzymes. This means that the more bacteria there are in the wound, the faster the material decomposes, which in turn releases more of the active substances into the affected site to promote healing and regeneration of soft tissues,” says Rejman in describing the action of the material.  

“Our experiments on mice confirmed the ability of NANO-LPPO to prevent infection in the wound and thus accelerate healing and regeneration. There was practically no spread of infection where we used the material. If clinical trials go well, this could be a breakthrough in the treatment of burns and other serious injuries where infection poses an acute threat and complication to treatment,” explains wound care specialist Peter Gál of the Department of Burns Medicine at Charles University’s Third Faculty of Medicine, the Faculty of Medicine at P. J. Šafárik University in Košice, and the East Slovak Institute for Cardiovascular Diseases.  

In terms of applications, NANO-LPPO is an interesting material for manufacturers of medicinal products and medical devices. Its commercialization is being coordinated through a collaborative effort between IOCB TECH, a subsidiary of IOCB Prague, and Charles University Innovations Prague, a subsidiary of Charles University, both of which were created for the purpose of transferring results of academic research to practice. The companies are currently seeking a suitable commercial partner.
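The release mechanism described above (more bacteria means faster decomposition of the nanofibres and therefore more LPPO delivered) can be pictured with a very simple toy model. The rate constants below are invented for illustration and have nothing to do with the team’s measurements,

```python
def simulate_release(bacterial_load, hours=24, base_rate=0.01, per_bacteria_rate=0.002):
    """Toy model: fraction of nanofibre decomposed (and drug released) over time.
    Decomposition speeds up with bacterial load because bacterial lytic enzymes
    accelerate breakdown of the material. Rates are invented."""
    remaining = 1.0
    released = 0.0
    rate = base_rate + per_bacteria_rate * bacterial_load
    for _ in range(hours):
        step = rate * remaining
        remaining -= step
        released += step
    return released

for load in (0, 10, 50):
    print(f"bacterial load {load}: {simulate_release(load):.0%} of LPPO released in 24 h")
```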

Here’s a link to and a citation for the paper,

Novel lipophosphonoxin-loaded polycaprolactone electrospun nanofiber dressing reduces Staphylococcus aureus induced wound infection in mice by Duy Dinh Do Pham, Věra Jenčová, Miriam Kaňuchová, Jan Bayram, Ivana Grossová, Hubert Šuca, Lukáš Urban, Kristýna Havlíčková, Vít Novotný, Petr Mikeš, Viktor Mojr, Nikifor Asatiani, Eva Kuželová Košťáková, Martina Maixnerová, Alena Vlková, Dragana Vítovská, Hana Šanderová, Alexandr Nemec, Libor Krásný, Robert Zajíček, David Lukáš, Dominik Rejman & Peter Gál. Scientific Reports volume 11, Article number: 17688 (2021) DOI: https://doi.org/10.1038/s41598-021-96980-7 Published: 03 September 2021

This paper is open access.

Soft, inflatable, and potentially low-cost neuroprosthetic hand?

An August 16, 2021 news item on ScienceDaily describes a new type of neuroprosthetic,

For the more than 5 million people in the world who have undergone an upper-limb amputation, prosthetics have come a long way. Beyond traditional mannequin-like appendages, there is a growing number of commercial neuroprosthetics — highly articulated bionic limbs, engineered to sense a user’s residual muscle signals and robotically mimic their intended motions.

But this high-tech dexterity comes at a price. Neuroprosthetics can cost tens of thousands of dollars and are built around metal skeletons, with electrical motors that can be heavy and rigid.

Now engineers at MIT [Massachusetts Institute of Technology] and Shanghai Jiao Tong University have designed a soft, lightweight, and potentially low-cost neuroprosthetic hand. Amputees who tested the artificial limb performed daily activities, such as zipping a suitcase, pouring a carton of juice, and petting a cat, just as well as — and in some cases better than — those with more rigid neuroprosthetics.

Here’s a video demonstration,

An August 16, 2021 MIT news release (also on EurekAlert), which originated the news item, provides more detail,

The researchers found the prosthetic, designed with a system for tactile feedback, restored some primitive sensation in a volunteer’s residual limb. The new design is also surprisingly durable, quickly recovering after being struck with a hammer or run over with a car.

The smart hand is soft and elastic, and weighs about half a pound. Its components total around $500 — a fraction of the weight and material cost associated with more rigid smart limbs.

“This is not a product yet, but the performance is already similar or superior to existing neuroprosthetics, which we’re excited about,” says Xuanhe Zhao, professor of mechanical engineering and of civil and environmental engineering at MIT. “There’s huge potential to make this soft prosthetic very low cost, for low-income families who have suffered from amputation.”

Zhao and his colleagues have published their work today [August 16, 2021] in Nature Biomedical Engineering. Co-authors include MIT postdoc Shaoting Lin, along with Guoying Gu, Xiangyang Zhu, and collaborators at Shanghai Jiao Tong University in China.

Big Hero hand

The team’s pliable new design bears an uncanny resemblance to a certain inflatable robot in the animated film “Big Hero 6.” Like the squishy android, the team’s artificial hand is made from soft, stretchy material — in this case, the commercial elastomer EcoFlex. The prosthetic comprises five balloon-like fingers, each embedded with segments of fiber, similar to articulated bones in actual fingers. The bendy digits are connected to a 3-D-printed “palm,” shaped like a human hand.

Rather than controlling each finger using mounted electrical motors, as most neuroprosthetics do, the researchers used a simple pneumatic system to precisely inflate fingers and bend them in specific positions. This system, including a small pump and valves, can be worn at the waist, significantly reducing the prosthetic’s weight.

Lin developed a computer model to relate a finger’s desired position to the corresponding pressure a pump would have to apply to achieve that position. Using this model, the team developed a controller that directs the pneumatic system to inflate the fingers, in positions that mimic five common grasps, including pinching two and three fingers together, making a balled-up fist, and cupping the palm.

The pneumatic system receives signals from EMG sensors — electromyography sensors that measure electrical signals generated by motor neurons to control muscles. The sensors are fitted at the prosthetic’s opening, where it attaches to a user’s limb. In this arrangement, the sensors can pick up signals from a residual limb, such as when an amputee imagines making a fist.

The team then used an existing algorithm that “decodes” muscle signals and relates them to common grasp types. They used this algorithm to program the controller for their pneumatic system. When an amputee imagines, for instance, holding a wine glass, the sensors pick up the residual muscle signals, which the controller then translates into corresponding pressures. The pump then applies those pressures to inflate each finger and produce the amputee’s intended grasp.

Going a step further in their design, the researchers looked to enable tactile feedback — a feature that is not incorporated in most commercial neuroprosthetics. To do this, they stitched to each fingertip a pressure sensor, which when touched or squeezed produces an electrical signal proportional to the sensed pressure. Each sensor is wired to a specific location on an amputee’s residual limb, so the user can “feel” when the prosthetic’s thumb is pressed, for example, versus the forefinger.

Good grip

To test the inflatable hand, the researchers enlisted two volunteers, each with upper-limb amputations. Once outfitted with the neuroprosthetic, the volunteers learned to use it by repeatedly contracting the muscles in their arm while imagining making five common grasps.

After completing this 15-minute training, the volunteers were asked to perform a number of standardized tests to demonstrate manual strength and dexterity. These tasks included stacking checkers, turning pages, writing with a pen, lifting heavy balls, and picking up fragile objects like strawberries and bread. They repeated the same tests using a more rigid, commercially available bionic hand and found that the inflatable prosthetic was as good, or even better, at most tasks, compared to its rigid counterpart.

One volunteer was also able to intuitively use the soft prosthetic in daily activities, for instance to eat food like crackers, cake, and apples, and to handle objects and tools, such as laptops, bottles, hammers, and pliers. This volunteer could also safely manipulate the squishy prosthetic, for instance to shake someone’s hand, touch a flower, and pet a cat.

In a particularly exciting exercise, the researchers blindfolded the volunteer and found he could discern which prosthetic finger they poked and brushed. He was also able to “feel” bottles of different sizes that were placed in the prosthetic hand, and lifted them in response. The team sees these experiments as a promising sign that amputees can regain a form of sensation and real-time control with the inflatable hand.

The team has filed a patent on the design, through MIT, and is working to improve its sensing and range of motion.

“We now have four grasp types. There can be more,” Zhao says. “This design can be improved, with better decoding technology, higher-density myoelectric arrays, and a more compact pump that could be worn on the wrist. We also want to customize the design for mass production, so we can translate soft robotic technology to benefit society.”
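The control chain described in the release (EMG sensors pick up residual muscle signals, an algorithm decodes them into one of a handful of grasp types, and a controller turns the chosen grasp into pneumatic pressures for the five fingers) can be sketched as a simple pipeline. Everything below, from the feature vector to the grasp-to-pressure table to the nearest-template decoder, is a hypothetical illustration rather than the MIT/Shanghai Jiao Tong implementation,

```python
# Hypothetical mapping from decoded grasp type to per-finger pressures (kPa):
# thumb, index, middle, ring, little. Numbers are invented for illustration.
GRASP_PRESSURES = {
    "fist":        (60, 60, 60, 60, 60),
    "pinch_two":   (55, 55, 5, 5, 5),
    "pinch_three": (55, 55, 55, 5, 5),
    "cup":         (30, 30, 30, 30, 30),
    "open":        (0, 0, 0, 0, 0),
}

def decode_grasp(emg_features):
    """Toy stand-in for the decoding algorithm: pick the grasp whose
    (invented) template is closest to the measured EMG feature vector."""
    templates = {
        "fist":        [0.9, 0.9, 0.9],
        "pinch_two":   [0.8, 0.2, 0.1],
        "pinch_three": [0.7, 0.5, 0.2],
        "cup":         [0.4, 0.4, 0.4],
        "open":        [0.1, 0.1, 0.1],
    }
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda grasp: distance(emg_features, templates[grasp]))

def command_fingers(emg_features):
    """EMG features in, per-finger pneumatic pressures out."""
    grasp = decode_grasp(emg_features)
    return grasp, GRASP_PRESSURES[grasp]

print(command_fingers([0.85, 0.25, 0.15]))   # expected: a two-finger pinch
```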

Here’s a link to and a citation for the paper,

A soft neuroprosthetic hand providing simultaneous myoelectric control and tactile feedback by Guoying Gu, Ningbin Zhang, Haipeng Xu, Shaoting Lin, Yang Yu, Guohong Chai, Lisen Ge, Houle Yang, Qiwen Shao, Xinjun Sheng, Xiangyang Zhu, Xuanhe Zhao. Nature Biomedical Engineering (2021) DOI: https://doi.org/10.1038/s41551-021-00767-0 Published: 16 August 2021

This paper is behind a paywall.

The coolest paint

It’s the ‘est’ of it all. The coolest, the whitest, the blackest … Scientists and artists are both pursuing the ‘est’. (More about the pursuit later in this posting.)

In this case, scientists have developed the coolest, whitest paint yet. From an April 16, 2021 news item on Nanowerk,

In an effort to curb global warming, Purdue University engineers have created the whitest paint yet. Coating buildings with this paint may one day cool them off enough to reduce the need for air conditioning, the researchers say.

In October [2020], the team created an ultra-white paint that pushed limits on how white paint can be. Now they’ve outdone that. The newer paint not only is whiter but also can keep surfaces cooler than the formulation that the researchers had previously demonstrated.

“If you were to use this paint to cover a roof area of about 1,000 square feet, we estimate that you could get a cooling power of 10 kilowatts. That’s more powerful than the central air conditioners used by most houses,” said Xiulin Ruan, a Purdue professor of mechanical engineering.

Caption: Xiulin Ruan, a Purdue University professor of mechanical engineering, holds up his lab’s sample of the whitest paint on record. Credit: Purdue University/Jared Pike

This is nicely done. Researcher Xiulin Ruan is standing close to a structure that could be said to resemble the sun while in shirtsleeves and sunglasses and holding up a sample of his whitest paint in April (not usually a warm month in Indiana).

An April 15, 2021 Purdue University news release (also on EurkeAlert), which originated the news item, provides more detail about the work and hints about its commercial applications both civilian and military,

The researchers believe that this white may be the closest equivalent of the blackest black, “Vantablack,” [emphasis mine; see comments later in this post] which absorbs up to 99.9% of visible light. The new whitest paint formulation reflects up to 98.1% of sunlight – compared with the 95.5% of sunlight reflected by the researchers’ previous ultra-white paint – and sends infrared heat away from a surface at the same time.

Typical commercial white paint gets warmer rather than cooler. Paints on the market that are designed to reject heat reflect only 80%-90% of sunlight and can’t make surfaces cooler than their surroundings.

The team’s research paper showing how the paint works publishes Thursday (April 15 [2021]) as the cover of the journal ACS Applied Materials & Interfaces.

What makes the whitest paint so white

Two features give the paint its extreme whiteness. One is the paint’s very high concentration of a chemical compound called barium sulfate [emphasis mine] which is also used to make photo paper and cosmetics white.

“We looked at various commercial products, basically anything that’s white,” said Xiangyu Li, a postdoctoral researcher at the Massachusetts Institute of Technology who worked on this project as a Purdue Ph.D. student in Ruan’s lab. “We found that using barium sulfate, you can theoretically make things really, really reflective, which means that they’re really, really white.”

The second feature is that the barium sulfate particles are all different sizes in the paint. How much each particle scatters light depends on its size, so a wider range of particle sizes allows the paint to scatter more of the light spectrum from the sun.

“A high concentration of particles that are also different sizes gives the paint the broadest spectral scattering, which contributes to the highest reflectance,” said Joseph Peoples, a Purdue Ph.D. student in mechanical engineering.

There is a little bit of room to make the paint whiter, but not much without compromising the paint.

“Although a higher particle concentration is better for making something white, you can’t increase the concentration too much. The higher the concentration, the easier it is for the paint to break or peel off,” Li said.
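To make the point about particle sizes a little more concrete: how strongly a small particle scatters light depends roughly on the ratio of its circumference to the wavelength (the Mie ‘size parameter’), so any single particle size favours some wavelengths over others, while a mix of sizes covers more of the solar spectrum. The little sketch below just computes that size parameter for a few hypothetical particle diameters; it is an illustration of the idea, not the researchers’ actual model or recipe,

import math

# Toy illustration of why a spread of particle sizes scatters a broad spectrum.
# The Mie "size parameter" x = pi * d / wavelength indicates roughly which
# wavelengths a particle of diameter d scatters most efficiently (scattering
# is strongest when x is of order 1).
# The diameters below are hypothetical examples, not the paint's actual recipe.

diameters_nm = [150, 300, 500]           # assumed particle diameters
wavelengths_nm = [400, 700, 1500, 2500]  # visible through near-infrared sunlight

for d in diameters_nm:
    row = []
    for wl in wavelengths_nm:
        x = math.pi * d / wl             # size parameter for this pairing
        row.append(f"{wl} nm: x={x:.2f}")
    print(f"d = {d} nm -> " + ", ".join(row))

# Smaller particles have x near 1 only in the visible; larger particles keep
# x near 1 further into the infrared. Mixing sizes covers more of the spectrum.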

How the whitest paint is also the coolest

The paint’s whiteness also means that the paint is the coolest on record. Using high-accuracy temperature reading equipment called thermocouples, the researchers demonstrated outdoors that the paint can keep surfaces 19 degrees Fahrenheit cooler than their ambient surroundings at night. It can also cool surfaces 8 degrees Fahrenheit below their surroundings under strong sunlight during noon hours.

The paint’s solar reflectance is so effective, it even worked in the middle of winter. During an outdoor test with an ambient temperature of 43 degrees Fahrenheit, the paint still managed to lower the sample temperature by 18 degrees Fahrenheit.

This white paint is the result of six years of research building on attempts going back to the 1970s to develop radiative cooling paint as a feasible alternative to traditional air conditioners.

Ruan’s lab had considered over 100 different materials, narrowed them down to 10 and tested about 50 different formulations for each material. Their previous whitest paint was a formulation made of calcium carbonate, an earth-abundant compound commonly found in rocks and seashells.

The researchers showed in their study that like commercial paint, their barium sulfate-based paint can potentially handle outdoor conditions. The technique that the researchers used to create the paint also is compatible with the commercial paint fabrication process.

Patent applications for this paint formulation have been filed through the Purdue Research Foundation Office of Technology Commercialization. This research was supported by the Cooling Technologies Research Center at Purdue University and the Air Force Office of Scientific Research [emphasis mine] through the Defense University Research Instrumentation Program (Grant No. FA9550-17-1-0368). The research was performed at Purdue’s FLEX Lab and Ray W. Herrick Laboratories and the Birck Nanotechnology Center of Purdue’s Discovery Park.
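Stepping back from the press release for a moment: those below-ambient temperatures come from a simple energy balance, where the surface reflects nearly all of the incoming sunlight while radiating its own heat to the cold sky, so it loses more energy than it gains from the sun and the surrounding air. Here is a heavily simplified sketch of that balance; the emissivity, sky temperature, convection coefficient and solar irradiance are round numbers I have assumed for illustration, not values from the Purdue paper,

# Heavily simplified steady-state energy balance for a radiative-cooling surface.
# All parameter values are assumed for illustration only.

SIGMA = 5.670e-8       # Stefan-Boltzmann constant, W / (m^2 K^4)
T_AMBIENT = 300.0      # assumed air temperature, K

def surface_temperature(reflectance, solar=1000.0, t_ambient=T_AMBIENT,
                        t_sky=270.0, emissivity=0.95, h_conv=5.0):
    """Find the steady-state surface temperature (K) by bisection.

    Balance: net thermal emission to the sky = absorbed sunlight + convective
    heat gained from the warmer ambient air.
    """
    def net_loss(t_s):
        radiated = emissivity * SIGMA * (t_s**4 - t_sky**4)  # emitted to the sky
        absorbed = (1.0 - reflectance) * solar               # sunlight absorbed
        convected_in = h_conv * (t_ambient - t_s)            # heat from warmer air
        return radiated - absorbed - convected_in

    lo, hi = 200.0, 400.0
    for _ in range(60):               # bisection on the monotone balance
        mid = 0.5 * (lo + hi)
        if net_loss(mid) > 0:         # still losing heat -> equilibrium is cooler
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Compare a typical "cool" commercial paint with the two Purdue reflectances.
for r in (0.90, 0.955, 0.981):
    t_s = surface_temperature(r)
    print(f"solar reflectance {r:.3f}: roughly {T_AMBIENT - t_s:.1f} K below ambient")

Even with these made-up parameters, the pattern matches the press release: the higher the solar reflectance, the further below ambient the surface can settle.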

Here’s a link to and a citation for the paper,

Ultrawhite BaSO4 Paints and Films for Remarkable Daytime Subambient Radiative Cooling by Xiangyu Li, Joseph Peoples, Peiyan Yao, and Xiulin Ruan. ACS Appl. Mater. Interfaces 2021, XXXX, XXX, XXX-XXX DOI: https://doi.org/10.1021/acsami.1c02368 Publication Date:April 15, 2021 © 2021 American Chemical Society

This paper is behind a paywall.

Vantablack and the ongoing ‘est’ of blackest

Vantablack’s 99.9% light absorption no longer qualifies it as the ‘blackest black’. A newer standard was set in 2019 by the US National Institute of Standards and Technology (NIST), whose ultra-black material absorbs 99.99% of incoming light, although that too seems to have been bested.

I have three postings covering the Vantablack and blackest black story,

The third posting (December 2019) provides a brief summary of the story along with what was then the latest from the US National Institute of Standards and Technology. There’s also a little bit about ‘The Redemption of Vanity,’ an art piece demonstrating the blackest black material from the Massachusetts Institute of Technology, which MIT states absorbs at least 99.995% of light.
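Those extra decimal places matter more than they might look; a quick bit of arithmetic on the percentages quoted above,

# How much visible light each "blackest black" still lets back out.
coatings = {
    "Vantablack (99.9% absorption)": 99.9,
    "NIST ultra-black (99.99%)": 99.99,
    "MIT blackest black (99.995%)": 99.995,
}

for name, absorbed_pct in coatings.items():
    reflected = 1.0 - absorbed_pct / 100.0
    print(f"{name}: about 1 part in {round(1.0 / reflected):,} reflected")
# 99.9% -> 1 part in 1,000; 99.99% -> 1 in 10,000; 99.995% -> 1 in 20,000.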

From a science perspective, the blackest black would be useful for space exploration.

I am surprised there doesn’t seem to have been an artistic rush to work with the whitest white, though that impression may be because the feuds get more attention than quiet work.

Dark side to the whitest white?

Andrew Parnell, research fellow in physics and astronomy at the University of Sheffield (UK), mentions a downside to obtaining the material needed to produce this cooling white paint in a June 10, 2021 essay on The Conversation (h/t Fast Company), Note: Links have been removed,

… this whiter-than-white paint has a darker side. The energy required to dig up raw barite ore to produce and process the barium sulphate that makes up nearly 60% of the paint means it has a huge carbon footprint. And using the paint widely would mean a dramatic increase in the mining of barium.

Parnell ends his essay with this (Note: Links have been removed),

Barium sulphate-based paint is just one way to improve the reflectivity of buildings. I’ve spent the last few years researching the colour white in the natural world, from white surfaces to white animals. Animal hairs, feathers and butterfly wings provide different examples of how nature regulates temperature within a structure. Mimicking these natural techniques could help to keep our cities cooler with less cost to the environment.

The wings of one intensely white beetle species called Lepidiota stigma appear a strikingly bright white thanks to nanostructures in their scales, which are very good at scattering incoming light. This natural light-scattering property can be used to design even better paints: for example, by using recycled plastic to create white paint containing similar nanostructures with a far lower carbon footprint. When it comes to taking inspiration from nature, the sky’s the limit.