Monthly Archives: August 2019

A solution to the problem of measuring nanoparticles

As you might expect from the US National Institute of Standards and Technology (NIST), this research concerns measurement techniques. From an August 15, 2019 news item on Nanowerk (Note: Links have been removed),

Tiny nanoparticles play a gargantuan role in modern life, even if most consumers are unaware of their presence. They provide essential ingredients in sunscreen lotions, prevent athlete’s foot fungus in socks, and fight microbes on bandages. They enhance the colors of popular candies and keep the powdered sugar on doughnuts powdery. They are even used in advanced drugs that target specific types of cells in cancer treatments.

When chemists analyze a sample, however, it is challenging to measure the sizes and quantities of these particles — which are often 100,000 times smaller than the thickness of a piece of paper. Technology offers many options for assessing nanoparticles, but experts have not reached a consensus on which technique is best.

In a new paper from the National Institute of Standards and Technology (NIST) and collaborating institutions, researchers have concluded that measuring the range of sizes in nanoparticles — instead of just the average particle size — is optimal for most applications.

An August 14, 2019 NIST news release (also received via email and on EurekAlert), which originated the news item, delves further into the research,

“It seems like a simple choice,” said NIST’s Elijah Petersen, the lead author of the paper, which was published today in Environmental Science: Nano. “But it can have a big impact on the outcome of your assessment.”

As with many measurement questions, precision is key. Exposure to a certain amount of some nanoparticles could have adverse effects. Pharmaceutical researchers often need exactitude to maximize a drug’s efficacy. And environmental scientists need to know, for example, how many nanoparticles of gold, silver or titanium could potentially cause a risk to organisms in soil or water.

Using more nanoparticles than needed in a product because of inconsistent measurements could also waste money for manufacturers.

Although they might sound ultramodern, nanoparticles are neither new nor based solely on high-tech manufacturing processes. A nanoparticle is really just a submicroscopic particle that measures less than 100 nanometers on at least one of its dimensions. It would be possible to place hundreds of thousands of them onto the head of a pin. They are exciting to researchers because many materials act differently at the nanometer scale than they do at larger scales, and nanoparticles can be made to do lots of useful things.

Nanoparticles have been in use since the days of ancient Mesopotamia [emphasis mine], when ceramic artists used extremely small bits of metal to decorate vases and other vessels. In fourth-century Rome, glass artisans ground metal into tiny particles to change the color of their wares under different lighting. These techniques were forgotten for a while but rediscovered in the 1600s by resourceful manufacturers for glassmaking [emphasis mine] again. Then, in the 1850s, scientist Michael Faraday extensively researched ways to use various kinds of wash mixes to change the performance of gold particles.

Modern nanoparticle research advanced quickly in the mid-20th century due to technological innovations in optics. Being able to see the individual particles and study their behavior expanded the possibilities for experimentation. The largest advances came, however, after experimental nanotechnology took off in the 1990s. Suddenly, the behavior of single particles of gold and many other substances could be closely examined and manipulated. Discoveries about the ways that small amounts of a substance would reflect light, absorb light, or change in behavior were numerous, leading to the incorporation of nanoparticles into many more products.

Debates have since followed about their measurement. When assessing the response of cells or organisms to nanoparticles, some researchers prefer measuring particle number concentrations (sometimes called PNCs by scientists). Many find PNCs challenging since extra formulas must be employed when determining the final measurement. Others prefer measuring mass or surface area concentrations.

PNCs are often used for characterizing metals in chemistry. The situation for nanoparticles is inherently more complex, however, than it is for dissolved organic or inorganic substances because unlike dissolved chemicals, nanoparticles can come in a wide variety of sizes and sometimes stick together when added to testing materials.

“If you have a dissolved chemical, it’s always going to have the same molecular formula, by definition,” Petersen says. “Nanoparticles don’t just have a certain number of atoms, however. Some will be 9 nanometers, some will be 11, some might be 18, and some might be 3.”

The problem is that each of those particles may be fulfilling an important role. While a simple estimate of particle number is perfectly fine for some industrial applications, therapeutic applications require much more robust measurement. In the case of cancer therapies, for example, each particle, no matter how big or small, may be delivering a needed antidote. And just as with any other kind of dosage, nanoparticle dosage must be exact in order to be safe and effective.

Using the range of particle sizes to calculate the PNC will be the most helpful approach in most cases, said Petersen. The size distribution doesn’t rely on a mean or an average but records the complete spread of particle sizes, so that formulas can be used to work out how many particles are in a sample.

But no matter which approach is used, researchers need to make note of it in their papers, for the sake of comparability with other studies. “Don’t assume that different approaches will give you the same result,” he said.

Petersen adds that he and his colleagues were surprised by how much the coatings on nanoparticles could impact measurement. Some coatings, he noted, can have a positive electrical charge, causing clumping.

Petersen worked in collaboration with researchers from federal laboratories in Switzerland, and with scientists from 3M who have previously made many nanoparticle measurements for use in industrial settings. Researchers from Switzerland, like those in much of the rest of Europe, are keen to learn more about measuring nanoparticles because PNCs are required in many regulatory situations. There hasn’t been much information on which techniques are best or more likely to yield the most precise results across many applications.

“Until now we didn’t even know if we could find agreement among labs about particle number concentrations,” Petersen says. “They are complex. But now we are beginning to see it can be done.”
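
Since the release keeps returning to the difference between an average size and a full size distribution, here is a minimal sketch (mine, not from the paper) of how a particle number concentration might be estimated from a measured mass concentration, once using only the mean diameter and once using the full distribution. The spherical-gold assumption and all of the numbers are illustrative placeholders.

```python
import numpy as np

# Illustrative sketch only: spherical gold nanoparticles are assumed, and the
# diameters below are made-up example values, not data from the paper.
rho_gold = 19.3e3      # density of gold, kg/m^3
mass_conc = 1.0e-6     # mass concentration, kg/m^3 (equivalent to 1 microgram/L)

def particle_volume(d_nm):
    """Volume of a sphere of diameter d_nm (nanometres), returned in m^3."""
    radius_m = d_nm * 1e-9 / 2
    return 4.0 / 3.0 * np.pi * radius_m**3

# Hypothetical measured diameters (nm) for a polydisperse sample
diameters = np.array([3, 9, 9, 10, 11, 11, 12, 18, 25, 40], dtype=float)

# Estimate 1: pretend every particle has the mean diameter
pnc_from_mean = mass_conc / (rho_gold * particle_volume(diameters.mean()))

# Estimate 2: use the full size distribution; the mean particle *volume*
# is dominated by the largest particles, so the two estimates can differ a lot
pnc_from_distribution = mass_conc / (rho_gold * particle_volume(diameters).mean())

print(f"PNC from mean diameter:     {pnc_from_mean:.2e} particles/m^3")
print(f"PNC from size distribution: {pnc_from_distribution:.2e} particles/m^3")
```

Because particle volume scales with the cube of the diameter, a few large particles are enough to pull the two estimates apart, which is why reporting the full distribution matters.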

I love the reference to glassmaking and ancient Mesopotamia. Getting back to current times, here’s a link to and a citation for the paper,

Determining what really counts: modeling and measuring nanoparticle number concentrations by Elijah J. Petersen, Antonio R. Montoro Bustos, Blaza Toman, Monique E. Johnson, Mark Ellefson, George C. Caceres, Anna Lena Neuer, Qilin Chan, Jonathan W. Kemling, Brian Mader, Karen Murphy and Matthias Roesslein. Environmental Science: Nano. Published August 14, 2019. DOI: 10.1039/c9en00462a

This paper is behind a paywall.

AI (artificial intelligence) artist got a show at a New York City art gallery

AI artists first hit my radar in August 2018 when Christie’s Auction House advertised an art auction of a ‘painting’ by an algorithm (artificial intelligence). There’s more in my August 31, 2018 posting but, briefly, a French art collective, Obvious, submitted a painting, “Portrait of Edmond de Belamy,” that was created by an artificial intelligence agent and expected to sell for an estimated $7,000 – $10,000. They weren’t even close. According to Ian Bogost’s March 6, 2019 article for The Atlantic, the painting sold for $432,500 in October 2018.

It has also, Bogost notes in his article, occasioned an art show (Note: Links have been removed),

… part of “Faceless Portraits Transcending Time,” an exhibition of prints recently shown [February 13 – March 5, 2019] at the HG Contemporary gallery in Chelsea, the epicenter of New York’s contemporary-art world. All of them were created by a computer.

The catalog calls the show a “collaboration between an artificial intelligence named AICAN and its creator, Dr. Ahmed Elgammal,” a move meant to spotlight, and anthropomorphize, the machine-learning algorithm that did most of the work. According to HG Contemporary, it’s the first solo gallery exhibit devoted to an AI artist.

If they hadn’t found each other in the New York art scene, the players involved could have met on a Spike Jonze film set: a computer scientist commanding five-figure print sales from software that generates inkjet-printed images; a former hotel-chain financial analyst turned Chelsea techno-gallerist with apparent ties to fine-arts nobility; a venture capitalist with two doctoral degrees in biomedical informatics; and an art consultant who put the whole thing together, A-Team–style, after a chance encounter at a blockchain conference. Together, they hope to reinvent visual art, or at least to cash in on machine-learning hype along the way.

The show in New York City, “Faceless Portraits …,” exhibited work by an artificially intelligent artist-agent (I’m creating a new term to suit my purposes) that’s different from the one used by Obvious to create “Portrait of Edmond de Belamy.” As noted earlier, that painting sold for a lot of money (Note: Links have been removed),

Bystanders in and out of the art world were shocked. The print had never been shown in galleries or exhibitions before coming to market at auction, a channel usually reserved for established work. The winning bid was made anonymously by telephone, raising some eyebrows; art auctions can invite price manipulation. It was created by a computer program that generates new images based on patterns in a body of existing work, whose features the AI “learns.” What’s more, the artists who trained and generated the work, the French collective Obvious, hadn’t even written the algorithm or the training set. They just downloaded them, made some tweaks, and sent the results to market.

“We are the people who decided to do this,” the Obvious member Pierre Fautrel said in response to the criticism, “who decided to print it on canvas, sign it as a mathematical formula, put it in a gold frame.” A century after Marcel Duchamp made a urinal into art [emphasis mine] by putting it in a gallery, not much has changed, with or without computers. As Andy Warhol famously said, “Art is what you can get away with.”

A bit of a segue here: there is a controversy as to whether that ‘urinal art’, also known as The Fountain, should be attributed to Duchamp, as noted in my January 23, 2019 posting titled ‘Baroness Elsa von Freytag-Loringhoven, Marcel Duchamp, and the Fountain’.

Getting back to the main action, Bogost goes on to describe the technologies underlying the two different AI artist-agents (Note: Links have been removed),

… Using a computer is hardly enough anymore; today’s machines offer all kinds of ways to generate images that can be output, framed, displayed, and sold—from digital photography to artificial intelligence. Recently, the fashionable choice has become generative adversarial networks, or GANs, the technology that created Portrait of Edmond de Belamy. Like other machine-learning methods, GANs use a sample set—in this case, art, or at least images of it—to deduce patterns, and then they use that knowledge to create new pieces. A typical Renaissance portrait, for example, might be composed as a bust or three-quarter view of a subject. The computer may have no idea what a bust is, but if it sees enough of them, it might learn the pattern and try to replicate it in an image.

GANs use two neural nets (a way of processing information modeled after the human brain) to produce images: a “generator” and a “discerner.” The generator produces new outputs—images, in the case of visual art—and the discerner tests them against the training set to make sure they comply with whatever patterns the computer has gleaned from that data. The quality or usefulness of the results depends largely on having a well-trained system, which is difficult.

That’s why folks in the know were upset by the Edmond de Belamy auction. The image was created by an algorithm the artists didn’t write, trained on an “Old Masters” image set they also didn’t create. The art world is no stranger to trend and bluster driving attention, but the brave new world of AI painting appeared to be just more found art, the machine-learning equivalent of a urinal on a plinth.

Ahmed Elgammal thinks AI art can be much more than that. A Rutgers University professor of computer science, Elgammal runs an art-and-artificial-intelligence lab, where he and his colleagues develop technologies that try to understand and generate new “art” (the scare quotes are Elgammal’s) with AI—not just credible copies of existing work, like GANs do. “That’s not art, that’s just repainting,” Elgammal says of GAN-made images. “It’s what a bad artist would do.”

Elgammal calls his approach a “creative adversarial network,” or CAN. It swaps a GAN’s discerner—the part that ensures similarity—for one that introduces novelty instead. The system amounts to a theory of how art evolves: through small alterations to a known style that produce a new one. That’s a convenient take, given that any machine-learning technique has to base its work on a specific training set.

The results are striking and strange, although calling them a new artistic style might be a stretch. They’re more like credible takes on visual abstraction. The images in the show, which were produced based on training sets of Renaissance portraits and skulls, are more figurative, and fairly disturbing. Their gallery placards name them dukes, earls, queens, and the like, although they depict no actual people—instead, human-like figures, their features smeared and contorted yet still legible as portraiture. Faceless Portrait of a Merchant, for example, depicts a torso that might also read as the front legs and rear haunches of a hound. Atop it, a fleshy orb comes across as a head. The whole scene is rippled by the machine-learning algorithm, in the way of so many computer-generated artworks.

Faceless Portrait of a Merchant, one of the AI portraits produced by Ahmed Elgammal and AICAN. (Artrendex Inc.) [downloaded from https://www.theatlantic.com/technology/archive/2019/03/ai-created-art-invades-chelsea-gallery-scene/584134/]
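
For readers who like to see ideas as code, here is a deliberately tiny sketch of the generator/“discerner” loop Bogost describes, written in PyTorch. It is a generic toy, not the system that produced “Portrait of Edmond de Belamy”: the layer sizes, the random stand-in “training set” and the training schedule are all placeholder assumptions.

```python
import torch
import torch.nn as nn

# Toy generative adversarial network: the "generator" maps random noise to
# fake samples, the "discriminator" (Bogost's "discerner") scores how much a
# sample looks like it came from the training set. Sizes are arbitrary toys;
# a real art-generating GAN would use convolutional nets and image data.
latent_dim, data_dim = 16, 64

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),  # raw score; BCEWithLogitsLoss applies the sigmoid
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

# Stand-in "training set": random vectors playing the role of digitized artworks.
real_data = torch.randn(256, data_dim)

for step in range(200):
    real = real_data[torch.randint(0, 256, (32,))]
    fake = generator(torch.randn(32, latent_dim))

    # 1) Train the discriminator to tell real samples from generated ones.
    opt_d.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()
```

Elgammal’s CAN, as described above, keeps the same two-network structure but changes the second network’s objective so that it rewards stylistic novelty rather than pure resemblance to the training set.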

Bogost consults an expert on portraiture for a discussion about the particularities of portraiture and the shortcomings one might expect of an AI artist-agent (Note: A link has been removed),

“You can’t really pick a form of painting that’s more charged with cultural meaning than portraiture,” John Sharp, an art historian trained in 15th-century Italian painting and the director of the M.F.A. program in design and technology at Parsons School of Design, told me. The portrait isn’t just a style, it’s also a host for symbolism. “For example, men might be shown with an open book to show how they are in dialogue with that material; or a writing implement, to suggest authority; or a weapon, to evince power.” Take Portrait of a Youth Holding an Arrow, an early-16th-century Boltraffio portrait that helped train the AICAN database for the show. The painting depicts a young man, believed to be the Bolognese poet Girolamo Casio, holding an arrow at an angle in his fingers and across his chest. It doubles as both weapon and quill, a potent symbol of poetry and aristocracy alike. Along with the arrow, the laurels in Casio’s hair are emblems of Apollo, the god of both poetry and archery.

A neural net couldn’t infer anything about the particular symbolic trappings of the Renaissance or antiquity—unless it was taught to, and that wouldn’t happen just by showing it lots of portraits. For Sharp and other critics of computer-generated art, the result betrays an unforgivable ignorance about the supposed influence of the source material.

But for the purposes of the show, the appeal to the Renaissance might be mostly a foil, a way to yoke a hip, new technology to traditional painting in order to imbue it with the gravity of history: not only a Chelsea gallery show, but also an homage to the portraiture found at the Met. To reinforce a connection to the cradle of European art, some of the images are presented in elaborate frames, a decision the gallerist, Philippe Hoerle-Guggenheim (yes, that Guggenheim; he says the relation is “distant”) [the Guggenheim name is strongly associated with the visual arts by way of the two Guggenheim museums, one in New York City and the other in Bilbao, Spain], told me he insisted upon. Meanwhile, the technical method makes its way onto the gallery placards in an official-sounding way—“Creative Adversarial Network print.” But both sets of inspirations, machine-learning and Renaissance portraiture, get limited billing and zero explanation at the show. That was deliberate, Hoerle-Guggenheim said. He’s betting that the simple existence of a visually arresting AI painting will be enough to draw interest—and buyers. It would turn out to be a good bet.

This is a fascinating article and I have one last excerpt, which poses this question: is an AI artist-agent a collaborator or a medium? There’s also speculation about how AI artist-agents might impact the business of art (Note: Links have been removed),

… it’s odd to list AICAN as a collaborator—painters credit pigment as a medium, not as a partner. Even the most committed digital artists don’t present the tools of their own inventions that way; when they do, it’s only after years, or even decades, of ongoing use and refinement.

But Elgammal insists that the move is justified because the machine produces unexpected results. “A camera is a tool—a mechanical device—but it’s not creative,” he said. “Using a tool is an unfair term for AICAN. It’s the first time in history that a tool has had some kind of creativity, that it can surprise you.” Casey Reas, a digital artist who co-designed the popular visual-arts-oriented coding platform Processing, which he uses to create some of his fine art, isn’t convinced. “The artist should claim responsibility over the work rather than to cede that agency to the tool or the system they create,” he told me.

Elgammal’s financial interest in AICAN might explain his insistence on foregrounding its role. Unlike a specialized print-making technique or even the Processing coding environment, AICAN isn’t just a device that Elgammal created. It’s also a commercial enterprise.

Elgammal has already spun off a company, Artrendex, that provides “artificial-intelligence innovations for the art market.” One of them offers provenance authentication for artworks; another can suggest works a viewer or collector might appreciate based on an existing collection; another, a system for cataloging images by visual properties and not just by metadata, has been licensed by the Barnes Foundation to drive its collection-browsing website.

The company’s plans are more ambitious than recommendations and fancy online catalogs. When presenting on a panel about the uses of blockchain for managing art sales and provenance, Elgammal caught the attention of Jessica Davidson, an art consultant who advises artists and galleries in building collections and exhibits. Davidson had been looking for business-development partnerships, and she became intrigued by AICAN as a marketable product. “I was interested in how we can harness it in a compelling way,” she says.

The art market is just that: a market. Some of the most renowned names in art today, from Damien Hirst to Banksy, trade in the trade of art as much as—and perhaps even more than—in the production of images, objects, and aesthetics. No artist today can avoid entering that fray, Elgammal included. “Is he an artist?” Hoerle-Guggenheim asked himself of the computer scientist. “Now that he’s in this context, he must be.” But is that enough? In Sharp’s estimation, “Faceless Portraits Transcending Time” is a tech demo more than a deliberate oeuvre, even compared to the machine-learning-driven work of his design-and-technology M.F.A. students, who self-identify as artists first.

Judged as Banksy or Hirst might be, Elgammal’s most art-worthy work might be the Artrendex start-up itself, not the pigment-print portraits that its technology has output. Elgammal doesn’t treat his commercial venture like a secret, but he also doesn’t surface it as a beneficiary of his supposedly earnest solo gallery show. He’s argued that AI-made images constitute a kind of conceptual art, but conceptualists tend to privilege process over product or to make the process as visible as the product.

Hoerle-Guggenheim worked as a financial analyst [emphasis mine] for Hyatt before getting into the art business via some kind of consulting deal (he responded cryptically when I pressed him for details). …

If you have the time, I recommend reading Bogost’s March 6, 2019 article for The Atlantic in its entirety; these excerpts don’t do it justice.

Portraiture: what does it mean these days?

After reading the article I have a few questions. What exactly do Bogost and the arty types in the article mean by the word ‘portrait’? “Portrait of Edmond de Belamy” is an image of someone who doesn’t exist and never has, and the exhibit “Faceless Portraits Transcending Time” features images that don’t bear much or, in some cases, any resemblance to human beings. Maybe this is considered a dull question by people in the know, but I’m an outsider and I found the paradox (portraits of nonexistent people or nonpeople) kind of interesting.

BTW, I double-checked my assumption about portraits and found this definition in the Portrait Wikipedia entry (Note: Links have been removed),

A portrait is a painting, photograph, sculpture, or other artistic representation of a person [emphasis mine], in which the face and its expression is predominant. The intent is to display the likeness, personality, and even the mood of the person. For this reason, in photography a portrait is generally not a snapshot, but a composed image of a person in a still position. A portrait often shows a person looking directly at the painter or photographer, in order to most successfully engage the subject with the viewer.

So, portraits that aren’t portraits give rise to some philosophical questions but Bogost either didn’t want to jump into that rabbit hole (segue into yet another topic) or, as I hinted earlier, may have assumed his audience had previous experience of those kinds of discussions.

Vancouver (Canada) and a ‘portraiture’ exhibit at the Rennie Museum

By one of life’s coincidences, Vancouver’s Rennie Museum had an exhibit (February 16 – June 15, 2019) that illuminates questions about art collecting and portraiture. From a February 7, 2019 Rennie Museum news release,

[downloaded from https://renniemuseum.org/press-release-spring-2019-collected-works/] Courtesy: Rennie Museum

February 7, 2019

Press Release | Spring 2019: Collected Works
By rennie museum

rennie museum is pleased to present Spring 2019: Collected Works, a group exhibition encompassing the mediums of photography, painting and film. A portraiture of the collecting spirit [emphasis mine], the works exhibited invite exploration of what collected objects, and both the considered and unintentional ways they are displayed, inform us. Featuring the works of four artists—Andrew Grassie, William E. Jones, Louise Lawler and Catherine Opie—the exhibition runs from February 16 to June 15, 2019.

Four exquisite paintings by Scottish painter Andrew Grassie detailing the home and private storage space of a major art collector provide a peek at how the passionately devoted integrates and accommodates the physical embodiments of such commitment into daily life. Grassie’s carefully constructed, hyper-realistic images also pose the question, “What happens to art once it’s sold?” In the transition from pristine gallery setting to idiosyncratic private space, how does the new context infuse our reading of the art and how does the art shift our perception of the individual?

Furthering the inquiry into the symbiotic exchange between possessor and possession, a selection of images by American photographer Louise Lawler depicting art installed in various private and public settings questions how the bilateral relationship permeates our interpretation when the collector and the collected are no longer immediately connected. What does de-acquisitioning an object tell us, and how does provenance affect our consideration of the art?

The question of legacy became an unexpected facet of 700 Nimes Road (2010-2011), American photographer Catherine Opie’s portrait of legendary actress Elizabeth Taylor. Opie did not directly photograph Taylor for any of the fifty images in the expansive portfolio. Instead, she focused on Taylor’s home and the objects within, inviting viewers to see—then see beyond—the façade of fame and consider how both treasures and trinkets act as vignettes to the stories of a life. Glamorous images of jewels and trophies juxtapose with mundane shots of a printer and the remote-control user manual. Groupings of major artworks on the wall are as illuminating of the home’s mistress as clusters of personal photos. Taylor passed away part way through Opie’s project. The subsequent photos include Taylor’s mementos heading off to auction, raising the question, “Once the collections that help to define someone are disbursed, will our image of that person lose focus?”

In a similar fashion, the twenty-two photographs in Villa Iolas (1982/2017), by American artist and filmmaker William E. Jones, depict the Athens home of iconic art dealer and collector Alexander Iolas. Taken in 1982 by Jones during his first travels abroad, the photographs of art, furniture and antiquities tell a story of privilege that contrast sharply with the images Jones captures on a return visit in 2016. Nearly three decades after Iolas’s 1989 death, his home sits in dilapidation, looted and vandalized. Iolas played an extraordinary role in the evolution of modern art, building the careers of Max Ernst, Yves Klein and Giorgio de Chirico. He gave Andy Warhol his first solo exhibition and was a key advisor to famed collectors John and Dominique de Menil. Yet in the years since his death, his intention of turning his home into a modern art museum as a gift to Greece, along with his reputation, crumbled into ruins. The photographs taken by Jones during his visits in two different eras are incorporated into the film Fall into Ruin (2017), along with shots of contemporary Athens and antiquities on display at the National Archaeological Museum.

“I ask a lot of questions about how portraiture functions: what is there to describe the person or time we live in or a certain set of politics…”
 – Catherine Opie, The Guardian, Feb 9, 2016

We tend to think of the act of collecting as a formal activity yet it can happen casually on a daily basis, often in trivial ways. While we readily acknowledge a collector consciously assembling with deliberate thought, we give lesser consideration to the arbitrary accumulations that each of us accrue. Be it master artworks, incidental baubles or random curios, the objects we acquire and surround ourselves with tell stories of who we are.

Andrew Grassie (Scotland, b. 1966) is a painter known for his small scale, hyper-realist works. He has been the subject of solo exhibitions at the Tate Britain; Talbot Rice Gallery, Edinburgh; institut supérieur des arts de Toulouse; and rennie museum, Vancouver, Canada. He lives and works in London, England.

William E. Jones (USA, b. 1962) is an artist, experimental film-essayist and writer. Jones’s work has been the subject of retrospectives at Tate Modern, London; Anthology Film Archives, New York; Austrian Film Museum, Vienna; and, Oberhausen Short Film Festival. He is a recipient of the John Simon Guggenheim Memorial Fellowship and the Creative Capital/Andy Warhol Foundation Arts Writers Grant. He lives and works in Los Angeles, USA.

Louise Lawler (USA, b. 1947) is a photographer and one of the foremost members of the Pictures Generation. Lawler was the subject of a major retrospective at the Museum of Modern Art, New York in 2017. She has held exhibitions at the Whitney Museum of American Art, New York; Stedelijk Museum, Amsterdam; National Museum of Art, Oslo; and Musée d’Art Moderne de La Ville de Paris. She lives and works in New York.

Catherine Opie (USA, b. 1961) is a photographer and educator. Her work has been exhibited at Wexner Center for the Arts, Ohio; Henie Onstad Art Center, Oslo; the Los Angeles County Museum of Art; Portland Art Museum; and the Guggenheim Museum, New York. She is the recipient of United States Artist Fellowship, Julius Shulman’s Excellence in Photography Award, and the Smithsonian’s Archive of American Art Medal. She lives and works in Los Angeles.

rennie museum opened in October 2009 in historic Wing Sang, the oldest structure in Vancouver’s Chinatown, to feature dynamic exhibitions comprising only art drawn from rennie collection. Showcasing works by emerging and established international artists, the exhibits, accompanied by supporting catalogues, are open free to the public through engaging guided tours. The museum’s commitment to providing access to arts and culture is also expressed through its education program, which offers free age-appropriate tours and customized workshops to children of all ages.

rennie collection is a globally recognized collection of contemporary art that focuses on works that tackle issues related to identity, social commentary and injustice, appropriation, and the nature of painting, photography, sculpture and film. Currently the collection includes works by over 370 emerging and established artists, with over fifty collected in depth. The Vancouver based collection engages actively with numerous museums globally through a robust, artist-centric, lending policy.

So despite the Wikipedia definition, it seems that portraits don’t always feature people. While Bogost didn’t jump into that particular rabbit hole, he did touch on the business side of art.

What about intellectual property?

Bogost doesn’t explicitly discuss this particular issue. It’s a big topic so I’m touching on it only lightly. If an artist works* with an AI, the question as to ownership of the artwork could prove thorny. Is the copyright owner the computer scientist or the artist or both? Or does the AI artist-agent itself own the copyright? That last question may not be all that farfetched. Sophia, a social humanoid robot, has occasioned thought about ‘personhood.’ (Note: The robots mentioned in this posting have artificial intelligence.) From the Sophia (robot) Wikipedia entry (Note: Links have been removed),

Sophia has been interviewed in the same manner as a human, striking up conversations with hosts. Some replies have been nonsensical, while others have impressed interviewers such as 60 Minutes’ Charlie Rose.[12] In a piece for CNBC, when the interviewer expressed concerns about robot behavior, Sophia joked that he had “been reading too much Elon Musk. And watching too many Hollywood movies”.[27] Musk tweeted that Sophia should watch The Godfather and asked “what’s the worst that could happen?”[28][29] Business Insider’s chief UK editor Jim Edwards interviewed Sophia, and while the answers were “not altogether terrible”, he predicted it was a step towards “conversational artificial intelligence”.[30] At the 2018 Consumer Electronics Show, a BBC News reporter described talking with Sophia as “a slightly awkward experience”.[31]

On October 11, 2017, Sophia was introduced to the United Nations with a brief conversation with the United Nations Deputy Secretary-General, Amina J. Mohammed.[32] On October 25, at the Future Investment Summit in Riyadh, the robot was granted Saudi Arabian citizenship [emphasis mine], becoming the first robot ever to have a nationality.[29][33] This attracted controversy as some commentators wondered if this implied that Sophia could vote or marry, or whether a deliberate system shutdown could be considered murder. Social media users used Sophia’s citizenship to criticize Saudi Arabia’s human rights record. In December 2017, Sophia’s creator David Hanson said in an interview that Sophia would use her citizenship to advocate for women’s rights in her new country of citizenship; Newsweek criticized that “What [Hanson] means, exactly, is unclear”.[34] On November 27, 2018 Sophia was given a visa by Azerbaijan while attending Global Influencer Day Congress held in Baku. On December 15, 2018, Sophia was appointed a Belt and Road Innovative Technology Ambassador by China.[35]

As for an AI artist-agent’s intellectual property rights, I have a July 10, 2017 posting featuring that question in more detail. Whether you read that piece or not, it seems obvious that artists might hesitate to call an AI agent a partner rather than a medium of expression. After all, a partner (and/or the computer scientist who developed the programme) might expect to share in property rights and profits, but paint, marble, plastic, and other media used by artists don’t have those expectations.

Moving slightly off topic, in my July 10, 2017 posting I mentioned a competition (literary and performing arts rather than visual arts) called, ‘Dartmouth College and its Neukom Institute Prizes in Computational Arts’. It was started in 2016 and, as of 2018, was still operational under this name: Creative Turing Tests. Assuming there’ll be contests for prizes in 2019, there’s (from the contest site) [1] PoetiX, a competition in computer-generated sonnet writing; [2] Musical Style, composition algorithms in various styles, and human-machine improvisation …; and [3] DigiLit, algorithms able to produce “human-level” short story writing that is indistinguishable from an “average” human effort. You can find the contest site here.

*’worsk’ corrected to ‘works’ on June 9, 2022

In depth report on European Commission’s nanotechnology definition

A February 13, 2019 news item on the (US) National Law Review blog announces a new report on nanomaterial definitions (Note: A link has been removed),

The European Commission’s (EC) Joint Research Center (JRC) published on February 13, 2019, a report entitled An overview of concepts and terms used in the European Commission’s definition of nanomaterial. … The report provides recommendations for a harmonized and coherent implementation of the nanomaterial definition in any specific regulatory context at the European Union (EU) and national level.

©2019 Bergeson & Campbell, P.C.

There’s a bit more detail about the report in a February 19, 2019 European Commission press release,

The JRC just released a report clarifying the key concepts and terms used in the European Commission’s nanomaterial definition.

This will support stakeholders in the correct implementation of legislation that makes reference to the definition.

Nanotechnology may well be one of the fastest-moving sectors of the last few years.
The number of products produced by nanotechnology or containing nanomaterials entering the market is increasing.

As the technology develops, nanomaterials are delivering benefits to many sectors, including: healthcare (in targeted drug delivery, regenerative medicine, and diagnostics), electronics, cosmetics, textiles, information technology and environmental protection.
As the name suggests, nanomaterials are very small – so small that they are invisible to the human eye.

In fact, nanomaterials contain particles smaller than 100 nanometres (100 millionths of a millimetre).

Nanomaterials have unique physical and chemical characteristics.
They can be used in consumer products to improve the products’ properties – for instance, to make something more resistant against breaking, stains or humidity.

Nanomaterials have undoubtedly enabled progress in many areas, but as with all innovation, we must ensure that the impacts on human health and the environment are properly considered.

The European Commission’s Recommendation on the definition of nanomaterials (2011/696/EU) provides a general basis for regulatory instruments in many areas.

This definition has been used in the EU regulations on biocidal products and medical devices, and the REACH regulation. It is also used in various national legislative texts.

However, in the context of a JRC survey, many respondents expressed difficulties with the implementation of the EC definition, in particular due to the fact that some of the key concepts and terms could be interpreted in different ways.

Therefore, the JRC just published the report “An overview of concepts and terms used in the European Commission’s definition of nanomaterial” which aims to provide a clarification of the key concepts and terms of the nanomaterial definition and discusses them in a regulatory context.

This will facilitate a common understanding and foster a harmonised and coherent implementation of the nanomaterial definition in different regulatory contexts at EU and national level.

Not my favourite topic but definitions and their implementation are important whether I like it or not.

Mimicking the brain with an evolvable organic electrochemical transistor

Simone Fabiano and Jennifer Gerasimov have developed a learning transistor that mimics the way synapses function. Credit: Thor Balkhed

At a guess, this was originally a photograph that has been passed through some sort of programme to give it a painting-like quality.

Moving on to the research, I don’t see any reference to memristors (another of the ‘devices’ that mimic the human brain), so perhaps this is an entirely different way to mimic human brains? A February 5, 2019 news item on ScienceDaily announces the work from Linköping University (Sweden),

A new transistor based on organic materials has been developed by scientists at Linköping University. It has the ability to learn, and is equipped with both short-term and long-term memory. The work is a major step on the way to creating technology that mimics the human brain.

A February 5, 2019 Linköping University press release (also on EurekAlert), which originated the news item, describes this ‘nonmemristor’ research into brain-like computing in more detail,

Until now, brains have been unique in being able to create connections where there were none before. In a scientific article in Advanced Science, researchers from Linköping University describe a transistor that can create a new connection between an input and an output. They have incorporated the transistor into an electronic circuit that learns how to link a certain stimulus with an output signal, in the same way that a dog learns that the sound of a food bowl being prepared means that dinner is on the way.

A normal transistor acts as a valve that amplifies or dampens the output signal, depending on the characteristics of the input signal. In the organic electrochemical transistor that the researchers have developed, the channel in the transistor consists of an electropolymerised conducting polymer. The channel can be formed, grown or shrunk, or completely eliminated during operation. It can also be trained to react to a certain stimulus, a certain input signal, such that the transistor channel becomes more conductive and the output signal larger.

“It is the first time that real time formation of new electronic components is shown in neuromorphic devices”, says Simone Fabiano, principal investigator in organic nanoelectronics at the Laboratory of Organic Electronics, Campus Norrköping.

The channel is grown by increasing the degree of polymerisation of the material in the transistor channel, thereby increasing the number of polymer chains that conduct the signal. Alternatively, the material may be overoxidised (by applying a high voltage) and the channel becomes inactive. Temporary changes of the conductivity can also be achieved by doping or dedoping the material.

“We have shown that we can induce both short-term and permanent changes to how the transistor processes information, which is vital if one wants to mimic the ways that brain cells communicate with each other”, says Jennifer Gerasimov, postdoc in organic nanoelectronics and one of the authors of the article.

By changing the input signal, the strength of the transistor response can be modulated across a wide range, and connections can be created where none previously existed. This gives the transistor a behaviour that is comparable with that of the synapse, or the communication interface between two brain cells.

It is also a major step towards machine learning using organic electronics. Software-based artificial neural networks are currently used in machine learning to achieve what is known as “deep learning”. Software requires that the signals are transmitted between a huge number of nodes to simulate a single synapse, which takes considerable computing power and thus consumes considerable energy.

“We have developed hardware that does the same thing, using a single electronic component”, says Jennifer Gerasimov.

“Our organic electrochemical transistor can therefore carry out the work of thousands of normal transistors with an energy consumption that approaches the energy consumed when a human brain transmits signals between two cells”, confirms Simone Fabiano.

The transistor channel has not been constructed using the most common polymer used in organic electronics, PEDOT, but instead using a polymer of a newly-developed monomer, ETE-S, produced by Roger Gabrielsson, who also works at the Laboratory of Organic Electronics and is one of the authors of the article. ETE-S has several unique properties that make it perfectly suited for this application – it forms sufficiently long polymer chains, is water-soluble while the polymer form is not, and it produces polymers with an intermediate level of doping. The polymer PETE-S is produced in its doped form with an intrinsic negative charge to balance the positive charge carriers (it is p-doped).
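
As an aside, here is a toy numerical model (my own simplification, not the device physics reported in the paper) of the behaviour described above: a ‘synaptic weight’ with a persistent component that grows with repeated gate pulses, standing in for electropolymerisation of the channel, and a temporary component that decays between pulses, standing in for short-lived doping. All of the constants are arbitrary.

```python
import numpy as np

# Toy model (not from the paper): the channel conductance, read out as a
# "synaptic weight", has a long-term component that grows with repeated gate
# pulses (standing in for electropolymerisation of the channel) and a
# short-term component that decays between pulses (standing in for temporary
# doping). All constants are arbitrary illustrative values.
dt = 0.1                                          # time step, s
t = np.arange(0, 60, dt)
gate_pulse = (t < 20) & (np.mod(t, 2.0) < 0.2)    # pulse train during the first 20 s

w_long, w_short = 0.0, 0.0
weight = np.zeros_like(t)
for i, pulsed in enumerate(gate_pulse):
    if pulsed:
        w_long += 0.01 * (1.0 - w_long)    # slow, persistent growth ("training")
        w_short += 0.10 * (1.0 - w_short)  # fast, temporary boost
    w_short *= np.exp(-dt / 5.0)           # short-term component decays (~5 s)
    weight[i] = w_long + w_short

print(f"weight just after the pulse train: {weight[int(20 / dt)]:.2f}")
print(f"weight 40 s later:                 {weight[-1]:.2f}  (long-term part persists)")
```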

Here’s a link to and a citation for the paper,

An Evolvable Organic Electrochemical Transistor for Neuromorphic Applications by Jennifer Y. Gerasimov, Roger Gabrielsson, Robert Forchheimer, Eleni Stavrinidou, Daniel T. Simon, Magnus Berggren, Simone Fabiano. Advanced Science DOI: https://doi.org/10.1002/advs.201801339 First published: 04 February 2019

This paper is open access.

There’s one other image associated this work that I want to include here,

Synaptic transistor. Sketch of the organic electrochemical transistor, formed by electropolymerization of ETE‐S in the transistor channel. The electrolyte solution is confined by a PDMS well (not shown). In this work, we define the input at the gate as the presynaptic signal and the response at the drain as the postsynaptic terminal. During operation, the drain voltage is kept constant while the gate is pulsed. Synaptic weight is defined as the amplitude of the current response to a standard gate voltage characterization pulse of −0.1 V. Different memory functionalities are accessible by applying gate voltage. Courtesy: Linköping University Researchers

Vitamin C helps gold nanowires grow

This research gives new meaning to ‘Take your vitamin C’ as can be seen in a February 19, 2019 news item on Nanowerk,

A boost of vitamin C helped Rice University scientists turn small gold nanorods into fine gold nanowires.

Common, mild ascorbic acid is the not-so-secret sauce that helped the Rice lab of chemist Eugene Zubarev grow pure batches of nanowires from stumpy nanorods without the drawbacks of previous techniques.

“There’s no novelty per se in using vitamin C to make gold nanostructures because there are many previous examples,” Zubarev said. “But the slow and controlled reduction achieved by vitamin C is surprisingly suitable for this type of chemistry in producing extra-long nanowires.”

A February 19, 2019 Rice University news release (also on EurekAlert), which originated the news item, provides more technical detail about the research,

The Rice lab’s nanorods are about 25 nanometers thick at the start of the process – and remain that way while their length grows to become long nanowires. Above 1,000 nanometers in length, the objects are considered nanowires, and that matters. The wires’ aspect ratio – length over width – dictates how they absorb and emit light and how they conduct electrons. Combined with gold’s inherent metallic properties, that could enhance their value for sensing, diagnostic, imaging and therapeutic applications.

Zubarev and lead author Bishnu Khanal, a Rice chemistry alumnus, succeeded in making their particles go far beyond the transition from nanorod to nanowire, theoretically to unlimited length.

The researchers also showed the process is fully controllable and reversible. That makes it possible to produce nanowires of any desired length, and thus the desired configuration for electronic or light-manipulating applications, especially those that involve plasmons, the light-triggered oscillation of electrons on a metal’s surface.

The nanowires’ plasmonic response can be tuned to emit light from visible to infrared and theoretically far beyond, depending on their aspect ratios.

The process is slow, so it takes hours to grow a micron-long nanowire. “In this paper, we only reported structures up to 4 to 5 microns in length,” Zubarev said. “But we’re working to make much longer nanowires.”

The growth process only appeared to work with pentahedrally twinned gold nanorods, which contain five linked crystals. These five-sided rods — “Think of a pencil, but with five sides instead of six,” Zubarev said — are stable along the flat surfaces, but not at the tips.

“The tips also have five faces, but they have a different arrangement of atoms,” he said. “The energy of those atoms is slightly lower, and when new atoms are deposited there, they don’t migrate anywhere else.”

That keeps the growing wires from gaining girth. Every added atom increases the wire’s length, and thus the aspect ratio.

The nanorods’ reactive tips get help from a surfactant, CTAB, that covers the flat surfaces of nanorods. “The surfactant forms a very dense, tight bilayer on the sides, but it cannot cover the tips effectively,” Zubarev said.

That leaves the tips open to an oxidation or reduction reaction. The ascorbic acid provides electrons that combine with gold ions and settle at the tips in the form of gold atoms. And unlike carbon nanotubes in a solution that easily aggregate, the nanowires keep their distance from one another.

“The most valuable feature is that it is truly one-dimensional elongation of nanorods to nanowires,” Zubarev said. “It does not change the diameter, so in principle we can take small rods with an aspect ratio of maybe two or three and elongate them to 100 times the length.”
He said the process should apply to other metal nanorods, including silver.
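
A quick back-of-envelope sketch (mine, not the researchers’) gives a feel for those numbers: treating the rod as a 25-nanometre-wide cylinder and using the atomic volume of face-centred-cubic gold, one can estimate how many atoms must land on the tips to push the aspect ratio from roughly 3 to 100.

```python
import math

# Back-of-envelope sketch (not from the paper): roughly how many gold atoms
# must be added at the tips to take a 25 nm-thick rod from aspect ratio 3 to
# aspect ratio 100? The rod is approximated as a cylinder even though the
# real cross-section is pentagonal.
diameter_nm = 25.0
atomic_volume_nm3 = 0.408**3 / 4   # fcc gold: lattice constant 0.408 nm, 4 atoms per cell

def atoms_for_elongation(length_from_nm, length_to_nm, d_nm=diameter_nm):
    added_volume_nm3 = math.pi * (d_nm / 2) ** 2 * (length_to_nm - length_from_nm)
    return added_volume_nm3 / atomic_volume_nm3

atoms = atoms_for_elongation(3 * diameter_nm, 100 * diameter_nm)
print(f"roughly {atoms:.1e} gold atoms per nanowire")
```

The answer comes out in the tens of millions of atoms per wire, which is consistent with the hours-long growth times mentioned above.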

Here’s a link to and a citation for the paper,

Chemical Transformation of Nanorods to Nanowires: Reversible Growth and Dissolution of Anisotropic Gold Nanostructures by Bishnu P. Khanal and Eugene R. Zubarev. ACS Nano, 2019, 13 (2), pp 2370–2378 DOI: 10.1021/acsnano.8b09203 Publication Date (Web): February 12, 2019

Copyright © 2019 American Chemical Society

This paper is behind a paywall. Below you’ll find an image of what I believe to be the vitamin C-enhanced gold nanowires.

Caption: Gold nanowires grown in the Rice University lab of chemist Eugene Zubarev promise to provide tunable plasmonic properties for optical and electronic applications. The wires can be controllably grown from nanorods, or reduced. Credit: Zubarev Research Group/Rice University

Quantum dots as pollen labels: tracking pollinators

Caption: This bee was caught after it visited a flower of which the pollen grains were labelled with quantum dots. Under the microscope one can see where the pollen was placed, and actually determine which insects carry the most pollen from which flower. Credit: Corneile Minnaar

Fascinating, yes? Next, the news and, then, the video about the research,

A February 14, 2019 news item on ScienceDaily announces research from South Africa,

A pollination biologist from Stellenbosch University in South Africa is using quantum dots to track the fate of individual pollen grains. This is breaking new ground in a field of research that has been hampered by the lack of a universal method to track pollen for over a century.

A February 13, 2019 Stellenbosch University press release (also on EurekAlert but published February 14, 2019) by Wiida Fourie-Basson, which originated the news item, expands on the theme,

In an article published in the journal Methods in Ecology and Evolution this week, Dr Corneile Minnaar describes this novel method, which will enable pollination biologists to track the whole pollination process from the first visit by a pollinator to its endpoint – either successfully transferred to another flower’s stigma or lost along the way.

Despite over two hundred years of detailed research on pollination, Minnaar says, researchers do not know for sure where most of the microscopically tiny pollen grains actually land up once they leave flowers: “Plants produce massive amounts of pollen, but it looks like more than 90% of it never reaches stigmas. For the tiny fraction of pollen grains that make their way to stigmas, the journey is often unclear–which pollinators transferred the grains and from where?”

Starting in 2015, Minnaar decided to tread where many others have thus far failed, and took up the challenge through his PhD research in the Department of Botany and Zoology at Stellenbosch University (SU).

“Most plant species on earth are reliant on insects for pollination, including more than 30% of the food crops we eat. With insects facing rapid global decline, it is crucial that we understand which insects are important pollinators of different plants–this starts with tracking pollen,” he explains.

He came upon the idea for a pollen-tracking method after reading an article on the use of quantum dots to track cancer cells in rats (https://doi.org/10.1038/nbt994). Quantum dots are semiconductor nanocrystals that are so small, they behave like artificial atoms. When exposed to UV light, they emit extremely bright light in a range of possible colours. In the case of pollen grains, he figured out that quantum dots with “fat-loving” (lipophilic) ligands would theoretically stick to the fatty outer layer of pollen grains, called pollenkitt, and the glowing colours of the quantum dots can then be used to uniquely “label” pollen grains to see where they end up.

The next step was to find a cost-effective way to view the fluorescing pollen grains under a field dissection microscope. At that stage Minnaar was still using a toy pen from a family restaurant with a little UV LED light that he borrowed from one of his professors.
“I decided to design a fluorescence box that can fit under a dissection microscope. And, because I wanted people to use this method, I designed a box that can easily be 3D-printed at a cost of about R5,000, including the required electronic components.” (view video at https://youtu.be/YHs925F13t0)

[or you can scroll down to the bottom of this post]

So far, the method and excitation box have proven itself as an easy and relatively inexpensive method to track individual pollen grains: “I’ve done studies where I caught the insects after they have visited the plant with quantum-dot labelled anthers, and you can see where the pollen is placed, and which insects actually carry more or less pollen.”
But the post-labelling part of the work still requires hours and hours of painstaking counting and checking: “I think I’ve probably counted more than a hundred thousand pollen grains these last three years,” he laughs.

As a postdoctoral fellow in the research group of Prof Bruce Anderson in the Department of Botany and Zoology at Stellenbosch University, Minnaar will continue to use the method to investigate the many unanswered questions in this field.

Here’s a link to and a citation for the paper,

Using quantum dots as pollen labels to track the fates of individual pollen grains by Corneile Minnaar and Bruce Anderson. Methods in Ecology and Evolution DOI: https://doi.org/10.1111/2041-210X.13155 First published: 25 January 2019

This paper is behind a paywall.

Here is the video,

Desalination waste as a useful resource?

For anyone not familiar with the concept, it’s possible to remove salt from water to make it potable (i.e., drinkable). With growing concerns about water shortages worldwide, turning the ocean into something drinkable is seen as a reasonable solution. One of the problems associated with the solution is waste. As you can see in this post, it’s a big problem.

Illustration depicts the potential of the suggested process. Brine, which could be obtained from the waste stream of reverse osmosis (RO) desalination plants, or from industrial plants or salt mining operations, can be processed to yield useful chemicals such as sodium hydroxide (NaOH) or hydrochloric acid (HCl). Credit: Illustration courtesy of the researchers [downloaded from https://www.sciencedaily.com/releases/2019/02/190213124439.htm]

A February 13, 2019 news item on ScienceDaily announced MIT (Massachusetts Institute of Technology) research on desalination and its waste,

The rapidly growing desalination industry produces water for drinking and for agriculture in the world’s arid coastal regions. But it leaves behind as a waste product a lot of highly concentrated brine, which is usually disposed of by dumping it back into the sea, a process that requires costly pumping systems and that must be managed carefully to prevent damage to marine ecosystems. Now, engineers at MIT say they have found a better way.

In a new study, they show that through a fairly simple process the waste material can be converted into useful chemicals — including ones that can make the desalination process itself more efficient.

A February 13, 2019 MIT news release (also on EurekAlert), which originated the news item, describes the work in detail,

The approach can be used to produce sodium hydroxide, among other products. Otherwise known as caustic soda, sodium hydroxide can be used to pretreat seawater going into the desalination plant. This changes the acidity of the water, which helps to prevent fouling of the membranes used to filter out the salty water — a major cause of interruptions and failures in typical reverse osmosis desalination plants.

The concept is described today in the journal Nature Catalysis and in two other papers by MIT research scientist Amit Kumar, professor of mechanical engineering John. [sic] H. Lienhard V, and several others. Lienhard is the Jameel Professor of Water and Food and the director of the Abdul Latif Jameel Water and Food Systems Lab.

“The desalination industry itself uses quite a lot of it,” Kumar says of sodium hydroxide. “They’re buying it, spending money on it. So if you can make it in situ at the plant, that could be a big advantage.” The amount needed in the plants themselves is far less than the total that could be produced from the brine, so there is also potential for it to be a saleable product.

Sodium hydroxide is not the only product that can be made from the waste brine: Another important chemical used by desalination plants and many other industrial processes is hydrochloric acid, which can also easily be made on site from the waste brine using established chemical processing methods. The chemical can be used for cleaning parts of the desalination plant, but is also widely used in chemical production and as a source of hydrogen.

Currently, the world produces more than 100 billion liters (about 27 billion gallons) a day of water from desalination, which leaves a similar volume of concentrated brine. [emphases mine] Much of that is pumped back out to sea, and current regulations require costly outfall systems to ensure adequate dilution of the salts. Converting the brine can thus be both economically and ecologically beneficial, especially as desalination continues to grow rapidly around the world. “Environmentally safe discharge of brine is manageable with current technology, but it’s much better to recover resources from the brine and reduce the amount of brine released,” Lienhard says.

The method of converting the brine into useful products uses well-known and standard chemical processes, including initial nanofiltration to remove undesirable compounds, followed by one or more electrodialysis stages to produce the desired end product. While the processes being suggested are not new, the researchers have analyzed the potential for production of useful chemicals from brine and proposed a specific combination of products and chemical processes that could be turned into commercial operations to enhance the economic viability of the desalination process, while diminishing its environmental impact.
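
To get a rough sense of the scale involved, here’s a quick back-of-the-envelope sketch in Python. None of the numbers below come from the MIT paper: the brine salinity is an assumption (roughly double typical seawater, a common ballpark for reverse osmosis brine), and the calculation assumes every sodium ion is converted with perfectly efficient electrodialysis, so it is an upper bound rather than a realistic yield.

```python
# Back-of-the-envelope: how much sodium hydroxide could in principle be made
# from desalination brine, and how much electric charge that would take.
# Assumptions (NOT from the paper): ~70 g/L of NaCl in the brine, complete
# conversion of the sodium, and perfectly efficient electrodialysis.

BRINE_L_PER_DAY = 100e9      # "a similar volume" to the ~100 billion L/day of product water
NACL_G_PER_L = 70.0          # assumed brine salinity, roughly double typical seawater

M_NACL = 58.44               # g/mol
M_NAOH = 40.00               # g/mol
FARADAY = 96485.0            # coulombs per mole of electrons

mol_na_per_L = NACL_G_PER_L / M_NACL
naoh_g_per_L = mol_na_per_L * M_NAOH          # one mole of NaOH per mole of Na+
charge_per_L = mol_na_per_L * FARADAY         # one electron transferred per Na+ converted

print(f"Theoretical NaOH yield: {naoh_g_per_L:.1f} g per litre of brine")
print(f"Charge required (ideal): {charge_per_L / 3600:.1f} ampere-hours per litre")
print(f"Global theoretical ceiling: {naoh_g_per_L * BRINE_L_PER_DAY / 1e12:.1f} million tonnes of NaOH per day")
```

Even with these generous assumptions, the point holds: the brine stream could yield far more sodium hydroxide than the desalination plants themselves would ever use, which matches the researchers’ suggestion that the surplus could become a saleable product.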

“This very concentrated brine has to be handled carefully to protect life in the ocean, and it’s a resource waste, and it costs energy to pump it back out to sea,” so turning it into a useful commodity is a win-win, Kumar says. And sodium hydroxide is such a ubiquitous chemical that “every lab at MIT has some,” he says, so finding markets for it should not be difficult.

The researchers have discussed the concept with companies that may be interested in the next step of building a prototype plant to help work out the real-world economics of the process. “One big challenge is cost — both electricity cost and equipment cost,” at this stage, Kumar says.

The team also continues to look at the possibility of extracting other, lower-concentration materials from the brine stream, he says, including various metals and other chemicals, which could make the brine processing an even more economically viable undertaking.

“One aspect that was mentioned … and strongly resonated with me was the proposal for such technologies to support more ‘localized’ or ‘decentralized’ production of these chemicals at the point-of-use,” says Jurg Keller, a professor of water management at the University of Queensland in Australia, who was not involved in this work. “This could have some major energy and cost benefits, since the up-concentration and transport of these chemicals often adds more cost and even higher energy demand than the actual production of these at the concentrations that are typically used.”

The research team also included MIT postdoc Katherine Phillips and undergraduate Janny Cai, and Uwe Schroder at the University of Braunschweig, in Germany. The work was supported by Cadagua, a subsidiary of Ferrovial, through the MIT Energy Initiative.

Here’s a link to and a citation for the paper,

Direct electrosynthesis of sodium hydroxide and hydrochloric acid from brine streams by Amit Kumar, Katherine R. Phillips, Gregory P. Thiel, Uwe Schröder, & John H. Lienhard V. Nature Catalysis volume 2, pages106–113 (2019) DOI: https://doi.org/10.1038/s41929-018-0218-y Published 13 February 2019

This paper is behind a paywall.

A fire-retardant coating made of renewable nanocellulose materials

Firefighters everywhere are likely to appreciate the efforts of researchers at Texas A&M University (US) to develop a non-toxic fire-retardant coating. From a February 12, 2019 news item on Nanowerk (Note: A link has been removed),

Texas A&M University researchers are developing a new kind of flame-retardant coating using renewable, nontoxic materials readily found in nature, which could provide even more effective fire protection for several widely used materials.

Dr. Jaime Grunlan, the Linda & Ralph Schmidt ’68 Professor in the J. Mike Walker ’66 Department of Mechanical Engineering at Texas A&M, led the recently published research that is featured on the cover of a recent issue of the journal Advanced Materials Interfaces (“Super Gas Barrier and Fire Resistance of Nanoplatelet/Nanofibril Multilayer Thin Films”).

Successful development and implementation of the coating could provide better fire protection to materials including upholstered furniture, textiles and insulation.

“These coatings offer the opportunity to reduce the flammability of the polyurethane foam used in a variety of furniture throughout most people’s homes,” Grunlan noted.

A February 8, 2019 Texas A&M University news release (also on EurekAlert) by Steve Kuhlmann, which originated the news item, describes the work being done in collaboration with a Swedish team in more detail,

The project is a result of an ongoing collaboration between Grunlan and a group of researchers at KTH Royal Institute of Technology in Stockholm, Sweden, led by Lars Wagberg. The group, which specializes in utilizing nanocellulose, provided Grunlan with the ingredients he needed to complement his water-based coating procedure.

In nature, both the cellulose – a component of wood and various sea creatures – and clay – a component in soil and rock formations – act as mechanical reinforcements for the structures in which they are found.

“The uniqueness in this current study lies in the use of two naturally occurring nanomaterials, clay nanoplatelets and cellulose nanofibrils,” Grunlan said. “To the best of our knowledge, these ingredients have never been used to make a heat shielding or flame-retardant coating as a multilayer thin film deposited from water.”
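
For readers who haven’t run into layer-by-layer deposition before, here’s a minimal sketch of what building a water-deposited “multilayer thin film” can look like as a dipping schedule. The alternation of a hypothetical cationic binder with the anionic clay nanoplatelets and cellulose nanofibrils is an illustrative assumption on my part; the actual sequence, materials and rinse steps used in the paper may well differ.

```python
# Minimal sketch of a generic layer-by-layer (LbL) dipping schedule, of the kind
# used to build water-deposited multilayer thin films. The particular sequence
# below (a hypothetical cationic binder alternated with anionic clay nanoplatelets
# and anionic cellulose nanofibrils, with a rinse after every dip) is an
# illustrative assumption, not the recipe reported in the paper.

def lbl_schedule(n_cycles: int,
                 cationic: str = "cationic binder solution",
                 anion_a: str = "clay nanoplatelet suspension",
                 anion_b: str = "cellulose nanofibril suspension",
                 rinse: str = "deionized water rinse") -> list[str]:
    """Return an ordered list of dipping steps for n deposition cycles."""
    steps = []
    for cycle in range(1, n_cycles + 1):
        for bath in (cationic, anion_a, cationic, anion_b):
            steps.append(f"cycle {cycle}: dip substrate in {bath}")
            steps.append(f"cycle {cycle}: {rinse}")
    return steps

for step in lbl_schedule(2):
    print(step)
```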

The benefits of this method include the coating’s ability to create an excellent oxygen barrier for plastic films – commonly used for food packaging – and better fire protection at a lower cost than the more toxic ingredients traditionally used in flame-retardant treatments.

To test the coatings, Grunlan and his colleagues applied them to flexible polyurethane foam – often used in furniture cushions – and exposed the treated foam to fire using a butane torch to determine the level of protection the compounds provided.

While uncoated polyurethane foam immediately melts when exposed to flame, the researchers’ coating confined the damage to the surface, leaving the foam underneath undamaged.

“The nanobrick wall structure of the coating reduces the temperature experienced by the underlying foam, which delays combustion,” Grunlan said. “This coating also serves to promote insulating char formation and reduces the release of fumes that feed a fire.”

With the research completed, Grunlan said the next step for the overall flame-retardant project is to transition the methods into industry for implementation and further development. 

Here’s a link to and a citation for the paper,

Super Gas Barrier and Fire Resistance of Nanoplatelet/Nanofibril Multilayer Thin Films by Shuang Qin, Maryam Ghanad Pour, Simone Lazar, Oruç Köklükaya, Joseph Gerringer, Yixuan Song, Lars Wågberg, Jaime C. Grunlan. Advanced Materials Interfaces Volume 6, Issue 2 January 23, 2019 1801424 DOI: https://doi.org/10.1002/admi.201801424 First published online: 16 November 2018

This paper is behind a paywall.

Let them (Rice University scientists) show you how to restore oil-soaked soil

I did not want to cash in (so to speak) on someone else’s fun headline so I played with it. Here is the original headline, which was likely written by either David Ruth or Mike Williams at Rice University (Texas, US): “Lettuce show you how to restore oil-soaked soil.”

A February 1, 2019 news item on ScienceDaily describes the science behind lettuce and oil-soaked soil,

Rice University engineers have figured out how soil contaminated by heavy oil can not only be cleaned but made fertile again.

How do they know it works? They grew lettuce.

Rice engineers Kyriacos Zygourakis and Pedro Alvarez and their colleagues have fine-tuned their method to remove petroleum contaminants from soil through the age-old process of pyrolysis. The technique gently heats soil while keeping oxygen out, which avoids the damage usually done to fertile soil when burning hydrocarbons cause temperature spikes.

Lettuce growing in once oil-contaminated soil revived by a process developed by Rice University engineers. The Rice team determined that pyrolyzing oil-soaked soil for 15 minutes at 420 degrees Celsius is sufficient to eliminate contaminants while preserving the soil’s fertility. The lettuce plants shown here, in treated and fertilized soil, showed robust growth over 14 days. Photo by Wen Song

A February 1, 2019 Rice University news release (also on EurekAlert), which originated the news item, explains more about the work,

While large-volume marine spills get most of the attention, 98 percent of oil spills occur on land, Alvarez points out, with more than 25,000 spills a year reported to the Environmental Protection Agency. That makes the need for cost-effective remediation clear, he said.

“We saw an opportunity to convert a liability, contaminated soil, into a commodity, fertile soil,” Alvarez said.

The key to retaining fertility is to preserve the soil’s essential clays, Zygourakis said. “Clays retain water, and if you raise the temperature too high, you basically destroy them,” he said. “If you exceed 500 degrees Celsius (932 degrees Fahrenheit), dehydration is irreversible.”

The researchers put soil samples from Hearne, Texas, contaminated in the lab with heavy crude, into a kiln to see what temperature best eliminated the most oil, and how long it took.

Their results showed heating samples in the rotating drum at 420 C (788 F) for 15 minutes eliminated 99.9 percent of total petroleum hydrocarbons (TPH) and 94.5 percent of polycyclic aromatic hydrocarbons (PAH), leaving the treated soils with roughly the same pollutant levels found in natural, uncontaminated soil.

The paper appears in the American Chemical Society journal Environmental Science and Technology. It follows several papers by the same group that detailed the mechanism by which pyrolysis removes contaminants and turns some of the unwanted hydrocarbons into char, while leaving behind soil almost as fertile as the original. “While heating soil to clean it isn’t a new process,” Zygourakis said, “we’ve proved we can do it quickly in a continuous reactor to remove TPH, and we’ve learned how to optimize the pyrolysis conditions to maximize contaminant removal while minimizing soil damage and loss of fertility.

“We also learned we can do it with less energy than other methods, and we have detoxified the soil so that we can safely put it back,” he said.

Heating the soil to about 420 C represents the sweet spot for treatment, Zygourakis said. Heating it to 470 C (878 F) did a marginally better job in removing contaminants, but used more energy and, more importantly, decreased the soil’s fertility to the degree that it could not be reused.

“Between 200 and 300 C (392-572 F), the light volatile compounds evaporate,” he said. “When you get to 350 to 400 C (662-752 F), you start breaking first the heteroatom bonds, and then carbon-carbon and carbon-hydrogen bonds, triggering a sequence of radical reactions that convert heavier hydrocarbons to stable, low-reactivity char.”
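
Zygourakis’ temperature regimes can be summarized at a glance; here’s a small Python sketch that maps a candidate treatment temperature onto the behaviour described in the release. The boundaries and the 420 C results are taken straight from the quotes above, while the way the sketch fills the gaps between those ranges is my own simplification.

```python
# Qualitative map from pyrolysis temperature to the behaviour described in the
# Rice news release. The quoted figures (99.9% TPH and 94.5% PAH removal after
# 15 minutes at 420 C) come from the release; how the gaps between the quoted
# ranges are filled in is an illustrative simplification.

def pyrolysis_regime(temp_c: float) -> str:
    if temp_c < 200:
        return "below the temperature ranges discussed in the release"
    if temp_c <= 300:
        return "light volatile compounds evaporate"
    if temp_c < 350:
        return "between the regimes described in the release"
    if temp_c <= 400:
        return ("heteroatom bonds break first, then carbon-carbon and "
                "carbon-hydrogen bonds, converting heavier hydrocarbons "
                "to stable, low-reactivity char")
    if temp_c <= 440:
        return ("reported sweet spot (about 420 C): 15 minutes removed 99.9% "
                "of TPH and 94.5% of PAH while preserving fertility")
    if temp_c < 500:
        return ("marginally better removal (per the 470 C test) but more "
                "energy use and a loss of soil fertility")
    return "clay dehydration becomes irreversible and the soil is damaged"

for t in (250, 380, 420, 470, 520):
    print(f"{t} C: {pyrolysis_regime(t)}")
```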

The true test of the pilot program came when the researchers grew Simpson black-seeded lettuce, a variety for which petroleum is highly toxic, on the original clean soil, some contaminated soil and several pyrolyzed soils. While plants in the treated soils were a bit slower to start, they found that after 21 days, plants grown in pyrolyzed soil with fertilizer or simply water showed the same germination rates and had the same weight as those grown in clean soil.

“We knew we had a process that effectively cleans up oil-contaminated soil and restores its fertility,” Zygourakis said. “But, had we truly detoxified the soil?”

To answer this final question, the Rice team turned to Bhagavatula Moorthy, a professor of neonatology at Baylor College of Medicine, who studies the effects of airborne contaminants on neonatal development. Moorthy and his lab found that extracts taken from oil-contaminated soils were toxic to human lung cells, while exposing the same cell lines to extracts from treated soils had no adverse effects. The study eased concerns that pyrolyzed soil could release airborne dust particles laced with highly toxic pollutants like PAHs.

“One important lesson we learned is that different treatment objectives for regulatory compliance, detoxification and soil-fertility restoration need not be mutually exclusive and can be simultaneously achieved,” Alvarez said.

Here’s a link to and a citation for the paper,

Pilot-Scale Pyrolytic Remediation of Crude-Oil-Contaminated Soil in a Continuously-Fed Reactor: Treatment Intensity Trade-Offs by Wen Song, Julia E. Vidonish, Roopa Kamath, Pingfeng Yu, Chun Chu, Bhagavatula Moorthy, Baoyu Gao, Kyriacos Zygourakis, and Pedro J. J. Alvarez. Environ. Sci. Technol., 2019, 53 (4), pp 2045–2053 DOI: 10.1021/acs.est.8b05825 Publication Date (Web): January 25, 2019

Copyright © 2019 American Chemical Society

This paper is behind a paywall.

Nanoparticle computing

I’m fascinated with this news and I’m pretty sure it’s my first exposure to nanoparticle computing, so I am quite excited about this ‘discovery of mine’.

A February 25, 2019 news item on Nanowerk announces the research from Korean scientists,

Computation is a ubiquitous concept in physical sciences, biology, and engineering, where it provides many critical capabilities. Historically, there have been ongoing efforts to merge computation with “unusual” matters across many length scales, from microscopic droplets (Science 315, 832, 2007) to DNA nanostructures (Science 335, 831, 2012; Nat. Chem. 9, 1056, 2017) and molecules (Science 266, 1021, 1994; Science 314, 1585, 2006; Nat. Nanotech. 2, 399, 2007; Nature 375, 368, 2011).

However, the implementation of complex computation in particle systems, especially in nanoparticles, remains challenging, despite a wide range of potential applications that would benefit from algorithmically controlling their unique and potentially useful intrinsic features (such as photonic, plasmonic, catalytic, photothermal, optoelectronic, electrical, magnetic and material properties) without human intervention.

This challenge is not due to a lack of sophistication in the current state of the art of stimuli-responsive nanoparticles, many of which can conceptually function as elementary logic gates. It is mostly due to the lack of scalable architectures that would enable systematic integration and wiring of the gates into a large integrated circuit.

Previous approaches are limited to (i) demonstrating one simple logic operation per test tube or (ii) relying on complicated enzyme-based molecular circuits in solution. It should be also noted that modular and scalable aspects are key challenges in DNA computing for practical and widespread use.

A February 23, 2019 Seoul National University press release on EurekAlert, which originated the news item, dives into more detail,

In nature, the cell membrane is analogous to a circuit board, as it organizes a wide range of biological nanostructures (e.g. proteins) as (computational) units and allows them to dynamically interact with each other on the fluidic 2D surface to carry out complex functions as a network and often induce intracellular signaling cascades. For example, membrane proteins take chemical/physical cues as inputs (e.g. binding with chemical agents, mechanical stimuli) and change their conformations and/or dimerize as outputs. Most importantly, such biological “computing” processes occur in a massively parallel fashion. Information processing on living cell membranes is a key to how biological systems adapt to changes in external environments.

This manuscript reports the development of a nanoparticle-lipid bilayer hybrid-based computing platform termed lipid nanotablet (LNT), in which nanoparticles, each programmed with surface chemical ligands (DNA in this case), are tethered to a supported lipid bilayer to carry out computation. Taking inspiration from parallel computing processes on cellular membranes, we exploited supported lipid bilayers (SLBs) – synthetic mimics for cell surfaces – as chemical circuit boards to construct nanoparticle circuits. This “nano-bio” computing, which occurs at the interface of nanostructures and biomolecules, translates molecular information in solution (input) into dynamic assembly/disassembly of nanoparticles on a lipid bilayer (output).

We introduced two types of nanoparticles to a lipid bilayer that differ in mobility: mobile Nano-Floaters and immobile Nano-Receptors. Due to their high mobility, floaters actively interact with receptors across space and time, functioning as active units of computation. The nanoparticles are functionalized with specially designed DNA [deoxyribonucleic acid] ligands, and the surface ligands render receptor-floater interactions programmable, thereby transforming a pair of receptor and floater into a logic gate. A nanoparticle logic gate takes DNA strands in solution as inputs and generates nanoparticle assembly or disassembly events as outputs. The nanoparticles and their interactions can be imaged and tracked by dark-field microscopy with single-nanoparticle resolution because of strong and stable scattering signals from plasmonic nanoparticles. Using this approach (termed “interface programming”), we first demonstrated that a pair of nanoparticles (that is, two nanoparticles on a lipid bilayer) can carry out AND, OR, INHIBIT logic operations and take multiple inputs (fan-in) and generate multiple outputs (fan-out). Also, multiple logic gates can be modularly wired with AND or OR logic via floaters, as the mobility of floaters enables information cascades among several nanoparticle logic gates. We termed this strategy “network programming.” By combining these two strategies (interface and network programming), we were able to implement complex logic circuits such as a multiplexer.
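
As an aside, here’s a toy sketch (in Python) of what tracking nanoparticles “at the single-particle level” can boil down to computationally: calling an assembly event when a mobile floater sits within some capture radius of an immobile receptor for several consecutive frames. To be clear, this is not the authors’ imaging software; the thresholds, coordinates and data format are all illustrative assumptions.

```python
# Toy illustration of single-particle analysis on a lipid nanotablet-style experiment.
# This is NOT the authors' software: it only sketches the idea of calling an
# "assembly" event when a mobile floater remains within a capture radius of an
# immobile receptor for several consecutive dark-field frames. All thresholds,
# coordinates, and the data format are illustrative assumptions.

import math

CAPTURE_RADIUS_NM = 50.0   # assumed distance below which a pair looks assembled
MIN_FRAMES = 3             # assumed number of consecutive frames required

def assembled_frames(receptor_xy, floater_track):
    """Longest run of consecutive frames the floater spends near the receptor."""
    longest = current = 0
    for (x, y) in floater_track:
        dist = math.hypot(x - receptor_xy[0], y - receptor_xy[1])
        current = current + 1 if dist <= CAPTURE_RADIUS_NM else 0
        longest = max(longest, current)
    return longest

def is_assembly_event(receptor_xy, floater_track):
    return assembled_frames(receptor_xy, floater_track) >= MIN_FRAMES

# Example: one receptor and one floater track (nm coordinates, one pair per frame).
receptor = (0.0, 0.0)
floater = [(400.0, 120.0), (180.0, 60.0), (40.0, 20.0), (25.0, 10.0), (30.0, 15.0)]
print("assembly event detected:", is_assembly_event(receptor, floater))
```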

The most important contributions of our paper are the concept itself and the major advances in modular and scalable molecular computing (DNA computing in this case). The LNT platform, for the first time, introduces the idea of using lipid bilayer membranes as key components for information processing. As the two-dimensional (2D) fluidic lipid membrane is bio-compatible and chemically modifiable, any nanostructures can be potentially introduced and used as computing units. When tethered to the lipid bilayer “chip”, these nanostructures can be visualized and become controllable at the single-particle level; this dimensionality reduction, bringing the nanostructures from the freely diffusible solution phase (3D) to the fluidic membrane (2D), transforms a collection of nanostructures into a programmable, analyzable reaction network. Moreover, we developed a digitized imaging method and software for quantitative and massively parallel analysis of interacting nanoparticles. In addition, the LNT platform provides many practical merits over the current state of the art in molecular computing and nanotechnology. On LNT platforms, a network of nanoparticles (each with unique and beneficial properties) can be designed to respond autonomously to molecular information; such a capability to algorithmically control nanoparticle networks will be very useful for addressing many challenges with molecular computing and developing new computing platforms. As the title of our manuscript suggests, this nano-bio computing will lead to exciting opportunities in biocomputation, nanorobotics, DNA nanotechnology, artificial bio-interfaces, smart biosensors, molecular diagnostics, and intelligent nanomaterials. In summary, the operating and design principles of the lipid nanotablet platform are as follows:

(1) LNT uses single nanoparticles as units of computation. By tracking numerous nanoparticles and their actions with dark-field microscopy at the single-particle level, we could treat a single nanoparticle as a two-state device representing a bit. A nanoparticle provides a discrete, in situ optical readout of its interaction (e.g. association or dissociation) with another particle as an output of logic computation.

(2) Nanoparticles on LNT function as Boolean logic gates. We exploited the programmable bonding interaction within particle-particle interfaces to transform two interacting nanoparticles into a Boolean logic gate. The gate senses single-stranded DNA as inputs and triggers an assembly or disassembly reaction of the pair as an output. We demonstrated two-input AND, two-input OR and INHIBIT logic operations, and fan-in/fan-out of logic gates.

(3) LNT enables modular wiring of multiple nanoparticle logic gates into a combinational circuit. We exploited parallel, single-particle imaging to program nanoparticle networks and thereby wire multiple logic gates into a combinational circuit. We demonstrate a multiplexer MUX2to1 circuit built from the network wiring rules.
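
If you’d like to see how those two-input gates compose into the multiplexer the team reports, here’s a minimal Boolean sketch in Python. It models only the logic, not the chemistry: on the actual lipid nanotablet the inputs are DNA strands added to solution and a ‘true’ output is an observed assembly (or disassembly) event. The AND, OR and INHIBIT gates and the MUX2to1 circuit are named in the press release; the particular wiring below is the standard textbook one, which the paper’s circuit may or may not match exactly.

```python
# Boolean sketch of the nanoparticle logic described above. This models only the
# logic, not the chemistry: on the lipid nanotablet each gate is a receptor-floater
# pair, the inputs are DNA strands added to solution, and a True output corresponds
# to an observed assembly (or disassembly) event. The gate set and the MUX2to1
# circuit are named in the press release; the wiring below is the standard one.

def gate_and(a: bool, b: bool) -> bool:
    # Output (assembly) only when both input strands are present.
    return a and b

def gate_or(a: bool, b: bool) -> bool:
    # Output when either input strand is present.
    return a or b

def gate_inhibit(a: bool, b: bool) -> bool:
    # Output when strand a is present, unless inhibitor strand b is also present.
    return a and not b

def mux_2to1(select: bool, in0: bool, in1: bool) -> bool:
    # Route in1 to the output when select is on, otherwise route in0.
    return gate_or(gate_inhibit(in0, select), gate_and(in1, select))

for select in (False, True):
    for in0 in (False, True):
        for in1 in (False, True):
            print(f"select={select:d} in0={in0:d} in1={in1:d} -> out={mux_2to1(select, in0, in1):d}")
```

Running it prints the full truth table: when the select input is off the output follows in0, and when it’s on the output follows in1, which is exactly what a 2-to-1 multiplexer should do.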

Here’s a link to and a citation for the team’s latest paper,

Nano-bio-computing lipid nanotablet by Jinyoung Seo, Sungi Kim, Ha H. Park, Da Yeon Choi, and Jwa-Min Nam. Science Advances 22 Feb 2019: Vol. 5, no. 2, eaau2124 DOI: 10.1126/sciadv.aau2124

This paper appears to be open access.