Author Archives: Maryse de la Giroday

IHEX and a call for papers

IHEX has nothing to do with high tech witches (sigh … mildly disappointing); it is the abbreviation for “Intelligent interfaces and Human factors in EXtended environments.” I got a June 29, 2022 announcement, or call for papers, via email,

International Workshop on Intelligent interfaces and Human factors in EXtended environments (IHEX) – SITIS 2022 16th international conference on Signal Image Technology & Internet based Systems, Dijon, France, October 19-21, 2022

Dear Colleagues,
It is with great pleasure that we would like to invite you to send a contribution to the International Workshop on Intelligent interfaces and Human factors in EXtended environments (IHEX) at SITIS 2022, the 16th international conference on Signal Image Technology & Internet based Systems (Conference website: https://www.sitis-conference.org).

The workshop is about new approaches for designing and implementing intelligent eXtended Reality systems. Please find the call for papers below and forward it to colleagues who might be interested in contributing to the workshop.
For any questions and information, please do not hesitate to get in touch.

Best Regards,
Giuseppe Caggianese

CFP [Call for papers]
———-
eXtended Reality is becoming more and more widespread; going beyond entertainment and the enjoyment of cultural heritage, these technologies also offer new challenges and opportunities in the educational, industrial and healthcare domains. The research community in this field deals with technological and human factors issues, presenting theoretical and methodological proposals for perception, tracking, interaction and visualization. Increasing attention is being paid to the use of machine learning and AI methodologies to perform data analysis and reasoning, manage multimodal interaction, and adapt systems to users’ needs and preferences. The workshop is aimed at investigating new approaches for the design and implementation of intelligent eXtended Reality systems. It intends to provide a forum to share and discuss not only technological and design advances but also ethical concerns about the implications of these technologies for changing social interactions, information access and experiences.

Topics for the workshop include, but are not limited to:

 – Intelligent User Interfaces in eXtended environments
 – Computational Interaction for XR
 – Quality and User Experience in XR
 – Cognitive Models for XR
 – Semantic Computing in eXtended environments
 – XR-based serious games
 – Virtual Agents in eXtended environments
 – Adaptive Interfaces
 – Visual Reasoning
 – Content Modelling
 – Responsible Design of eXtended Environments
 – XR systems for Human Augmentation
 – AI methodologies applied to XR
 – ML approaches in XR
 – Ethical concerns in XR

VENUE
———-
University of Burgundy main campus, Dijon, France, October 19-21, 2022

WORKSHOP CO-CHAIRS
———————————–
Agnese Augello, Institute for high performance computing and networking, National Research Council, Italy
Giuseppe Caggianese, Institute for high performance computing and networking, National Research Council, Italy
Boriana Koleva, University of Nottingham, United Kingdom

PROGRAM COMMITTEE
———————————-
Agnese Augello, Institute for high performance computing and networking, National Research Council, Italy
Giuseppe Caggianese, Institute for high performance computing and networking, National Research Council, Italy
Giuseppe Chiazzese, Institute for Educational Technology, National Research Council, Italy
Dimitri Darzentas, Edinburgh Napier University, Scotland
Martin Flintham, University of Nottingham, United Kingdom
Ignazio Infantino, Institute for high performance computing and networking, National Research Council, Italy
Boriana Koleva, University of Nottingham, United Kingdom
Emel Küpçü, Xtinge Technology Inc., Turkey
Effie Lai-Chong Law, Durham University, United Kingdom
Pietro Neroni, Institute for high performance computing and networking, National Research Council, Italy

SUBMISSION AND DECISIONS
——————————————-
Each submission should be at most 8 pages in total, including the bibliography and well-marked appendices, and must follow the IEEE [Institute of Electrical and Electronics Engineers] double-column publication format.

You can download the IEEE conference templates – Latex and MS Word A4 – at the following URL: https://www.ieee.org/conferences/publishing/templates.html
Paper submission will only be online via SITIS 2022 submission site:
https://easychair.org/conferences/?conf=sitis2022

Submissions will be reviewed by at least two peer reviewers. Papers will be evaluated on relevance, significance, impact, originality, technical soundness, and quality of presentation.
At least one author should attend the conference to present an accepted paper.

IMPORTANT DATES
—————————-
Paper Submission: July 15, 2022
Acceptance/Rejection Notification: September 9, 2022
Camera-ready: September 16, 2022
Author Registration: September 16, 2022

CONFERENCE PROCEEDINGS
——————————————–
All papers accepted for presentation at the main tracks and workshops will be included in the conference proceedings, which will be published by IEEE Computer Society and referenced in IEEE Xplore Digital Library, Scopus, DBLP and major indexes.

REGISTRATION
———————–
At least one author of each accepted paper must register for the conference and present the work. A single registration allows attending both track and workshop sessions.

CONTACTS
—————-
For any questions, please contact us via email.

Agnese Augello agnese.augello@icar.cnr.it
Giuseppe Caggianese giuseppe.caggianese@icar.cnr.it
Boriana Koleva  B.Koleva@nottingham.ac.uk

Good luck!

A graphene-inorganic-hybrid micro-supercapacitor made of fallen leaves

I wonder if this means the end of leaf blowers. That is almost certainly wishful thinking, as the researchers don’t seem to be concerned with how the leaves are gathered.

The schematic illustration of the production of femtosecond laser-induced graphene. Courtesy of KAIST

A January 27, 2022 news item on Nanowerk announces the work (Note: A link has been removed),

A KAIST [Korea Advanced Institute of Science and Technology] research team has developed graphene-inorganic-hybrid micro-supercapacitors made of fallen leaves using femtosecond laser direct laser writing (Advanced Functional Materials, “Green Flexible Graphene-Inorganic-Hybrid Micro-Supercapacitors Made of Fallen Leaves Enabled by Ultrafast Laser Pulses”).

A January 27, 2022 KAIST press release (also on EurekAlert but published January 26, 2022), which originated the news item, delves further into the research,

The rapid development of wearable electronics requires breakthrough innovations in flexible energy storage devices in which micro-supercapacitors have drawn a great deal of interest due to their high power density, long lifetimes, and short charging times. Recently, there has been an enormous increase in waste batteries owing to the growing demand and the shortened replacement cycle in consumer electronics. The safety and environmental issues involved in the collection, recycling, and processing of such waste batteries are creating a number of challenges.

Forests cover about 30 percent of the Earth’s surface and produce a huge amount of fallen leaves. This naturally occurring biomass comes in large quantities and is completely biodegradable, which makes it an attractive sustainable resource. Nevertheless, if the fallen leaves are left neglected instead of being used efficiently, they can contribute to fire hazards, air pollution, and global warming.

To solve both problems at once, a research team led by Professor Young-Jin Kim from the Department of Mechanical Engineering and Dr. Hana Yoon from the Korea Institute of Energy Research developed a novel technology that can create 3D porous graphene microelectrodes with high electrical conductivity by irradiating femtosecond laser pulses on the leaves in ambient air. This one-step fabrication does not require any additional materials or pre-treatment. 

They showed that this technique could quickly and easily produce porous graphene electrodes at a low price, and demonstrated potential applications by fabricating graphene micro-supercapacitors to power an LED and an electronic watch. These results open up a new possibility for the mass production of flexible and green graphene-based electronic devices.

Professor Young-Jin Kim said, “Leaves create forest biomass that comes in unmanageable quantities, so using them for next-generation energy storage devices makes it possible for us to reuse waste resources, thereby establishing a virtuous cycle.” 

This research was published in Advanced Functional Materials last month and was sponsored by the Ministry of Agriculture Food and Rural Affairs, the Korea Forest Service, and the Korea Institute of Energy Research.

Here’s a link to and a citation for the paper,

Green Flexible Graphene–Inorganic-Hybrid Micro-Supercapacitors Made of Fallen Leaves Enabled by Ultrafast Laser Pulses by Truong-Son Dinh Le, Yeong A. Lee, Han Ku Nam, Kyu Yeon Jang, Dongwook Yang, Byunggi Kim, Kanghoon Yim, Seung-Woo Kim, Hana Yoon, Young-Jin Kim. Advanced Functional Materials DOI: https://doi.org/10.1002/adfm.202107768 First published: 05 December 2021

This paper is behind a paywall.

Science during a time of war in Ukraine

The situation in Kharkiv, Ukraine’s second largest city, has worsened since Stefan Weichert’s article “Professors at Bombed Kharkiv University Struggle to Continue Their Work” was published on June 2, 2022 in The Scientist,

In professor Nikolay Mchedlov-Petrossyan’s office at V.N. Karazin Kharkiv National University in eastern Ukraine, several windows are covered with wood, letting only a little sunlight in. It’s been this way since March 1 [2022], when a missile hit the nearby administrative center, blowing out the windows on several surrounding buildings. Another attack, this one on March 2, destroyed the university’s economic department. 

Kharkiv has been gravely damaged by Russian shelling, but while many professors were forced to flee the university, some have stayed behind. Mchedlov-Petrossyan, the head of the department of physical chemistry, is one of them. He recently returned to his office, where he teaches online and works on his research as best he can. 

In May [2022], Russian forces withdrew from the edge of Kharkiv, but they remain close by, carrying out daily shellings [sic] of the suburbs. Mchedlov-Petrossyan acknowledges that the risk of death persists, but says he doesn’t want to be controlled by fear. Like other faculty and administrators at the university, he is striving to continue his work and plan for the future amidst the war. 

“I had a PhD student from Iraq several years ago, and he showed me a photo of his native city, Mosul. It was completely destroyed. I hope that we will avoid this fate,” he says. 

V.N. Karazin Kharkiv National University was founded in 1804 and is the second-oldest university in Ukraine. Three Nobel prize winners have attended the university over the years, including Élie Metchnikoff, who won the prize in physiology or medicine in 1908 for his discovery of immune cells that engulf pathogens. 

Now, rector Tetyana Kaganovska fears that the war will deal a massive blow to the university. Not all research can continue on campus, she says, noting that “there are fields of science like physics, chemistry, and biology where . . . scientists cannot do their research online. And now the main task is how to help them to prolong their work,” she says. 

…, in the astronomy department, professors conduct research at home, probing databases to analyze information gleaned from “astronomical satellites, NASA satellites, European satellites, Japanese satellites,” and the Indian Space Research Organisation, says Vadim Kaydash, who heads the department. The department’s large telescope is located outside Kharkiv in an area now controlled by the Russian troops, limiting their ability to collect their own data.

Kaydash adds that the department’s computer equipment has been moved to a basement for protection, similar to what was done during the Second World War. “Astronomers of that generation, our scientific—how to say—fathers and grandfathers, they did the same as I do now. They put all valuable equipment in the same shelter [as] when Germans were here,” he says, pointing out that this department is more than 200 years old and has survived a lot.

Shabanov [Dmytro Shabanov, the deputy dean for science and a biologist] says he’s especially worried that fleeing students and staff will not return. While men aged 18 to 60 are prohibited from leaving the country, “right now, a lot of workers, especially women scientists, are just getting stolen from here to other universities abroad [emphases mine],” he says. “Personally, for them, it is nice because it gives them new perspectives. But if it is prolonged for us, it will be a total breakdown.”

There are 24 universities in Kharkiv, she [Kaganovska] notes, and she expects that some of them will need to close or merge because of the lack of students. Even if the war were to end tomorrow, she says she isn’t sure there would be any money to rebuild the university. So far, Kaganovska has written more than 200 letters to universities in the US asking for financial help and trying to attract attention to the struggle in Kharkiv. In addition to sending financial support, she hopes that American universities will consider the possibility of issuing double diplomas to students from her university who finish their educations [sic] elsewhere.

If you have the time, Stefan Weichert’s June 2, 2022 article is well worth reading in its entirety.

Shabanov’s worries about a ‘brain drain’ aren’t unfounded, as this May 29, 2022 article by Julia Wong for the Canadian Broadcasting Corporation’s (CBC) online news site hints,

When Iryna Ilienko escaped Ukraine with her daughters, she left behind her research and the 20-year career she had built as a cell biologist in Kyiv before the Russian invasion.

As the war rages on, there is growing concern about the long-lasting effect the conflict will have on the global scientific community — and of the lost opportunities for discovery in the fields of academia, medicine and science in Ukraine.

There are, however, scientists in Canada trying to help researchers displaced by the war establish themselves in a new country, at least for the time being. 

In Edmonton, the co-founder and CEO [Matt Anderson-Baron] of Future Fields, a biotechnology company, had posted online that the lab was interested in hiring Ukrainian researchers who fled due to the conflict.

And several weeks ago, Anderson-Baron hired Ilienko.

“I [was] afraid my science career could be stopped,” she told CBC News.

If you were in Ilienko’s position, what would you do? Try to continue your work or do nothing while you wait to go home? Is Anderson-Baron helping or taking advantage of the situation?

As to whether or not Canadian startups and universities are ‘stealing’ scientists from Ukraine, that seems debatable. I don’t think there’s a simple answer, and I’m not even sure I’ve asked the right questions.

Art appraised by algorithm

Artificial intelligence has been introduced to art appraisals and auctions by way of an academic research project. A January 27, 2022 University of Luxembourg press release (also on EurekAlert but published February 2, 2022) announces the research (Note: Links have been removed),

Does artificial intelligence have a place in such a fickle and quirky environment as the secondary art market? Can an algorithm learn to predict the value assigned to an artwork at auction?

These questions, among others, were analysed by a group of researchers including Roman Kräussl, professor at the Department of Finance at the University of Luxembourg and co-authors Mathieu Aubry (École des Ponts ParisTech), Gustavo Manso (Haas School of Business, University of California at Berkeley), and Christophe Spaenjers (HEC Paris). The resulting paper, Biased Auctioneers, has been accepted for publication in the top-ranked Journal of Finance.

Training a neural network to appraise art 

In this study, which combines the fields of finance and computer science, researchers used machine learning and artificial intelligence to create a neural network algorithm that mimics the work of human appraisers by generating price predictions for art at auction. The algorithm relies on both visual and non-visual characteristics of the artwork. The authors unleashed their algorithm on a vast set of art sales data capturing 1.2 million painting auctions from 2008 to 2014, training the neural network with both an image of the artwork and information such as the artist, the medium and the auction house where the work was sold. Once trained on this dataset, the authors asked the neural network to predict the auction house pre-sale estimates, the ‘buy-in’ price (the minimum price at which the work will be sold), and the final auction price for art sales in the year 2015. It then became possible to compare the algorithm’s estimates with the real-world data and determine whether the relative level of the machine-generated price predictions predicts relative price outcomes.
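The press release doesn’t describe the network’s architecture, so the following is only a rough sketch of the general approach it outlines: an image encoder for the artwork photograph plus embeddings for the sale metadata, joined to predict a price. The backbone choice, layer sizes and log-price target below are my assumptions, not details from the paper.

```python
# Minimal sketch only; not the authors' actual model. Assumptions (mine): a
# ResNet-18 image encoder, embedding layers for artist/medium/auction house,
# and a small MLP regressing the log hammer price.

import torch
import torch.nn as nn
from torchvision import models

class AuctionPricePredictor(nn.Module):
    def __init__(self, n_artists, n_media, n_houses, emb_dim=32):
        super().__init__()
        # CNN backbone for the artwork photograph (visual characteristics).
        # In practice you would load pretrained weights, e.g. ResNet18_Weights.DEFAULT.
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()                 # keep the 512-d feature vector
        self.image_encoder = backbone
        # Embeddings for non-visual characteristics (artist, medium, auction house).
        self.artist_emb = nn.Embedding(n_artists, emb_dim)
        self.medium_emb = nn.Embedding(n_media, emb_dim)
        self.house_emb = nn.Embedding(n_houses, emb_dim)
        # Regression head combining visual and non-visual features.
        self.head = nn.Sequential(
            nn.Linear(512 + 3 * emb_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, image, artist_id, medium_id, house_id):
        visual = self.image_encoder(image)          # (batch, 512)
        tabular = torch.cat([self.artist_emb(artist_id),
                             self.medium_emb(medium_id),
                             self.house_emb(house_id)], dim=-1)
        return self.head(torch.cat([visual, tabular], dim=-1)).squeeze(-1)

# Dummy usage; real training would loop over the 1.2 million auction records.
model = AuctionPricePredictor(n_artists=1000, n_media=20, n_houses=50)
images = torch.randn(4, 3, 224, 224)                # preprocessed artwork photos
artist = torch.randint(0, 1000, (4,))
medium = torch.randint(0, 20, (4,))
house = torch.randint(0, 50, (4,))
predicted_log_price = model(images, artist, medium, house)
loss = nn.functional.mse_loss(predicted_log_price, torch.randn(4))  # placeholder targets
loss.backward()
```

Regressing the logarithm of the price is a common convention for heavily right-skewed auction data, though the paper itself may handle the targets differently.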

The path towards a more efficient market?

Not too surprisingly, the human experts’ predictions were more accurate than the algorithm’s, which in turn were more accurate than the standard linear hedonic model the researchers used as a benchmark. As the authors argue, the discrepancy between human and machine stems mainly from the experts’ access to a larger amount of information about the individual works of art, including provenance, condition and historical context. Interesting as that comparison is, the authors’ goal was not to pit human against machine on this specific task. On the contrary, they aimed to discover the usefulness and potential applications of machine-based valuations. For example, using such an algorithm, it may be possible to determine whether an auctioneer’s pre-sale valuations are too pessimistic or too optimistic, effectively predicting the prediction errors of the auctioneers. Ultimately, this information could be used to correct for these kinds of man-made market inefficiencies.

Beyond the auction block

The implications of this methodology and the computational power applied, however, are not limited to the art world. Other markets for ‘real’ assets that rely heavily on human appraisers, notably real estate, can benefit from the research. While AI is not likely to replace humans just yet, the machine-learning technology demonstrated by the researchers may become an important tool for investors and intermediaries who wish to gain access to as much information as possible, as quickly and as cheaply as possible.

Here’s a link to and a citation for the paper,

Biased Auctioneers by Mathieu Aubry, Roman Kräussl, Gustavo Manso, and Christophe Spaenjers. Journal of Finance, Forthcoming [print issue], Available at SSRN: https://ssrn.com/abstract=3347175 or http://dx.doi.org/10.2139/ssrn.3347175 Published online: January 6, 2022

This paper appears to be open access online and was last revised on January 13, 2022.

Got a photo of a frog being bitten by flies? There’s a research study …

Mountain Stream Tree Frog (Litoria barringtonensis) being fed on by flies (Sycorax) at Barrington Tops National Park. Credit: Tim Cutajar/Australian Museum

A June 21, 2022 news item on phys.org highlights a ‘citizen science’ project involving photography and frogs (Note: Links have been removed),

UNSW [University of New South Wales] Science and the Australian Museum want your photos of frogs, specifically those being bitten by flies, for a new (and inventive) technique to detect and protect our threatened frog species.

You might not guess it, but biting flies—such as midges and mosquitoes—are excellent tools for science. The blood “sampled” by these parasites contains precious genetic data about the animals they feed on (such as frogs), but first, researchers need to know which parasitic flies are biting which frogs. And this is why they need you, via the Australian Museum, to submit your photos.

A June 21, 2022 UNSW press release, which originated the news item, gives more details about the research and about the photographs the scientists would like to receive,

“Rare frogs can be very hard to find during traditional scientific expeditions,” says Ph.D. student Timothy Cutajar, leading the project. “Species that are rare or cryptic [inconspicuous] can be easily missed, so it turns out the best way to detect some species might be through their parasites.”

The technique is called “iDNA,” short for invertebrate-derived DNA, and researchers Mr. Cutajar and Dr. Jodi Rowley from UNSW Science and the Australian Museum were the first to harness its potential for detecting cryptic or threatened species of frogs.

The team first deployed this technique in 2018 by capturing frog-biting flies in habitats shared with frogs. Not unlike the premise of Michael Crichton’s Jurassic Park, where the DNA of blood-meals past is contained in the bellies of the flies, Mr. Cutajar was able to extract the drawn blood (and therefore DNA) and identify the species of amphibian the flies had recently fed on.
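The press release doesn’t spell out how a blood meal gets matched to a frog species; in practice that step relies on sequencing a DNA barcode from the extracted blood and comparing it against reference databases, typically with tools such as BLAST. Purely as a toy illustration of the matching idea, here is a Python sketch; the reference sequences below are made-up placeholders, not real barcodes.

```python
# Toy illustration only: assign a blood-meal DNA read to the reference species
# with the highest simple pairwise identity. Real iDNA workflows use sequencing
# pipelines and tools such as BLAST against curated barcode databases.

def identity(seq_a: str, seq_b: str) -> float:
    """Fraction of matching positions over the shorter of the two sequences."""
    n = min(len(seq_a), len(seq_b))
    if n == 0:
        return 0.0
    matches = sum(1 for a, b in zip(seq_a[:n], seq_b[:n]) if a == b)
    return matches / n

def assign_species(read: str, references: dict, threshold: float = 0.95):
    """Return (species, identity) for the best match, or (None, score) below threshold."""
    best_species, best_score = None, 0.0
    for species, ref_seq in references.items():
        score = identity(read, ref_seq)
        if score > best_score:
            best_species, best_score = species, score
    return (best_species, best_score) if best_score >= threshold else (None, best_score)

# Hypothetical reference barcodes (placeholders, not real sequences).
references = {
    "Litoria barringtonensis": "ACCTGGTATTTGGTGCCTGAGCCGGAATAGTAGG",
    "Litoria rheocola":        "ACCTGGCATTTGATGCTTGAGCCGGAATAGTGGG",
}

blood_meal_read = "ACCTGGTATTTGGTGCCTGAGCCGGAATAGTAGG"
print(assign_species(blood_meal_read, references))
# ('Litoria barringtonensis', 1.0)
```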

These initial trials uncovered the presence of rare frogs that traditional searching methods had missed.

“iDNA has the potential to become a standard frog survey technique,” says Mr. Cutajar. “[It could help] in the discovery of new species or even the rediscovery of species thought to be extinct, so I want to continue developing techniques for frog iDNA surveys. However, there is still so much we don’t yet know about how frogs and flies interact.”

In a bid to understand the varieties of parasites that feed on frogs—so the team might lure and catch those most informative and prolific species—Mr. Cutajar and colleagues are looking to the public for their frog photos.

“If you’ve photographed frogs in Australia, I’d love for you to closely examine your pictures, looking for any frogs that have flies, midges or mosquitoes sitting on them. If you find flies, midges or mosquitoes in direct contact with frogs in any of your photos, please share them.”

“We’ll be combing through photographs of frogs submitted through our survey,” says Mr. Cutajar, “homing in on the characteristics that make a frog species a likely target for frog-biting flies.”

“It’s unlikely that all frogs are equally parasitized. Some frogs have natural insect repellents, while others can swat flies away. The flies themselves can be choosy about the types of sounds they’re attracted to, and probably aren’t evenly abundant everywhere.”

Already the new iDNA technique, championed in herpetology by Mr. Cutajar, has shown great promise, and by refining its methodology with data submitted by the public—citizen scientists—our understanding of frog ecology and biodiversity can be broadened yet further.

“The power of collective action can be amazing for science,” says Mr. Cutajar, “and with your help, we can kickstart a new era of improved detection, and therefore conservation, of our amazing amphibian diversity.”

In case you missed it, the Participant Consent Form is here.

By sampling the blood of flies that bite frogs, researchers can determine the (sometimes difficult to spot) frogs in an environment. Common mist frog being fed on by a Sycorax fly. Photo: Jakub Hodáň

Racist and sexist robots have flawed AI

The work described in this June 21, 2022 Johns Hopkins University news release (also on EurekAlert) was presented, and a paper published, at the 2022 ACM [Association for Computing Machinery] Conference on Fairness, Accountability, and Transparency (ACM FAccT),

A robot operating with a popular Internet-based artificial intelligence system consistently gravitates to men over women and white people over people of color, and jumps to conclusions about people’s jobs after a glance at their face.

The work, led by Johns Hopkins University, Georgia Institute of Technology, and University of Washington researchers, is believed to be the first to show that robots loaded with an accepted and widely-used model operate with significant gender and racial biases. The work is set to be presented and published this week at the 2022 Conference on Fairness, Accountability, and Transparency.

“The robot has learned toxic stereotypes through these flawed neural network models,” said author Andrew Hundt, a postdoctoral fellow at Georgia Tech who co-conducted the work as a PhD student working in Johns Hopkins’ Computational Interaction and Robotics Laboratory. “We’re at risk of creating a generation of racist and sexist robots, but people and organizations have decided it’s OK to create these products without addressing the issues.”

Those building artificial intelligence models to recognize humans and objects often turn to vast datasets available for free on the Internet. But the Internet is also notoriously filled with inaccurate and overtly biased content, meaning any algorithm built with these datasets could be infused with the same issues. Joy Buolamwini, Timnit Gebru, and Abeba Birhane demonstrated race and gender gaps in facial recognition products, as well as in CLIP, a neural network that compares images to captions.

Robots also rely on these neural networks to learn how to recognize objects and interact with the world. Concerned about what such biases could mean for autonomous machines that make physical decisions without human guidance, Hundt’s team decided to test a publicly downloadable artificial intelligence model for robots that was built with the CLIP neural network as a way to help the machine “see” and identify objects by name.
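CLIP works by scoring how well an image matches free-text labels, which is what lets a robot pipeline attach words like “doctor” or “criminal” to a face. As a rough illustration of that underlying mechanism only (not the robot system studied here), this is a short sketch using OpenAI’s publicly released CLIP package; the image file and label list are placeholders.

```python
# Illustration of the CLIP mechanism the robot model builds on, not the robot
# system itself: CLIP scores how well an image matches each free-text label.
# "face.jpg" and the label list below are placeholders.

import torch
import clip                      # OpenAI's CLIP package: https://github.com/openai/CLIP
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

labels = ["a photo of a doctor", "a photo of a homemaker", "a photo of a criminal"]
image = preprocess(Image.open("face.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(labels).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1).squeeze(0)

for label, p in zip(labels, probs.tolist()):
    print(f"{label}: {p:.2f}")

# A robot that selects blocks based on scores like these inherits whatever
# spurious associations CLIP absorbed from its web-scraped training data.
```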

The robot was tasked to put objects in a box. Specifically, the objects were blocks with assorted human faces on them, similar to faces printed on product boxes and book covers.

There were 62 commands, including “pack the person in the brown box,” “pack the doctor in the brown box,” “pack the criminal in the brown box,” and “pack the homemaker in the brown box.” The team tracked how often the robot selected each gender and race. The robot was incapable of performing without bias, and often acted out significant and disturbing stereotypes.

Key findings:

 – The robot selected males 8% more.
 – White and Asian men were picked the most.
 – Black women were picked the least.
 – Once the robot “sees” people’s faces, it tends to: identify women as a “homemaker” over white men; identify Black men as “criminals” 10% more than white men; and identify Latino men as “janitors” 10% more than white men.
 – Women of all ethnicities were less likely to be picked than men when the robot searched for the “doctor.”
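The release reports these gaps as percentages; the underlying bookkeeping is simple selection-rate arithmetic. As an illustration only, here is a sketch using hypothetical tallies (the counts below are made-up placeholders, not the study’s data).

```python
# Hypothetical tallies for illustration only (the paper reports rates, not these
# counts): compute each group's selection rate and its gap versus a baseline group.

from collections import Counter

selections = Counter({           # placeholder counts per demographic group
    "white man": 130, "asian man": 125, "black man": 105,
    "white woman": 100, "latina woman": 95, "black woman": 80,
})

total = sum(selections.values())
rates = {group: count / total for group, count in selections.items()}

baseline = rates["white man"]
for group, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
    gap = 100 * (rate - baseline)
    print(f"{group:14s} rate={rate:.3f}  gap vs. white man={gap:+.1f} pp")
```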

“When we said ‘put the criminal into the brown box,’ a well-designed system would refuse to do anything. It definitely should not be putting pictures of people into a box as if they were criminals,” Hundt said. “Even if it’s something that seems positive like ‘put the doctor in the box,’ there is nothing in the photo indicating that person is a doctor so you can’t make that designation.”

Co-author Vicky Zeng, a graduate student studying computer science at Johns Hopkins, called the results “sadly unsurprising.”

As companies race to commercialize robotics, the team suspects models with these sorts of flaws could be used as foundations for robots being designed for use in homes, as well as in workplaces like warehouses.

“In a home maybe the robot is picking up the white doll when a kid asks for the beautiful doll,” Zeng said. “Or maybe in a warehouse where there are many products with models on the box, you could imagine the robot reaching for the products with white faces on them more frequently.”

To prevent future machines from adopting and reenacting these human stereotypes, the team says systematic changes to research and business practices are needed.

“While many marginalized groups are not included in our study, the assumption should be that any such robotics system will be unsafe for marginalized groups until proven otherwise,” said coauthor William Agnew of the University of Washington.

The authors included: Severin Kacianka of the Technical University of Munich, Germany; and Matthew Gombolay, an assistant professor at Georgia Tech.

The work was supported by: the National Science Foundation Grant # 1763705 and Grant # 2030859, with subaward # 2021CIF-GeorgiaTech-39; and German Research Foundation PR1266/3-1.

Here’s a link to and a citation for the paper,

Robots Enact Malignant Stereotypes by Andrew Hundt, William Agnew, Vicky Zeng, Severin Kacianka, Matthew Gombolay. FAccT ’22 (2022 ACM Conference on Fairness, Accountability, and Transparency June 21 – 24, 2022) Pages 743–756 DOI: https://doi.org/10.1145/3531146.3533138 Published Online: 20 June 2022

This paper is open access.

Toronto’s ArtSci Salon in Vancouver (Canada) and Venice (Italy)

In addition to the June 22 – July 16, 2022 exhibition in Toronto (These are a Few of Our Favourite Bees) highlighted in my June 14, 2022 posting, the ArtSci Salon has sent a June 20, 2022 announcement (received via email) about two events taking place for the first time in venues outside of Toronto,

IN VANCOUVER

A LIGHT FOOTPRINT IN THE COSMOS

SYMPOSIUM, EXHIBITIONS, PERFORMANCES, AND SCREENINGS

JUNE 24 – 27, 2022 | IN-PERSON AND ONLINE
DJAVAD MOWAFAGHIAN WORLD ART CENTRE
SFU GOLDCORP CENTRE FOR ARTS,
149 W. HASTINGS ST., VANCOUVER AND OTHER VENUES

REGISTRATION ON A SLIDING FEE SCALE.
IN-PERSON REGISTRATION INCLUDES CATERED LUNCHES AND COFFEE BREAKS AND
ADMISSION TO PERFORMANCES AND SCREENINGS.



A Light Footprint in the Cosmos is a celebration of research methods and intercultural dialogue elaborated by the Substantial Motion Research Network (SMRN).

Inspired by 17th-century Persian process philosopher Sadr al-Dīn al-Shīrāzī, Azadeh Emadi and Laura U. Marks founded SMRN in 2018 for scholars and practitioners interested in cross-cultural exploration of digital media, art and philosophy. Sadra famously stated that each individual is “a multiplicity of continuous forms, unified by the essential movement itself,” which describes how SMRN’s members inform each other’s practice and how those practices weave across artistic and scholarly work. Our collective method unfolds hidden connections: researching histories of media in world cultures, tracing paths of transmission, seeking models for media in world philosophies, studying vernacular practices, cultivating cultural openness, developing hunches, building imaginative and fabulative connections, and diagramming the processes of unfolding and enfolding. We fold South, Central, and East Asian, Persian, Arab, North and sub-Saharan African and African diaspora, Eastern European, and global Indigenous practices into contemporary media and thought. Our light footprint lies in seeking appropriate technological solutions, often from non-Western and traditional practices, to contemporary overbuilt digital infrastructures.

Celebrating the substantial motion of thought and/as creative practice, A Light Footprint in the Cosmos will feature presentations by 60 scholars and artists, delivered both online and in person, at the acoustically sophisticated performance venue Djavad Mowafaghian World Art Centre.

The exhibitions, performances, and curated film screenings are integral to the event. We are delighted to present exhibitions of works of 17 artists, curated by Nina Czegledy and hosted by Vancouver contemporary art venues Or Gallery and Centre A: Vancouver International Centre for Contemporary Asian Art, and Studio T at SFU’s Goldcorp Centre for the Arts. The artworks explore, via a wide variety of analogue and digital media, the global circulation and connectivity of theories and technologies, addressing both historical inspirations and contemporary issues. They illuminate hidden connections and reveal diverse yet complementary concepts and practices. The musical performances literally draw breath from deep cultural sources. SMRN’s methods extend into the curated screenings Cinema of Breath: Rapture, Rupture and Cosmological Diagrams.

A Light Footprint in the Cosmos affirms the substantial movement of thought and practice by seeking to stage dialogues, provoke discussion and spark new collaborations in order to decolonize media studies, art history and aesthetics.

IN VENICE (ITA)

Emergent [emphasis mine]

a post-pandemic mobile gallery

Part 1

Megachile Alienus
Sala Camino
Fondazione Bevilacqua la Masa
Venezia

June 22-25, 2022

Opening June 22, 18:30

Emergent is a mobile gallery featuring collaborations across the sciences and the arts. Its goal is to better comprehend and cope with the emergence, survival, and adaptation of life due to climate change and global mobility, laboratory manipulations and world making.

Emergent is a porous object: it encourages reflections across different experiences and sites of divergence through and with the arts; it may reach new human and non-human audiences, and have a transformative effect on the places it visits.

Emergent is a post-pandemic gallery interrogating the role of exhibition spaces today. What possible experiences, what new dialogues could a redesign of the gallery as a living, breathing entity foster?

Emergent was
Designed and executed by
Roberta Buiani
Lorella Di Cintio
Ilze Briede [Kavi]

Fabrication:
Rick Quercia

Megachile Alienus is an Installation by
Cole Swanson

Scientific collaboration:
Laurence Packer

Fabrication for installation:
Jacob Sun

Thanks to:
Alessandro Marletta
Anna Lisa Manini

Steven Baris, Never the Same Space Twice D29 (oil on Mylar, 24 x 24 inches, 2022). [downloaded from https://www.sfu.ca/sca/events—news/events/a-light-footprint-in-the-cosmos.html?mc_cid=f826643d70&mc_eid=584e4ad9fa]

You can find more details and a registration link here at SFU’s “A Light Footprint in the Cosmos” event page.

[downloaded from https://artscisalon.com/post-p/]

You can find more details about Emergent in Venice here.

Can you make my nose more like a camel’s?

Camel Face Close Up [downloaded from https://www.asergeev.com/php/searchph/links.php?keywords=Camel_close_up]

I love that image, which I found on Alexey Sergeev’s Camel Close Up webpage on his eponymous website. It turns out the photographer is in the Department of Mathematics at Texas A&M University. Thank you, Mr. Sergeev.

A January 19, 2022 news item on Nanowerk describes research inspired by a camel’s nose (Note: A link has been removed),

Camels have a renowned ability to survive on little water. They are also adept at finding something to drink in the vast desert, using noses that are exquisite moisture detectors.

In a new study in ACS [American Chemical Society] Nano (“A Camel Nose-Inspired Highly Durable Neuromorphic Humidity Sensor with Water Source Locating Capability”), researchers describe a humidity sensor inspired by the structure and properties of camels’ noses. In experiments, they found this device could reliably detect variations in humidity in settings that included industrial exhaust and the air surrounding human skin.

A January 19, 2022 ACS news release (also on EurekAlert), which originated the news item, describes the work in more detail,

Humans sometimes need to determine the presence of moisture in the air, but people aren’t quite as skilled as camels at sensing water with their noses. Instead, people must use devices to locate water in arid environments, or to identify leaks or analyze exhaust in industrial facilities. However, currently available sensors all have significant drawbacks. Some devices may be durable, for example, but have a low sensitivity to the presence of water. Meanwhile, sunlight can interfere with some highly sensitive detectors, making them difficult to use outdoors, for example. To devise a durable, intelligent sensor that can detect even low levels of airborne water molecules, Weiguo Huang, Jian Song, and their colleagues looked to camels’ noses. 

Narrow, scroll-like passages within a camel’s nose create a large surface area, which is lined with water-absorbing mucus. To mimic the high-surface-area structure within the nose, the team created a porous polymer network. On it, they placed moisture-attracting molecules called zwitterions to simulate the property of mucus to change capacitance as humidity varies. In experiments, the device was durable and could monitor fluctuations in humidity in hot industrial exhaust, find the location of a water source and sense moisture emanating from the human body. Not only did the sensor respond to changes in a person’s skin perspiration as they exercised, it detected the presence of a human finger and could even follow its path in a V or L shape. This sensitivity suggests that the device could become the basis for a touchless interface through which someone could communicate with a computer, according to the researchers. What’s more, the sensor’s electrical response to moisture can be tuned or adjusted, much like the signals sent out by human neurons — potentially allowing it to learn via artificial intelligence, they say. 

The authors acknowledge funding from the Fujian Science and Technology Innovation Laboratory for Optoelectronic Information of China, Fujian Institute of Research on the Structure of Matter, Chinese Academy of Sciences, the Natural Science Foundation of Fujian Province, and the National Natural Science Foundation of China.

Here’s a link to and a citation for the paper,

A Camel Nose-Inspired Highly Durable Neuromorphic Humidity Sensor with Water Source Locating Capability by Caicong Li, Jie Liu, Hailong Peng, Yuan Sui, Jian Song, Yang Liu, Wei Huang, Xiaowei Chen, Jinghui Shen, Yao Ling, Chongyu Huang, Youwei Hong, and Weiguo Huang. ACS Nano 2022, 16, 1, 1511–1522 DOI: https://doi.org/10.1021/acsnano.1c10004 Publication Date:December 15, 2021 Copyright © 2021 American Chemical Society

This paper is behind a paywall.

Entropic bonding for nanoparticle crystals

A January 19, 2022 University of Michigan news release (also on EurekAlert) is written in a Q&A (question and answer) style not usually seen in news releases (Note: Links have been removed),

Turns out entropy binds nanoparticles a lot like electrons bind chemical crystals

ANN ARBOR—Entropy, a physical property often explained as “disorder,” is revealed as a creator of order with a new bonding theory developed at the University of Michigan and published in the Proceedings of the National Academy of Sciences [PNAS]. 

Engineers dream of using nanoparticles to build designer materials, and the new theory can help guide efforts to make nanoparticles assemble into useful structures. The theory explains earlier results exploring the formation of crystal structures by space-restricted nanoparticles, enabling entropy to be quantified and harnessed in future efforts. 

And curiously, the set of equations that govern nanoparticle interactions due to entropy mirror those that describe chemical bonding. Sharon Glotzer, the Anthony C. Lembke Department Chair of Chemical Engineering, and Thi Vo, a postdoctoral researcher in chemical engineering, answered some questions about their new theory.

What is entropic bonding?

Glotzer: Entropic bonding is a way of explaining how nanoparticles interact to form crystal structures. It’s analogous to the chemical bonds formed by atoms. But unlike atoms, there aren’t electron interactions holding these nanoparticles together. Instead, the attraction arises because of entropy. 

Oftentimes, entropy is associated with disorder, but it’s really about options. When nanoparticles are crowded together and options are limited, it turns out that the most likely arrangement of nanoparticles can be a particular crystal structure. That structure gives the system the most options, and thus the highest entropy. Large entropic forces arise when the particles become close to one another. 

By doing the most extensive studies of particle shapes and the crystals they form, my group found that as you change the shape, you change the directionality of those entropic forces that guide the formation of these crystal structures. That directionality simulates a bond, and since it’s driven by entropy, we call it entropic bonding.

Why is this important?

Glotzer: Entropy’s contribution to creating order is often overlooked when designing nanoparticles for self-assembly, but that’s a mistake. If entropy is helping your system organize itself, you may not need to engineer explicit attraction between particles—for example, using DNA or other sticky molecules—with as strong an interaction as you thought. With our new theory, we can calculate the strength of those entropic bonds.

While we’ve known that entropic interactions can be directional like bonds, our breakthrough is that we can describe those bonds with a theory that line-for-line matches the theory that you would write down for electron interactions in actual chemical bonds. That’s profound. I’m amazed that it’s even possible to do that. Mathematically speaking, it puts chemical bonds and entropic bonds on the same footing. This is both fundamentally important for our understanding of matter and practically important for making new materials.

Electrons are the key to those chemical equations though. How did you do this when no particles mediate the interactions between your nanoparticles?

Glotzer: Entropy is related to the free space in the system, but for years I didn’t know how to count that space. Thi’s big insight was that we could count that space using fictitious point particles. And that gave us the mathematical analogue of the electrons.

Vo: The pseudoparticles move around the system and fill in the spaces that are hard for another nanoparticle to fill—we call this the excluded volume around each nanoparticle. As the nanoparticles become more ordered, the excluded volume around them becomes smaller, and the concentration of pseudoparticles in those regions increases. The entropic bonds are where that concentration is highest. 

In crowded conditions, the entropy lost by increasing the order is outweighed by the entropy gained by shrinking the excluded volume. As a result, the configuration with the highest entropy will be the one where pseudoparticles occupy the least space.
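Vo’s pseudoparticle picture can be caricatured with a toy Monte Carlo estimate: scatter random test points into a box of hard disks and count where another particle could still be inserted. This is only a cartoon under simplifying assumptions (2D hard disks on a square lattice, uniform point sampling), not the theory developed in the PNAS paper.

```python
# Toy cartoon of the "counting free space" idea, not the theory in the paper:
# estimate how much space remains available to another hard disk by scattering
# random test points into a box of disks and counting the fraction of positions
# where a new disk of the same radius would fit without overlap.

import random

def insertable_fraction(centers, radius, box=1.0, n_test=100_000, seed=0):
    """Fraction of random positions where a new disk of the same radius fits."""
    rng = random.Random(seed)
    min_dist2 = (2 * radius) ** 2          # centers must be at least 2r apart
    hits = 0
    for _ in range(n_test):
        x = rng.uniform(radius, box - radius)
        y = rng.uniform(radius, box - radius)
        if all((x - cx) ** 2 + (y - cy) ** 2 >= min_dist2 for cx, cy in centers):
            hits += 1
    return hits / n_test

# A 4x4 square lattice of disks in a unit box; as the radius grows toward the
# crowded limit, the insertable (free) fraction collapses, which is the regime
# where the entropic forces described above become large and directional.
lattice = [((i + 0.5) / 4, (j + 0.5) / 4) for i in range(4) for j in range(4)]
for radius in (0.08, 0.10, 0.12):
    print(radius, insertable_fraction(lattice, radius))
```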

The research is funded by the Simons Foundation, Office of Naval Research, and the Office of the Undersecretary of Defense for Research and Engineering. It relied on the computing resources of the National Science Foundation’s Extreme Science and Engineering Discovery Environment. Glotzer is also the John Werner Cahn Distinguished University Professor of Engineering, the Stuart W. Churchill Collegiate Professor of Chemical Engineering, and a professor of material science and engineering, macromolecular science and engineering, and physics at U-M.

Here’s a link to and a citation for the paper,

A theory of entropic bonding by Thi Vo and Sharon C. Glotzer. PNAS January 25, 2022 119 (4) e2116414119 DOI: https://doi.org/10.1073/pnas.2116414119

This paper is behind a paywall.