Monthly Archives: March 2022

A Tootsie Roll® sensor

Caption: An electrode made with a molded Tootsie Roll® and aluminum tubes can help monitor ovulation status and kidney health. Credit: Adapted from ACS Applied Materials & Interfaces 2021, DOI: 10.1021/acsami.1c11306

That really is a flattened Tootsie Roll® as a September 26, 2021 American Chemical Society (ACS) news release (on EurekAlert) notes,

Single-use diagnostic tests often aren’t practical for health professionals or patients in resource-limited areas, where cost and waste disposal are big concerns. So, researchers reporting in ACS Applied Materials & Interfaces have turned to a surprising material, Tootsie Roll® candy, to develop an inexpensive and low-waste device. The candy was used as an electrode, the part of the sensor that detects salt and electrolyte levels in saliva, to monitor ovulation status or kidney health. 

Disposable test strips have advanced the speed and accuracy of at-home health monitoring. For example, ovulation predictor kits measure luteinizing hormone levels, and there are test strips that measure creatinine levels for patients with chronic kidney disease. However, their costs add up quickly and, between the packaging and the strips themselves, there’s a lot of waste that needs to be disposed of. Previous researchers have indicated that simple measurements of a person’s salivary salt and electrolyte content could be appropriate for managing some conditions. So, Beelee Chua and Donghyun Lee wanted to repurpose unconventional and widely available materials, including electrically conductive soft candies, into an easily accessible, low-waste sensor that could simply be licked by patients to analyze their saliva. 

To make the prototype sensor, the researchers first flattened a Tootsie Roll® and pressed crevices into its surface in a crosshatched pattern to hold the saliva sample. Then, they inserted two thin, reusable aluminum tubes, which acted as electrical contacts, connecting the candy electrode into a circuit with a current source and an output voltage detector. In preliminary tests, the device could measure salt levels that were physiologically relevant for health monitoring in a salt-water solution and artificial saliva. For example, when covered in diluted artificial saliva, the sensor could reliably measure a change in voltage low enough to detect the 10-30% drop in salts that occurs when a person ovulates. While the maximum salt content in the artificial saliva samples was similar to that of a healthy adult, the researchers used calculations to estimate that conductivities three times higher, which signal a problem with the kidneys, would be within the measurable range of the device. Although testing with real human samples is still needed, the researchers say that using soft candy as electrodes opens up the possibility for low-waste, inexpensive electrochemical sensors and circuits in the future.

The authors acknowledge funding from the National Research Foundation of Korea.
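The readout being described is, at heart, two-electrode conductance sensing: apply a small current across the saliva-wetted candy, measure the voltage, compute the conductance G = I/V, and watch for the 10-30% drop the release mentions. Here's a minimal sketch of that arithmetic; the current, voltage and threshold numbers are made up for illustration and are not taken from the paper.

# Illustrative sketch of a two-electrode conductance readout like the candy
# sensor described above. The readings and the 10% threshold are hypothetical,
# not values from the ACS paper.

def conductance_siemens(applied_current_a: float, measured_voltage_v: float) -> float:
    """Ohm's law: G = I / V for the saliva-wetted electrode."""
    return applied_current_a / measured_voltage_v

def possible_ovulation_drop(baseline_g: float, today_g: float, drop_fraction: float = 0.10) -> bool:
    """Flag a conductance drop of roughly 10-30% relative to a personal baseline,
    the physiological change the release says the sensor can resolve."""
    return today_g <= baseline_g * (1.0 - drop_fraction)

if __name__ == "__main__":
    baseline = conductance_siemens(1e-4, 0.50)   # hypothetical readings
    today = conductance_siemens(1e-4, 0.58)      # higher voltage at fixed current = lower conductance
    print(f"baseline G = {baseline:.2e} S, today G = {today:.2e} S")
    print("possible ovulation-related drop:", possible_ovulation_drop(baseline, today))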

Here’s a link to and a citation for the paper,

Soft Candy as an Electronic Material Suitable for Salivary Conductivity-Based Medical Diagnostics in Resource-Scarce Clinical Settings by Donghyun Lee and Beelee Chua. ACS Appl. Mater. Interfaces 2021, 13, 37, 43984–43992 DOI: https://doi.org/10.1021/acsami.1c11306 Publication Date: September 10, 2021 Copyright © 2021 American Chemical Society

This paper is behind a paywall.

Ever heard a bird singing and wondered what kind of bird?

The Cornell University Lab of Ornithology’s sound recognition feature in its Merlin birding app(lication) can answer that question for you according to a July 14, 2021 article by Steven Melendez for Fast Company (Note: Links have been removed),

The lab recently upgraded its Merlin smartphone app, designed for both new and experienced birdwatchers. It now features an AI-infused “Sound ID” feature that can capture bird sounds and compare them to crowdsourced samples to figure out just what bird is making that sound. … people have used it to identify more than 1 million birds. New user counts are also up 58% since the two weeks before launch, and up 44% over the same period last year, according to Drew Weber, Merlin’s project coordinator.

Even when it’s listening to bird sounds, the app still relies on recent advances in image recognition, says project research engineer Grant Van Horn. …, it actually transforms the sound into a visual graph called a spectrogram, similar to what you might see in an audio editing program. Then, it analyzes that spectrogram to look for similarities to known bird calls, which come from the Cornell Lab’s eBird citizen science project.
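As a rough illustration of the pipeline Van Horn describes (audio in, spectrogram out, then an image classifier), here is a short Python sketch. It's a generic example, not Merlin's code; the classify_image stub stands in for whatever trained image-recognition model would actually make the identification.

# Generic sound-to-spectrogram sketch; not the Merlin app's implementation.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

def bird_sound_to_spectrogram(path: str) -> np.ndarray:
    rate, samples = wavfile.read(path)           # load the recording
    if samples.ndim > 1:                         # mix stereo down to mono
        samples = samples.mean(axis=1)
    freqs, times, power = spectrogram(samples, fs=rate, nperseg=1024)
    return 10 * np.log10(power + 1e-12)          # log scale, like an audio editor's view

def classify_image(spectrogram_db: np.ndarray) -> str:
    # Placeholder: a real system feeds this 2-D "picture of the sound"
    # to a trained image-classification network.
    raise NotImplementedError

# spec = bird_sound_to_spectrogram("robin.wav")  # hypothetical recording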

There’s more detail about Merlin in Marc Devokaitis’ June 23, 2021 article for the Cornell Chronicle,

… Merlin can recognize the sounds of more than 400 species from the U.S. and Canada, with that number set to expand rapidly in future updates.

As Merlin listens, it uses artificial intelligence (AI) technology to identify each species, displaying in real time a list and photos of the birds that are singing or calling.

Automatic song ID has been a dream for decades, but analyzing sound has always been extremely difficult. The breakthrough came when researchers, including Merlin lead researcher Grant Van Horn, began treating the sounds as images and applying new and powerful image classification algorithms like the ones that already power Merlin’s Photo ID feature.

“Each sound recording a user makes gets converted from a waveform to a spectrogram – a way to visualize the amplitude [volume], frequency [pitch] and duration of the sound,” Van Horn said. “So just like Merlin can identify a picture of a bird, it can now use this picture of a bird’s sound to make an ID.”

Merlin’s pioneering approach to sound identification is powered by tens of thousands of citizen scientists who contributed their bird observations and sound recordings to eBird, the Cornell Lab’s global database.

“Thousands of sound recordings train Merlin to recognize each bird species, and more than a billion bird observations in eBird tell Merlin which birds are likely to be present at a particular place and time,” said Drew Weber, Merlin project coordinator. “Having this incredibly robust bird dataset – and feeding that into faster and more powerful machine-learning tools – enables Merlin to identify birds by sound now, when doing so seemed like a daunting challenge just a few years ago.”

The Merlin Bird ID app with the new Sound ID feature is available for free on iOS and Android devices. Click here to download the Merlin Bird ID app and follow the prompts. If you already have Merlin installed on your phone, tap “Get Sound ID.”

Do take a look at Devokaitis’ June 23, 2021 article for more about how the Merlin app provides four ways to identify birds.

For anyone who likes to listen to the news, there’s an August 26, 2021 podcast (The Warblers by Birds Canada) featuring Drew Weber, Merlin project coordinator, and Jody Allair, Birds Canada Director of Community Engagement, discussing Merlin,

It’s a dream come true – there’s finally an app for identifying bird sounds. In the next episode of The Warblers podcast, we’ll explore the Merlin Bird ID app’s new Sound ID feature and how artificial intelligence is redefining birding. We talk with Drew Weber and Jody Allair and go deep into the implications and opportunities that this technology will bring for birds, and new as well as experienced birders.

The Warblers is hosted by Andrea Gress and Andrés Jiménez.

Cooling down your electronics

A September 20, 2021 news item on phys.org announces research investigating the heating of electronics at the nanoscale,

A team of physicists at CU Boulder [University of Colorado at Boulder] has solved the mystery behind a perplexing phenomenon in the nano realm: why some ultra-small heat sources cool down faster if you pack them closer together. The findings, published today in the journal Proceedings of the National Academy of Sciences (PNAS), could one day help the tech industry design faster electronic devices that overheat less.

A September 20, 2021 CU Boulder news release (also on EurekAlert) by Daniel Strain, which originated the news item, delves further into the topic of heat and electronics (Note: Links have been removed),

“Often, heat is a challenging consideration in designing electronics. You build a device then discover that it’s heating up faster than desired,” said study co-author Joshua Knobloch, postdoctoral research associate at JILA, a joint research institute between CU Boulder and the National Institute of Standards and Technology (NIST). “Our goal is to understand the fundamental physics involved so we can engineer future devices to efficiently manage the flow of heat.”

The research began with an unexplained observation: In 2015, researchers led by physicists Margaret Murnane and Henry Kapteyn at JILA were experimenting with bars of metal that were many times thinner than the width of a human hair on a silicon base. When they heated those bars up with a laser, something strange occurred.

“They behaved very counterintuitively,” Knobloch said. “These nano-scale heat sources do not usually dissipate heat efficiently. But if you pack them close together, they cool down much more quickly.”

Now, the researchers know why it happens.

In the new study, they used computer-based simulations to track the passage of heat from their nano-sized bars. They discovered that when they placed the heat sources close together, the vibrations of energy they produced began to bounce off each other, scattering heat away and cooling the bars down.

The group’s results highlight a major challenge in designing the next generation of tiny devices, such as microprocessors or quantum computer chips: When you shrink down to very small scales, heat does not always behave the way you think it should.

Atom by atom

The transmission of heat in devices matters, the researchers added. Even minute defects in the design of electronics like computer chips can allow temperature to build up, adding wear and tear to a device. As tech companies strive to produce smaller and smaller electronics, they’ll need to pay more attention than ever before to phonons—vibrations of atoms that carry heat in solids.

“Heat flow involves very complex processes, making it hard to control,” Knobloch said. “But if we can understand how phonons behave on the small scale, then we can tailor their transport, allowing us to build more efficient devices.”

To do just that, Murnane and Kapteyn and their team of experimental physicists joined forces with a group of theorists led by Mahmoud Hussein, professor in the Ann and H.J. Smead Department of Aerospace Engineering Sciences. His group specializes in simulating, or modeling, the motion of phonons.

“At the atomic scale, the very nature of heat transfer emerges in a new light,” said Hussein who also has a courtesy appointment in the Department of Physics.

The researchers, essentially, recreated their experiment from several years before, but this time, entirely on a computer. They modeled a series of silicon bars, laid side by side like the slats in a train track and heated them up.

The simulations were so detailed, Knobloch said, that the team could follow the behavior of each and every atom in the model—millions of them in all—from start to finish.

“We were really pushing the limits of memory of the Summit Supercomputer at CU Boulder,” he said.

Directing heat

The technique paid off. The researchers found, for example, that when they spaced their silicon bars far enough apart, heat tended to escape away from those materials in a predictable way. The energy leaked from the bars and into the material below them, dissipating in every direction.

When the bars got closer together, however, something else happened. As the heat from those sources scattered, it effectively forced that energy to flow more intensely away from the sources—like a crowd of people in a stadium jostling against each other and eventually leaping out of the exit. The team denoted this phenomenon “directional thermal channeling.”

“This phenomenon increases the transport of heat down into the substrate and away from the heat sources,” Knobloch said.
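For anyone who wants a feel for this kind of simulation, here is a deliberately crude one-dimensional heat-diffusion sketch with two localized sources on a "substrate." It is a continuum model, so it cannot reproduce the phonon-scattering "directional thermal channeling" described above; in fact, in this naive picture closely packed sources stay hotter, which is exactly why the team's nanoscale result is so counterintuitive. Grid size, step count and spacings are arbitrary.

# Toy continuum heat-diffusion model (explicit finite differences), NOT the
# atomistic phonon simulation described above; it only illustrates the
# closely- vs widely-spaced heat-source setup.
import numpy as np

def diffuse(spacing_cells: int, n: int = 400, steps: int = 2000, alpha: float = 0.2) -> np.ndarray:
    temp = np.zeros(n)
    mid = n // 2
    for src in (mid - spacing_cells // 2, mid + spacing_cells // 2):
        temp[src] = 1.0                           # initial hot spots ("laser-heated bars")
    for _ in range(steps):
        lap = np.roll(temp, 1) - 2 * temp + np.roll(temp, -1)
        temp = temp + alpha * lap                 # explicit Euler step of dT/dt = alpha * d2T/dx2
    return temp

close = diffuse(spacing_cells=10)
far = diffuse(spacing_cells=200)
print("peak temperature, closely spaced sources:", round(float(close.max()), 4))
print("peak temperature, widely spaced sources: ", round(float(far.max()), 4))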

The researchers suspect that engineers could one day tap into this unusual behavior to gain a better handle on how heat flows in small electronics—directing that energy along a desired path, instead of letting it run wild and free.

For now, the researchers see the latest study as what scientists from different disciplines can do when they work together.

“This project was such an exciting collaboration between science and engineering—where advanced computational analysis methods developed by Mahmoud’s group were critical for understanding new materials behavior uncovered earlier by our group using new extreme ultraviolet quantum light sources,” said Murnane, also a professor of physics.

Here’s a link to and a citation for the paper,

Directional thermal channeling: A phenomenon triggered by tight packing of heat sources by Hossein Honarvar, Joshua L. Knobloch, Travis D. Frazer, Begoña Abad, Brendan McBennett, Mahmoud I. Hussein, Henry C. Kapteyn, Margaret M. Murnane, and Jorge N. Hernandez-Charpak. PNAS October 5, 2021 118 (40) e2109056118; DOI: https://doi.org/10.1073/pnas.2109056118

This paper is behind a paywall.

Vaccine as a salad

A research project into growing vaccines in edible plants has been funded at the University of California at Riverside (UCR) according to a September 16, 2021 news item on Nanowerk,

The future of vaccines may look more like eating a salad than getting a shot in the arm. UC Riverside scientists are studying whether they can turn edible plants like lettuce into mRNA vaccine factories.

Messenger RNA or mRNA technology, used in COVID-19 vaccines, works by teaching our cells to recognize and protect us against infectious diseases.

One of the challenges with this new technology is that it must be kept cold to maintain stability during transport and storage. If this new project is successful, plant-based mRNA vaccines — which can be eaten — could overcome this challenge with the ability to be stored at room temperature.

The project’s goals, made possible by a $500,000 grant from the National Science Foundation, are threefold: showing that DNA containing the mRNA vaccines can be successfully delivered into the part of plant cells where it will replicate, demonstrating the plants can produce enough mRNA to rival a traditional shot, and finally, determining the right dosage.

Caption: Chloroplasts (magenta) in leaves expressing a green fluorescent protein. The DNA encoding for the protein was delivered by targeted nanomaterials without mechanical aid by applying a droplet of the nano-formulation to the leaf surface. Credit: Israel Santana/UCR

A September 16, 2021 UC Riverside news release (also on EurekAlert) by Jules Bernstein, which originated the news item, provides more information about the project (Note: A link has been removed),

“Ideally, a single plant would produce enough mRNA to vaccinate a single person,” said Juan Pablo Giraldo, an associate professor in UCR’s Department of Botany and Plant Sciences who is leading the research, done in collaboration with scientists from UC San Diego and Carnegie Mellon University. 

“We are testing this approach with spinach and lettuce and have long-term goals of people growing it in their own gardens,” Giraldo said. “Farmers could also eventually grow entire fields of it.”

Key to making this work are chloroplasts — small organs in plant cells that convert sunlight into energy the plant can use. “They’re tiny, solar-powered factories that produce sugar and other molecules which allow the plant to grow,” Giraldo said. “They’re also an untapped source for making desirable molecules.”

In the past, Giraldo has shown that it is possible for chloroplasts to express genes that aren’t naturally part of the plant. He and his colleagues did this by sending foreign genetic material into plant cells inside a protective casing. Determining the optimal properties of these casings for delivery into plant cells is a specialty of Giraldo’s laboratory. 

For this project Giraldo teamed up with Nicole Steinmetz, a UC San Diego professor of nanoengineering, to utilize nanotechnologies engineered by her team that will deliver genetic material to the chloroplasts. 

“Our idea is to repurpose naturally occurring nanoparticles, namely plant viruses, for gene delivery to the plants,” Steinmetz said. “Some engineering goes into this to make the nanoparticles go to the chloroplasts and also to render them non-infectious toward the plants.”

For Giraldo, the chance to develop this idea with mRNA is the culmination of a dream. “One of the reasons I started working in nanotechnology was so I could apply it to plants and create new technology solutions. Not just for food, but for high-value products as well, like pharmaceuticals,” Giraldo said. 

He is also co-leading a related project using nanomaterials to deliver nitrogen, a fertilizer, directly to chloroplasts, where plants need it most. 

Nitrogen is limited in the environment, but plants need it to grow. Most farmers apply nitrogen to the soil. As a result, roughly half of it ends up in groundwater, contaminating waterways, causing algae blooms, and interacting with other organisms. It also produces nitrous oxide, another pollutant. 

This alternative approach would get nitrogen into the chloroplasts through the leaves and control its release, a much more efficient mode of application that could help farmers and improve the environment. 

The National Science Foundation has granted Giraldo and his colleagues $1.6 million to develop this targeted nitrogen delivery technology.

“I’m very excited about all of this research,” Giraldo said. “I think it could have a huge impact on peoples’ lives.”

I wish the researchers the best of luck.

Pandemic science breakthroughs: combining superconducting materials with specialized oxides to mimic brain function

This breakthrough in neuromorphic (brainlike) computing is being attributed to the pandemic (COVID-19) according to a September 3, 2021 news item on phys.org,

Isaac Newton’s groundbreaking scientific productivity while isolated from the spread of bubonic plague is legendary. University of California San Diego physicists can now claim a stake in the annals of pandemic-driven science.

A team of UC San Diego [University of California San Diego] researchers and colleagues at Purdue University have now simulated the foundation of new types of artificial intelligence computing devices that mimic brain functions, an achievement that resulted from the COVID-19 pandemic lockdown. By combining new superconducting materials with specialized oxides, the researchers successfully demonstrated the backbone of networks of circuits and devices that mirror the connectivity of neurons and synapses in biologically based neural networks.

A September 3, 2021 UC San Diego news release by Mario Aguilera, which originated the news item, delves further into the topic of neuromorphic computing,

As bandwidth demands on today’s computers and other devices reach their technological limit, scientists are working towards a future in which new materials can be orchestrated to mimic the speed and precision of animal-like nervous systems. Neuromorphic computing based on quantum materials, which display quantum-mechanics-based properties, allows scientists to move beyond the limits of traditional semiconductor materials. This advanced versatility opens the door to new-age devices that are far more flexible with lower energy demands than today’s devices. Some of these efforts are being led by Department of Physics Assistant Professor Alex Frañó and other researchers in UC San Diego’s Quantum Materials for Energy Efficient Neuromorphic Computing (Q-MEEN-C), a Department of Energy-supported Energy Frontier Research Center.

“In the past 50 years we’ve seen incredible technological achievements that resulted in computers that were progressively smaller and faster—but even these devices have limits for data storage and energy consumption,” said Frañó, who served as one of the PNAS paper’s authors, along with former UC San Diego chancellor, UC president and physicist Robert Dynes. “Neuromorphic computing is inspired by the emergent processes of the millions of neurons, axons and dendrites that are connected all over our body in an extremely complex nervous system.”

As experimental physicists, Frañó and Dynes are typically busy in their laboratories using state-of-the-art instruments to explore new materials. But with the onset of the pandemic, Frañó and his colleagues were forced into isolation with concerns about how they would keep their research moving forward. They eventually came to the realization that they could advance their science from the perspective of simulations of quantum materials.

“This is a pandemic paper,” said Frañó. “My co-authors and I decided to study this issue from a more theoretical perspective so we sat down and started having weekly (Zoom-based) meetings. Eventually the idea developed and took off.”

The researchers’ innovation was based on joining two types of quantum substances—superconducting materials based on copper oxide and metal insulator transition materials that are based on nickel oxide. They created basic “loop devices” that could be precisely controlled at the nano-scale with helium and hydrogen, reflecting the way neurons and synapses are connected. Adding more of these devices that link and exchange information with each other, the simulations showed that eventually they would allow the creation of an array of networked devices that display emergent properties like an animal’s brain.

Like the brain, neuromorphic devices are being designed to enhance connections that are more important than others, similar to the way synapses weigh more important messages than others.

“It’s surprising that when you start to put in more loops, you start to see behavior that you did not expect,” said Frañó. “From this paper we can imagine doing this with six, 20 or a hundred of these devices—then it gets exponentially rich from there. Ultimately the goal is to create a very large and complex network of these devices that will have the ability to learn and adapt.”
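The idea that the more important connections get strengthened can be illustrated with a generic, textbook-style Hebbian weight update. To be clear, this is not the superconducting-loop model simulated in the PNAS paper; it's only a toy showing how links between frequently co-active units end up stronger than the rest.

# Generic Hebbian-style illustration of connection strengthening; not the
# superconducting loop devices simulated by the UC San Diego/Purdue team.
import numpy as np

rng = np.random.default_rng(0)
n = 6                                     # a handful of units ("devices")
weights = np.full((n, n), 0.1)            # start with weak, uniform links
np.fill_diagonal(weights, 0.0)

for _ in range(1000):
    activity = (rng.random(n) < 0.2).astype(float)   # most units fire rarely...
    if rng.random() < 0.5:                            # ...but units 0 and 1 often fire together
        activity[0] = activity[1] = 1.0
    weights += 0.01 * np.outer(activity, activity)    # strengthen links between co-active units
    weights *= 0.999                                  # let little-used links decay slightly
    np.fill_diagonal(weights, 0.0)

print(np.round(weights, 2))   # the 0-1 link ends up noticeably stronger than the rest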

With eased pandemic restrictions, Frañó and his colleagues are back in the laboratory, testing the theoretical simulations described in the PNAS [Proceedings of the National Academy of Sciences] paper with real-world instruments.

Here’s a link to and a citation for the paper,

Low-temperature emergent neuromorphic networks with correlated oxide devices by Uday S. Goteti, Ivan A. Zaluzhnyy, Shriram Ramanathan, Robert C. Dynes, and Alex Frano. PNAS August 31, 2021 118 (35) e2103934118; DOI: https://doi.org/10.1073/pnas.2103934118

This paper is open access.

Sounds of Central African Landscapes; a Cornell (University) Elephant Listening Project

This September 13, 2021 news item about sound recordings taken in a rainforest (on phys.org) is downright fascinating,

More than a million hours of sound recordings are available from the Elephant Listening Project (ELP) in the K. Lisa Yang Center for Conservation Bioacoustics at the Cornell Lab of Ornithology—a rainforest residing in the cloud.

ELP researchers, in collaboration with the Wildlife Conservation Society, use remote recording units to capture the entire soundscape of a Congolese rainforest. Their targets are vocalizations from endangered African forest elephants, but they also capture tropical parrots shrieking, chimps chattering and rainfall spattering on leaves to the beat of grumbling thunder.

For someone who suffers from acrophobia (fear of heights), this is a disturbing picture (how tall is that tree? is the rope reinforced? who or what is holding him up? where is the photographer perched?),

Frelcia Bambi is a member of the Congolese team that deploys sound recorders in the rainforest and analyzes the data. Photo by Sebastien Assoignons, courtesy of the Wildlife Conservation Society.

A September 13, 2021 Cornell University (NY state, US) news release by Pat Leonard, which originated the news item, provides more details about the sounds themselves and the Elephant Listening Project,

“Scientists can use these soundscapes to monitor biodiversity,” said ELP director Peter Wrege. “You could measure overall sound levels before, during and after logging operations, for example. Or hone in on certain frequencies where insects may vocalize. Sound is increasingly being used as a conservation tool, especially for establishing the presence or absence of a species.”

For the past four years, 50 tree-mounted recording units have been collecting data continuously, covering a region that encompasses old logging sites, recent logging sites and part of the Nouabalé-Ndoki National Park in the Republic of the Congo. The sensors sometimes capture the booming guns of poachers, alerting rangers who then head out to track down the illegal activity.

But everyday nature lovers can tune in rainforest sounds, too.

“We’ve had requests to use some of the files for meditation or for yoga,” Wrege said. “It is very soothing to listen to rainforest sounds—you hear the sounds of insects, birds, frogs, chimps, wind and rain all blended together.”

But, as Wrege and others have learned, big data can also be a big problem. The Sounds of Central African Landscapes recordings would gobble up nearly 100 terabytes of computer space, and ELP takes in another eight terabytes every four months. But now, Amazon Web Services is storing the jungle sounds for free under its Open Data Sponsorship Program, which preserves valuable scientific data for public use.

This makes it possible for Wrege to share the jungle sounds and easier for users to analyze them with Amazon tools so they don’t have to move the massive files or try to download them.

Searching for individual species amid the wealth of data is a bit more daunting. ELP uses computer algorithms to search through the recordings for elephant sounds. Wrege has created a detector for the sounds of gorillas beating their chests. There are software platforms that help users create detectors for specific sounds, including Raven Pro 1.6, created by the Cornell Lab’s bioacoustics engineers. Wrege says the next iteration, Raven 2.0, will make this process even easier.
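Many bioacoustic detectors start from a much simpler idea than Raven's: look for bursts of energy in the frequency band where the target sound lives. Here's a heavily simplified sketch of that approach; the 10-40 Hz band (a rough guess at where low elephant rumbles sit) and the threshold are my assumptions, and ELP's and Raven's detectors are far more sophisticated.

# Simplified band-energy detector sketch; band and threshold are assumptions,
# not ELP's actual parameters.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

def detect_low_rumbles(path: str, band=(10.0, 40.0), threshold_db=-30.0):
    rate, samples = wavfile.read(path)
    if samples.ndim > 1:
        samples = samples.mean(axis=1)
    freqs, times, power = spectrogram(samples, fs=rate, nperseg=4096)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    band_db = 10 * np.log10(power[in_band].mean(axis=0) + 1e-12)
    return times[band_db > threshold_db]      # timestamps where band energy is high

# print(detect_low_rumbles("forest_clip.wav"))   # hypothetical recording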

Wrege is also eyeing future educational uses for the recordings which he says could help train in-country biologists to not only collect the data but do the analyses. This is gradually happening now in the Republic of the Congo—ELP’s team of Congolese researchers does all the analysis for gunshot detection, though the elephant analyses are still done at ELP.

“We could use these recordings for internships and student training in Congo and other countries where we work, such as Gabon,” Wrege said. “We can excite young people about conservation in Central Africa. It would be a huge benefit to everyone living there.”

To listen to or download clips from Sounds of the Central African Landscape, go to ELP’s data page on Amazon Web Services. You’ll need to create an account with AWS (choose the free option). Then sign in with your username and password. Click on the “recordings” item in the list you see, then “wav/” on the next page. From there you can click on any item in the list to play or download clips that are each 1.3 GB and 24 hours long.

Scientists looking to use sounds for research and analysis should start here.
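If you'd rather script the downloads than click through the console, a boto3 sketch along these lines should work. The bucket name and key prefix below are placeholders (check ELP's Open Data listing for the real ones), and if the bucket doesn't allow anonymous reads you would configure boto3 with your AWS credentials instead.

# Sketch of listing and fetching clips programmatically; bucket/prefix are placeholders.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

BUCKET = "example-elp-congo-soundscapes"      # placeholder, not the real bucket name

s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
resp = s3.list_objects_v2(Bucket=BUCKET, Prefix="recordings/wav/", MaxKeys=10)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])

# s3.download_file(BUCKET, "recordings/wav/some_clip.wav", "some_clip.wav")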

Wildlife Conservation Society Forest Elephant Congo [downloaded from https://congo.wcs.org/Wildlife/Forest-Elephant.aspx]

What follows may be a little cynical but I can’t help noticing that this worthwhile and fascinating project will result in more personal and/or professional data for Amazon since you have to sign up even if all you’re doing is reading or listening to a few files that they’ve made available for the general public. In a sense, Amazon gets ‘paid’ when you give up an email address to them. Plus, Amazon gets to look like a good world citizen.

Let’s hope something greater than one company’s reputation as a world citizen comes out of this.

Data Meditation and three roundtables: a collection of Who Cares? March 2022 events

You can find out more about Toronto’s Art/Sci Salon’s Who Cares? speaker series in my February 9, 2022 posting. For this posting, I’m focusing on the upcoming March 2022 events, which are being offered online. From a March 7, 2022 Art/Sci Salon announcement (received via email),

We’re pleased to announce our next two events from our “Who Cares?” Speaker Series

Nous sommes heureux d’annoncer notre deuxième événement de notre “Who Cares?” Série de conferences

March 10 [2022], 2:00-3:00 pm [ET]

Data Meditation: Salvatore Iaconesi and Oriana Persico

HER – She Loves Data 

Nuovo Abitare

Join us for a discussion about questions like: 

Why does data have to be an extractive process?

What can we learn about ourselves through the data we generate everyday?

How can we use them as an expressive form to represent ourselves?

Data Meditations is the first ritual designed with the new approach of HER: She Loves Data, which addresses data as existential and cultural phenomena, and the need to create experiences (contemporary rituals) that allow societies and individuals to come together around data, generating meaning, new forms of solidarity, empathy, interconnection and knowledge.

Rejoignez-nous pour une discussion basée sur des questions telles que : 

Pourquoi les données doivent-elles être un processus d’extraction ?

Que pouvons-nous apprendre par rapport à nous, grâce aux données que nous générons chaque jour ?

Comment pouvons-nous les utiliser comme une forme expressive pour nous représenter ?

Data Méditations est le premier rituel conçu avec la nouvelle approche de HER [elle] : She loves Data , qui parle des données en tant que phénomènes existentiels et culturels , mais également , la nécessité de créer des expériences [ rituels contemporains ] qui permettent aux sociétés et aux individus de se réunir autour de données générant du sens , de nouvelles formes de solidarité , empathie ,  d’interconnexion et de connaissance. 

Register HERE/Inscrivez-vous ici

[Beyond triage and data culture roundtable]

March 11, 5:00-7:00 pm [ET]

Maria Antonia Gonzalez-Valerio,
Professor of Philosophy and Literature, UNAM, Mexico City.
Sharmistha Mishra,
Infectious Disease Physician and Mathematical Modeller, St Michael’s Hospital
Madhur Anand,
Ecologist, School of Environmental Sciences, University of Guelph
Salvatore Iaconesi and Oriana Persico,
Independent Artists, HER, She Loves Data

One lesson we have learnt in the past two years is that the pandemic has not single-handedly created a global health crisis, but has exacerbated and made visible one that was already in progress. The roots of this crisis are as cultural as they are economic and environmental.  Among the factors contributing to the crisis is a dominant orientation towards healthcare that privileges a narrow focus on data-centered technological fixes and praises the potentials of technological delegation. An unsustainable system has culminated in the passive acceptance and even the cold justification of triage as an inevitable evil in a time of crisis and scarcity.

What transdisciplinary practices can help ameliorate the atomizing pitfalls of turning the patient into data?
How can discriminatory practices such as triage, exclusion based on race, gender, and class, vaccine hoarding, etc., be addressed and reversed?
What strategies can we devise to foster genuine transdisciplinary approaches and move beyond the silo effects of specialization, address current uncritical trends towards technological delegation, and restore the centrality of human relations in healthcare delivery?

L’une des leçons que nous avons apprises au cours des deux dernières années est que la pandémie n’a pas créé à elle seule une crise sanitaire mondiale, mais qu’elle en a exacerbé et rendu visible une qui était déjà en cours. Les racines de cette crise sont aussi bien culturelles qu’économiques et environnementales. Parmi les facteurs qui contribuent à la crise figure une orientation dominante en matière de soins de santé, qui privilégie une vision étroite des solutions technologiques centrées sur les données et fait l’éloge du potentiel de la délégation technologique. Un système non durable a abouti à l’acceptation passive et même à la justification froide du triage comme un mal inévitable en temps de crise et de pénurie.

Quelles pratiques transdisciplinaires peuvent contribuer à améliorer les pièges de l’atomisation qui consiste à transformer le patient en données ?
Comment les pratiques discriminatoires telles que le triage, l’exclusion fondée sur la race, le sexe et la classe sociale, la thésaurisation des vaccins, etc. peuvent-elles être abordées et inversées ?
Quelles stratégies pouvons-nous concevoir pour favoriser de véritables approches transdisciplinaires et dépasser les effets de silo de la spécialisation, pour faire face aux tendances actuelles non critiques à la délégation technologique, et pour restaurer la centralité des relations humaines dans la prestation des soins de santé ?

Register HERE/Inscrivez-vous ici

We wish to thank/ nous [sic] the generous support of the Social Science and Humanities Research Council of Canada, New College at the University of Toronto and The Faculty of Liberal Arts and Professional Studies at York University; the Centre for Feminist Research, Sensorium Centre for Digital Arts and Technology, The Canadian Language Museum, the Departments of English and the School of Gender and Women’s Studies at York University; the D.G. Ivey Library and the Institute for the History and Philosophy of Science and Technology at the University of Toronto; We also wish to thank the support of The Fields Institute for Research in Mathematical Sciences

There are two more online March 2022 roundtable discussions, from the Who Cares? events webpage,

2. Friday, March 18 – 6:00 to 8:00 pm [ET]
Critical care and sustainable care

Suvendrini Lena, MD, Playwright and Neurologist at CAMH and Centre for Headache, Women’s College Hospital, Toronto
Adriana Ieraci, Roboticist and PhD candidate in Computer Science, Ryerson University
Lucia Gagliese – Pain Aging Lab, York University

(online)

3. Friday, March 25 – 5:00 to 7:00 pm [ET]
Building communities and technologies of care

Camille Baker, University for the Creative Arts, School of Film, Media and Performing Arts
Alanna Kibbe, independent artist, Toronto

(online)

There will also be some events in April 2022 and there are two ongoing exhibitions, which you can see here.

Fishes ‘talk’ and ‘sing’

This posting started out with two items and then it became more. If you’re interested in marine bioacoustics, especially the work that’s been announced in the last four months, read on.

Fish songs

This item, about how fish sounds (songs) signify successful coral reef restoration, got coverage on the BBC (British Broadcasting Corporation), the CBC (Canadian Broadcasting Corporation) and elsewhere. This video is courtesy of the Guardian newspaper,

Whoops and grunts: ‘bizarre’ fish songs raise hopes for coral reef recovery https://www.theguardian.com/environme…

A December 8, 2021 University of Exeter press release (also on EurekAlert) explains why the sounds give hope (Note: Links have been removed),

Newly discovered fish songs demonstrate reef restoration success

Whoops, croaks, growls, raspberries and foghorns are among the sounds that demonstrate the success of a coral reef restoration project.

Thousands of square metres of coral are being grown on previously destroyed reefs in Indonesia, but previously it was unclear whether these new corals would revive the entire reef ecosystem.

Now a new study, led by researchers from the University of Exeter and the University of Bristol, finds a healthy, diverse soundscape on the restored reefs.

These sounds – many of which have never been recorded before – can be used alongside visual observations to monitor these vital ecosystems.

“Restoration projects can be successful at growing coral, but that’s only part of the ecosystem,” said lead author Dr Tim Lamont, of the University of Exeter and the Mars Coral Reef Restoration Project, which is restoring the reefs in central Indonesia.

“This study provides exciting evidence that restoration really works for the other reef creatures too – by listening to the reefs, we’ve documented the return of a diverse range of animals.”

Professor Steve Simpson, from the University of Bristol, added: “Some of the sounds we recorded are really bizarre, and new to us as scientists.  

“We have a lot still to learn about what they all mean and the animals that are making them. But for now, it’s amazing to be able to hear the ecosystem come back to life.”

The soundscapes of the restored reefs are not identical to those of existing healthy reefs – but the diversity of sounds is similar, suggesting a healthy and functioning ecosystem.

There were significantly more fish sounds recorded on both healthy and restored reefs than on degraded reefs.

This study used acoustic recordings taken in 2018 and 2019 as part of the monitoring programme for the Mars Coral Reef Restoration Project.

The results are positive for the project’s approach, in which hexagonal metal frames called ‘Reef Stars’ are seeded with coral and laid over a large area. The Reef Stars stabilise loose rubble and kickstart rapid coral growth, leading to the revival of the wider ecosystem.  

Mochyudho Prasetya, of the Mars Coral Reef Restoration Project, said: “We have been restoring and monitoring these reefs here in Indonesia for many years. Now it is amazing to see more and more evidence that our work is helping the reefs come back to life.”

Professor David Smith, Chief Marine Scientist for Mars Incorporated, added: “When the soundscape comes back like this, the reef has a better chance of becoming self-sustaining because those sounds attract more animals that maintain and diversify reef populations.”

Asked about the multiple threats facing coral reefs, including climate change and water pollution, Dr Lamont said: “If we don’t address these wider problems, conditions for reefs will get more and more hostile, and eventually restoration will become impossible.

“Our study shows that reef restoration can really work, but it’s only part of a solution that must also include rapid action on climate change and other threats to reefs worldwide.”

The study was partly funded by the Natural Environment Research Council and the Swiss National Science Foundation.

Here’s a link to and a citation for the paper,

The sound of recovery: Coral reef restoration success is detectable in the soundscape by Timothy A. C. Lamont, Ben Williams, Lucille Chapuis, Mochyudho E. Prasetya, Marie J. Seraphim, Harry R. Harding, Eleanor B. May, Noel Janetski, Jamaluddin Jompa, David J. Smith, Andrew N. Radford, Stephen D. Simpson. Journal of Applied Ecology DOI: https://doi.org/10.1111/1365-2664.14089 First published: 07 December 2021

This paper is open access.

You can find the MARS Coral Reef Restoration Project here.

Fish talk

There is one item here. This research from Cornell University also features the sounds fish make. Given the attention being paid to sound, it’s no surprise that the Cornell Lab of Ornithology is involved; in addition to its main focus, birds, the lab gathers many other animal sounds.

A January 27, 2022 Cornell University news release (also on EurekAlert) describes ‘fish talk’,

There’s a whole lot of talking going on beneath the waves. A new study from Cornell University finds that fish are far more likely to communicate with sound than generally thought—and some fish have been doing this for at least 155 million years. These findings were just published in the journal Ichthyology & Herpetology.

“We’ve known for a long time that some fish make sounds,” said lead author Aaron Rice, a researcher at the K. Lisa Yang Center for Conservation Bioacoustics at the Cornell Lab of Ornithology [emphasis mine]. “But fish sounds were always perceived as rare oddities. We wanted to know if these were one-offs or if there was a broader pattern for acoustic communication in fishes.”

The authors looked at a branch of fishes called the ray-finned fishes. These are vertebrates (having a backbone) that comprise 99% of the world’s known species of fishes. They found 175 families that contain two-thirds of fish species that do, or are likely to, communicate with sound. By examining the fish family tree, study authors found that sound was so important, it evolved at least 33 separate times over millions of years.

“Thanks to decades of basic research on the evolutionary relationships of fishes, we can now explore many questions about how different functions and behaviors evolved in the approximately 35,000 known species of fishes,” said co-author William E. Bemis ’76, Cornell professor of ecology and evolutionary biology in the College of Agriculture and Life Sciences. “We’re getting away from a strictly human-centric way of thinking. What we learn could give us some insight on the drivers of sound communication and how it continues to evolve.”

The scientists used three sources of information: existing recordings and scientific papers describing fish sounds; the known anatomy of a fish—whether they have the right tools for making sounds, such as certain bones, an air bladder, and sound-specific muscles; and references in 19th century literature before underwater microphones were invented.
 
“Sound communication is often overlooked within fishes, yet they make up more than half of all living vertebrate species,” said Andrew Bass, co-lead author and the Horace White Professor of Neurobiology and Behavior in the College of Arts and Sciences. “They’ve probably been overlooked because fishes are not easily heard or seen, and the science of underwater acoustic communication has primarily focused on whales and dolphins. But fishes have voices, too!”
 
Listen:

Oyster Toadfish – William Tavolga, Macaulay Library

Longspine squirrelfish – Howard Winn, Macaulay Library

Banded drum – Donald Batz, Macaulay Library

Midshipman – Andrew Bass, Macaulay Library

What are the fish talking about? Pretty much the same things we all talk about—sex and food. Rice says the fish are either trying to attract a mate, defend a food source or territory, or let others know where they are. Even some of the common names for fish are based on the sounds they make, such as grunts, croakers, hog fish, squeaking catfish, trumpeters, and many more.
 
Rice intends to keep tracking the discovery of sound in fish species and add them to his growing database (see supplemental material, Table S1)—a project he began 20 years ago with study co-authors Ingrid Kaatz ’85, MS ’92, and Philip Lobel, a professor of biology at Boston University. Their collaboration has continued and expanded since Rice came to Cornell.
 
“This introduces sound communication to so many more groups than we ever thought,” said Rice. “Fish do everything. They breathe air, they fly, they eat anything and everything—at this point, nothing would surprise me about fishes and the sounds that they can make.”

The research was partly funded by the National Science Foundation, the U.S. Bureau of Ocean Energy Management, the Tontogany Creek Fund, and the Cornell Lab of Ornithology.

I’ve embedded one of the audio files, Oyster Toadfish (William Tavolga), here,

Here’s a link to and a citation for the paper,

Evolutionary Patterns in Sound Production across Fishes by Aaron N. Rice, Stacy C. Farina, Andrea J. Makowski, Ingrid M. Kaatz, Phillip S. Lobel, William E. Bemis, Andrew H. Bass. Ichthyology & Herpetology, 110(1):1-12 (2022) DOI: https://doi.org/10.1643/i2020172 20 January 2022

This paper is open access.

Marine sound libraries

Thanks to Aly Laube’s March 2, 2022 article on DailyHive.com, I learned of Kieran Cox’s work at the University of Victoria and FishSounds (Note: Links have been removed),

Fish have conversations and a group of researchers made a website to document them. 

It’s so much fun to peruse and probably the good news you need. Listen to a Bocon toadfish “boop” or this sablefish tick, which is slightly creepier, but still pretty cool. This streaked gurnard can growl, and this grumpy Atlantic cod can grunt.

The technical term for “fishy conversations” is “marine bioacoustics,” which is what Kieran Cox specializes in. They can be used to track, monitor, and learn more about aquatic wildlife.

The doctor of marine biology at the University of Victoria co-authored an article about fish sounds in Reviews in Fish Biology and Fisheries called “A Quantitative Inventory of Global Soniferous Fish Diversity.”

It presents findings from his process, helping create FishSounds.net. He and his team looked over more than 3,000 documents from 834 studies to put together the library of 989 fish species.

A March 2, 2022 University of Victoria news release provides more information about the work and the research team (Note: Links have been removed),

Fascinating soundscapes exist beneath rivers, lakes and oceans. An unexpected sound source is fish, making their own unique and entertaining noise, from guttural grunts to high-pitched squeals. Underwater noise is a vital part of marine ecosystems, and thanks to almost 150 years of researchers documenting those sounds, we know hundreds of fish species contribute their distinctive sounds. Although fish are the largest and most diverse group of sound-producing vertebrates in water, there was no record of which fish species make sound and the sounds they produce. For the very first time, there is now a digital place where that data can be freely accessed or contributed to: an online repository, a global inventory of fish sounds.

Kieran Cox co-authored the published article about fish sounds and their value in Reviews in Fish Biology and Fisheries while completing his Ph.D. in marine biology at the University of Victoria. Cox recently began a Liber Ero post-doctoral collaboration with Francis Juanes that aims to integrate marine bioacoustics into the conservation of Canada’s oceans. The Liber Ero program is devoted to promoting applied and evidence-based conservation in Canada.

The international group of researchers, which includes UVic, the University of Florida, Universidade de São Paulo, and the Marine Environmental Research Infrastructure for Data Integration and Application Network (MERIDIAN) [emphasis mine], has launched the first-ever dedicated website focused on fish and their sounds: FishSounds.net. …

According to Cox, “This data is absolutely critical to our efforts. Without it, we were having a one-sided conversation about how noise impacts marine life. Now we can better understand the contributions fish make to soundscapes and examine which species may be most impacted by noise pollution.” Cox, an avid scuba diver, remembers his first dive when the distinct sound of parrotfish eating coral resonated over the reef, “It’s thrilling to know we are now archiving vital ecological information and making it freely available to the public, I feel like my younger self would be very proud of this effort.” …

There’s also a March 2, 2022 University of Florida news release on EurekAlert about FishSounds which adds more details about the work (Note: Links have been removed),

Cows moo. Wolves howl. Birds tweet. And fish, it turns out, make all sorts of ruckus.

“People are often surprised to learn that fish make sounds,” said Audrey Looby, a doctoral candidate at the University of Florida. “But you could make the case that they are as important for understanding fish as bird sounds are for studying birds.”

The sounds of many animals are well documented. Go online, and you’ll find plenty of resources for bird calls and whale songs. However, a global library for fish sounds used to be unheard of.

That’s why Looby, University of Victoria collaborator Kieran Cox and an international team of researchers created FishSounds.net, the first online, interactive fish sounds repository of its kind.

“There’s no standard system yet for naming fish sounds, so our project uses the sound names researchers have come up with,” Looby said. “And who doesn’t love a fish that boops?”

The library’s creators hope to add a feature that will allow people to submit their own fish sound recordings. Other interactive features, such as a world map with clickable fish sound data points, are also in the works.

Fish make sound in many ways. Some, like the toadfish, have evolved organs or other structures in their bodies that produce what scientists call active sounds. Other fish produce incidental or passive sounds, like chewing or splashing, but even passive sounds can still convey information.

Scientists think fish evolved to make sound because sound is an effective way to communicate underwater. Sound travels faster under water than it does through air, and in low visibility settings, it ensures the message still reaches an audience.

“Fish sounds contain a lot of important information,” said Looby, who is pursuing a doctorate in fisheries and aquatic sciences at the UF/IFAS College of Agricultural and Life Sciences. “Fish may communicate about territory, predators, food and reproduction. And when we can match fish sounds to fish species, their sounds are a kind of calling card that can tell us what kinds of fish are in an area and what they are doing.”

Knowing the location and movements of fish species is critical for environmental monitoring, fisheries management and conservation efforts. In the future, marine, estuarine or freshwater ecologists could use hydrophones — special underwater microphones — to gather data on fish species’ whereabouts. But first, they will need to be able to identify which fish they are hearing, and that’s where the fish sounds database can assist.
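As a toy version of that "match the sound to a species" step, one could compare a mystery clip's average spectrum against reference spectra from a library and pick the closest. The sketch below is illustrative only; the reference files are hypothetical, all clips are assumed to share a sample rate, and real identification systems use far richer features and models.

# Toy species matcher: nearest reference spectrum by cosine similarity.
# Hypothetical file names; not how FishSounds.net or any real system works.
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

def mean_spectrum(path: str, nperseg: int = 2048) -> np.ndarray:
    rate, samples = wavfile.read(path)
    if samples.ndim > 1:
        samples = samples.mean(axis=1)
    _, psd = welch(samples, fs=rate, nperseg=nperseg)
    return psd / np.linalg.norm(psd)

def best_match(clip_spectrum: np.ndarray, library: dict) -> str:
    # library maps species name -> normalized reference spectrum of the same length
    return max(library, key=lambda name: float(clip_spectrum @ library[name]))

# library = {"oyster toadfish": mean_spectrum("toadfish_ref.wav")}   # hypothetical reference
# print(best_match(mean_spectrum("mystery_clip.wav"), library))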

FishSounds.net emerged from the research team’s efforts to gather and review the existing scientific literature on fish sounds. An article synthesizing that literature has just been published in Reviews in Fish Biology and Fisheries.

In the article, the researchers reviewed scientific reports of fish sounds going back almost 150 years. They found that a little under a thousand fish species are known to make active sounds, and several hundred species were studied for their passive sounds. However, these are probably both underestimates, Cox explained.

Here’s a link to and a citation for the paper,

A quantitative inventory of global soniferous fish diversity by Audrey Looby, Kieran Cox, Santiago Bravo, Rodney Rountree, Francis Juanes, Laura K. Reynolds & Charles W. Martin. Reviews in Fish Biology and Fisheries (2022) DOI: https://doi.org/10.1007/s11160-022-09702-1 Published 18 February 2022

This paper is behind a paywall.

Finally, there’s GLUBS. A comprehensive February 27, 2022 Rockefeller University news release on EurekAlert announces a proposal for the Global Library of Underwater Biological Sounds (GLUBS) (Note 1: Links have been removed; Note 2: If you’re interested in the topic, I recommend reading the original February 27, 2022 Rockefeller University news release, with its numerous embedded images, audio files, and links to marine audio libraries),

Of the roughly 250,000 known marine species, scientists think all ~126 marine mammals emit sounds – the ‘thwop’, ‘muah’, and ‘boop’s of a humpback whale, for example, or the boing of a minke whale. Audible too are at least 100 invertebrates, 1,000 of the world’s 34,000 known fish species, and likely many thousands more.

Now a team of 17 experts from nine countries has set a goal [emphasis mine] of gathering on a single platform huge collections of aquatic life’s tell-tale sounds, and expanding it using new enabling technologies – from highly sophisticated ocean hydrophones and artificial intelligence learning systems to phone apps and underwater GoPros used by citizen scientists.

The Global Library of Underwater Biological Sounds, “GLUBS,” will underpin a novel non-invasive, affordable way for scientists to listen in on life in marine, brackish and freshwaters, monitor its changing diversity, distribution and abundance, and identify new species. Using the acoustic properties of underwater soundscapes can also characterize an ecosystem’s type and condition.

“A database of unidentified sounds is, in some ways, as important as one for known sources,” the scientists say. “As the field progresses, new unidentified sounds will be collected, and more unidentified sounds can be matched to species.”

This can be “particularly important for high-biodiversity systems such as coral reefs, where even a short recording can pick up multiple animal sounds.”

Existing libraries of undersea sounds (several of which are listed with hyperlinks below) “often focus on species of interest that are targeted by the host institute’s researchers,” the paper says, and several are nationally-focussed. Few libraries identify what is missing from their catalogs, which the proposed global library would.

“A global reference library of underwater biological sounds would increase the ability for more researchers in more locations to broaden the number of species assessed within their datasets and to identify sounds they personally do not recognize,” the paper says.

The scientists note that listening to the sea has revealed great whales swimming in unexpected places, new species and new sounds.

With sound, “biologically important areas can be mapped; spawning grounds, essential fish habitat, and migration pathways can be delineated…These and other questions can be queried on broader scales if we have a global catalog of sounds.”

Meanwhile, comparing sounds from a single species across broad areas and times helps understand their diversity and evolution.

Numerous marine animals are cosmopolitan, the paper says, “either as wide-roaming individuals, such as the great whales, or as broadly distributed species, such as many fishes.”

Fin whale calls, for example, can differ among populations in the Northern and Southern hemispheres, and over seasons, whereas the calls of pilot whales are similar worldwide, even though their home ranges do not (or no longer) cross the equator.

Some fishes even seem to develop geographic ‘dialects’ or completely different signal structures among regions, several of which evolve over time.

Madagascar’s skunk anemonefish … , for example, produces different agonistic (fight-related) sounds than those in Indonesia, while differences in the song of humpback whales have been observed across ocean basins.

Phone apps, underwater GoPros and citizen science

Much like BirdNet and FrogID, a library of underwater biological sounds and automated detection algorithms would be useful not only for the scientific, industry and marine management communities but also for users with a general interest.

“Acoustic technology has reached the stage where a hydrophone can be connected to a mobile phone so people can listen to fishes and whales in the rivers and seas around them. Therefore, sound libraries are becoming invaluable to citizen scientists and the general public,” the paper adds.

And citizen scientists could be of great help to the library by uploading the results of, for example, the River Listening app (www.riverlistening.com), which encourages the public to listen to and record fish sounds in rivers and coastal waters.

Low-cost hydrophones and recording systems (such as the Hydromoth) are increasingly available and waterproof recreational recording systems (such as GoPros) can also collect underwater biological sounds.

Here’s a link to and a citation for the paper,

Sounding the Call for a Global Library of Underwater Biological Sounds by Miles J. G. Parsons, Tzu-Hao Lin, T. Aran Mooney, Christine Erbe, Francis Juanes, Marc Lammers, Songhai Li, Simon Linke, Audrey Looby, Sophie L. Nedelec, Ilse Van Opzeeland, Craig Radford, Aaron N. Rice, Laela Sayigh, Jenni Stanley, Edward Urban and Lucia Di Iorio. Front. Ecol. Evol., 08 February 2022 DOI: https://doi.org/10.3389/fevo.2022.810156 Published: 08 February 2022.

This paper appears to be open access.