Category Archives: Music

Sonifying the protein folding process

A sonification and animation of a state machine based on a simple lattice model used by Martin Gruebele to teach concepts of protein-folding dynamics. First posted January 25, 2022 on YouTube.

A February 17, 2022 news item on ScienceDaily announces the work featured in the animation above,

Musicians are helping scientists analyze data, teach protein folding and make new discoveries through sound.

A team of researchers at the University of Illinois Urbana-Champaign is using sonification — the use of sound to convey information — to depict biochemical processes and better understand how they happen.

Music professor and composer Stephen Andrew Taylor; chemistry professor and biophysicist Martin Gruebele; and Illinois music and computer science alumna, composer and software designer Carla Scaletti formed the Biophysics Sonification Group, which has been meeting weekly on Zoom since the beginning of the pandemic. The group has experimented with using sonification in Gruebele’s research into the physical mechanisms of protein folding, and its work recently allowed Gruebele to make a new discovery about the ways a protein can fold.

A February 17, 2022 University of Illinois at Urbana-Champaign news release (also on EurekAlert), which originated the news item, describes how the group sonifies and animates the protein folding process (Note: Links have been removed),

Taylor’s musical compositions have long been influenced by science, and recent works represent scientific data and biological processes. Gruebele also is a musician who built his own pipe organ that he plays and uses to compose music. The idea of working together on sonification struck a chord with them, and they’ve been collaborating for several years. Through her company, Symbolic Sound Corp., Scaletti develops a digital audio software and hardware sound design system called Kyma that is used by many musicians and researchers, including Taylor.

Scaletti created an animated visualization paired with sound that illustrated a simplified protein-folding process, and Gruebele and Taylor used it to introduce key concepts of the process to students and gauge whether it helped with their understanding. They found that sonification complemented and reinforced the visualizations and that, even for experts, it helped increase intuition for how proteins fold and misfold over time. The Biophysics Sonification Group – which also includes chemistry professor Taras Pogorelov, former chemistry graduate student (now alumna) Meredith Rickard, composer and pipe organist Franz Danksagmüller of the Lübeck Academy of Music in Germany, and Illinois electrical and computer engineering alumnus Kurt Hebel of Symbolic Sound – described using sonification in teaching in the Journal of Chemical Education.

Gruebele and his research team use supercomputers to run simulations of proteins folding into a specific structure, a process that relies on a complex pattern of many interactions. The simulation reveals the multiple pathways the proteins take as they fold, and also shows when they misfold or get stuck in the wrong shape – something thought to be related to a number of diseases such as Alzheimer’s and Parkinson’s.

The researchers use the simulation data to gain insight into the process. Nearly all data analysis is done visually, Gruebele said, but massive amounts of data generated by the computer simulations – representing hundreds of thousands of variables and millions of moments in time – can be very difficult to visualize.

“In digital audio, everything is a stream of numbers, so actually it’s quite natural to take a stream of numbers and listen to it as if it’s a digital recording,” Scaletti said. “You can hear things that you wouldn’t see if you looked at a list of numbers and you also wouldn’t see if you looked at an animation. There’s so much going on that there could be something that’s hidden, but you could bring it out with sound.”
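Scaletti’s point is easy to make concrete: any series of numbers can be normalized and played back as a digital waveform. Here’s a minimal Python sketch of that idea (my illustration; the group works in Kyma, not Python),

```python
# Minimal audification sketch: treat a stream of simulation numbers as raw
# audio samples and write them to a WAV file.
import numpy as np
from scipy.io import wavfile

def audify(values, sample_rate=44100, out_path="stream.wav"):
    x = np.array(values, dtype=np.float64)
    x -= x.mean()                 # remove any DC offset
    peak = np.max(np.abs(x))
    if peak > 0:
        x /= peak                 # normalize to the [-1, 1] range
    wavfile.write(out_path, sample_rate, (x * 32767).astype(np.int16))

# One second of stand-in "simulation output": a 440 Hz sine wave.
audify(np.sin(2 * np.pi * 440 * np.linspace(0, 1, 44100)))
```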

For example, when the protein folds, it is surrounded by water molecules that are critical to the process. Gruebele said he wants to know when a water molecule touches and solvates a protein, but “there are 50,000 water molecules moving around, and only one or two are doing a critical thing. It’s impossible to see.” However, if a splashy sound occurred every time a water molecule touched a specific amino acid, that would be easy to hear.

Taylor and Scaletti use various audio-mapping techniques to link aspects of proteins to sound parameters such as pitch, timbre, loudness and pan position. For example, Taylor’s work uses different pitches and instruments to represent each unique amino acid, as well as their hydrophobic or hydrophilic qualities.
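To give a rough sense of what such a mapping can look like, here’s a hedged Python sketch that assigns each of the 20 amino acids its own pitch and pans hydrophobic residues to one side; the specific pitch and pan choices are mine, not Taylor’s,

```python
# Hypothetical parameter mapping: one pitch per amino acid, hydrophobic
# residues panned left, hydrophilic right. Writes a stereo WAV file.
import numpy as np
from scipy.io import wavfile

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard one-letter codes
HYDROPHOBIC = set("AVILMFWC")          # a rough, common grouping
# Chromatic scale upward from A3 (220 Hz), one semitone per amino acid.
PITCH = {aa: 220.0 * 2 ** (i / 12) for i, aa in enumerate(AMINO_ACIDS)}

def sonify_sequence(seq, note_s=0.25, rate=44100, out_path="protein.wav"):
    frames = []
    for aa in seq:
        t = np.linspace(0, note_s, int(rate * note_s), endpoint=False)
        tone = 0.5 * np.sin(2 * np.pi * PITCH[aa] * t)
        left = tone * (1.0 if aa in HYDROPHOBIC else 0.3)    # pan by polarity
        right = tone * (0.3 if aa in HYDROPHOBIC else 1.0)
        frames.append(np.stack([left, right], axis=1))
    audio = np.concatenate(frames)
    wavfile.write(out_path, rate, (audio * 32767).astype(np.int16))

sonify_sequence("MKTAYIAKQR")   # a made-up ten-residue sequence
```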

“I’ve been trying to draw on our instinctive responses to sound as much as possible,” Taylor said. “Beethoven said, ‘The deeper the stream, the deeper the tone.’ We expect an elephant to make a low sound because it’s big, and we expect a sparrow to make a high sound because it’s small. Certain kinds of mappings are built into us. As much as possible, we can take advantage of those and that helps to communicate more effectively.”

The highly developed instincts of musicians help in creating the best tools for conveying information through sound, Taylor said.

“It’s a new way of showing how music and sound can help us understand the world. Musicians have an important role to play,” he said. “It’s helped me become a better musician, in thinking about sound in different ways and thinking how sound can link to the world in different ways, even the world of the very small.”

Here’s a link to and a citation for the paper,

Sonification-Enhanced Lattice Model Animations for Teaching the Protein Folding Reaction by Carla Scaletti, Meredith M. Rickard, Kurt J. Hebel, Taras V. Pogorelov, Stephen A. Taylor, and Martin Gruebele. J. Chem. Educ. 2022, XXXX, XXX, XXX-XXX DOI: https://doi.org/10.1021/acs.jchemed.1c00857 Publication Date: February 16, 2022 © 2022 American Chemical Society and Division of Chemical Education, Inc.

This paper is behind a paywall.

For more about sonification and proteins, there’s my March 31, 2022 posting, Classical music makes protein songs easier listening.

Classical music makes protein songs easier listening

Caption: This audio is oxytocin receptor protein music using the Fantasy Impromptu guided algorithm. Credit: Chen et al. / Heliyon

A September 29, 2021 news item on ScienceDaily describes new research into music as a means of communicating science,

In recent years, scientists have created music based on the structure of proteins as a creative way to better popularize science to the general public, but the resulting songs haven’t always been pleasant to the ear. In a study appearing September 29 [2021] in the journal Heliyon, researchers use the style of existing music genres to guide the structure of protein song to make it more musical. Using the style of Frédéric Chopin’s Fantaisie-Impromptu and other classical pieces as a guide, the researchers succeeded in converting proteins into song with greater musicality.

Scientists (Peng Zhang, Postdoctoral Researcher in Computational Biology at The Rockefeller University, and Yuzong Chen, Professor of Pharmacy at National University of Singapore [NUS]) wrote a September 29, 2021 essay for The Conversation about their protein songs (Note: Links have been removed),

There are many surprising analogies between proteins, the basic building blocks of life, and musical notation. These analogies can be used not only to help advance research, but also to make the complexity of proteins accessible to the public.

We’re computational biologists who believe that hearing the sound of life at the molecular level could help inspire people to learn more about biology and the computational sciences. While creating music based on proteins isn’t new, different musical styles and composition algorithms had yet to be explored. So we led a team of high school students and other scholars to figure out how to create classical music from proteins.

The musical analogies of proteins

Proteins are structured like folded chains. These chains are composed of small units of 20 possible amino acids, each labeled by a letter of the alphabet.

A protein chain can be represented as a string of these alphabetic letters, very much like a string of music notes in alphabetical notation.

Protein chains can also fold into wavy and curved patterns with ups, downs, turns and loops. Likewise, music consists of sound waves of higher and lower pitches, with changing tempos and repeating motifs.

Protein-to-music algorithms can thus map the structural and physiochemical features of a string of amino acids onto the musical features of a string of notes.

Enhancing the musicality of protein mapping

Protein-to-music mapping can be fine-tuned by basing it on the features of a specific music style. This enhances musicality, or the melodiousness of the song, when converting amino acid properties, such as sequence patterns and variations, into analogous musical properties, like pitch, note lengths and chords.

For our study, we specifically selected 19th-century Romantic period classical piano music, which includes composers like Chopin and Schubert, as a guide because it typically spans a wide range of notes with more complex features such as chromaticism, like playing both white and black keys on a piano in order of pitch, and chords. Music from this period also tends to have lighter and more graceful and emotive melodies. Songs are usually homophonic, meaning they follow a central melody with accompaniment. These features allowed us to test out a greater range of notes in our protein-to-music mapping algorithm. In this case, we chose to analyze features of Chopin’s “Fantaisie-Impromptu” to guide our development of the program.

If you have the time, I recommend reading the essay in its entirety and listening to the embedded audio files.

The September 29, 2021 Cell Press news release on EurekAlert repeats some of the same material but is worth reading on its own merits,

In recent years, scientists have created music based on the structure of proteins as a creative way to better popularize science to the general public, but the resulting songs haven’t always been pleasant to the ear. In a study appearing September 29 [2021] in the journal Heliyon, researchers use the style of existing music genres to guide the structure of protein song to make it more musical. Using the style of Frédéric Chopin’s Fantaisie-Impromptu and other classical pieces as a guide, the researchers succeeded in converting proteins into song with greater musicality.

Creating unique melodies from proteins is achieved by using a protein-to-music algorithm. This algorithm incorporates specific elements of proteins—like the size and position of amino acids—and maps them to various musical elements to create an auditory “blueprint” of the proteins’ structure.

“Existing protein music has mostly been designed by simple mapping of certain amino acid patterns to fundamental musical features such as pitches and note lengths, but they do not map well to more complex musical features such as rhythm and harmony,” says senior author Yu Zong Chen, a professor in the Department of Pharmacy at National University of Singapore. “By focusing on a music style, we can guide more complex mappings of combinations of amino acid patterns with various musical features.”

For their experiment, researchers analyzed the pitch, length, octaves, chords, dynamics, and main theme of four pieces from the mid-1800s Romantic era of classical music. These pieces, including Fantasie-Impromptu from Chopin and Wanderer Fantasy from Franz Schubert, were selected to represent the notable Fantasy-Impromptu genre that emerged during that time.

“We chose the specific music style of a Fantasy-Impromptu as it is characterized by freedom of expression, which we felt would complement how proteins regulate much of our bodily functions, including our moods,” says co-author Peng Zhang (@zhangpeng1202), a post-doctoral fellow at the Rockefeller University.

Likewise, several of the proteins in the study were chosen for their similarities to the key attributes of the Fantasy-Impromptu style. Most of the 18 proteins tested regulate functions including human emotion, cognition, sensation, or performance, which the authors say connect to the emotional and expressive nature of the genre.

Then, they mapped 104 structural, physicochemical, and binding amino acid properties of those proteins to the six musical features. “We screened the quantitative profile of each amino acid property against the quantized values of the different musical features to find the optimal mapped pairings. For example, we mapped the size of amino acid to note length, so that having a larger amino acid size corresponds to a shorter note length,” says Chen.
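Here’s a small Python sketch of the kind of inverse mapping Chen describes; the residue volumes are approximate literature values and the four-level quantization is my own illustrative choice,

```python
# Inverse size-to-duration mapping: bigger residue -> shorter note.
# Volumes are approximate values in cubic angstroms; only a few shown.
RESIDUE_VOLUME = {"G": 60, "A": 89, "S": 89, "V": 140,
                  "L": 167, "F": 190, "W": 228}

NOTE_LENGTHS = [1.0, 0.5, 0.25, 0.125]   # whole, half, quarter, eighth

def note_length(aa):
    vol = RESIDUE_VOLUME[aa]
    bin_index = min(3, (vol - 60) // 45)  # quantize volume into four bins
    return NOTE_LENGTHS[bin_index]

print(note_length("G"))   # 1.0   small glycine, long note
print(note_length("W"))   # 0.125 bulky tryptophan, short note
```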

Across all the proteins tested, the researchers found that the musicality of the proteins was significantly improved. In particular, the protein receptor for oxytocin (OXTR) was judged to have one of the greatest increases in musicality when using the genre-guided algorithm, compared to an earlier version of the protein-to-music algorithm.

“The oxytocin receptor protein generated our favorite song,” says Zhang. “This protein sequence produced an identifiable main theme that repeats in rhythm throughout the piece, as well as some interesting motifs and patterns that recur independent of our algorithm. There were also some pleasant harmonic progressions; for example, many of the seventh chords naturally resolve.”

The authors do note, however, that while the guided algorithm increased the overall musicality of the protein songs, there is still much progress to be made before it resembles true human music.

“We believe a next step is to explore more music styles and more complex combinations of amino acid properties for enhanced musicality and novel music pieces. Another next step, a very important step, is to apply artificial intelligence to jointly learn complex amino acid properties and their combinations with respect to the features of various music styles for creating protein music of enhanced musicality,” says Chen.


Research supported by the National Key R&D Program of China, the National Natural Science Foundation of China, and Singapore Academic Funds.

Here’s a link to and a citation for the paper,

Protein Music of Enhanced Musicality by Music Style Guided Exploration of Diverse Amino Acid Properties by Nicole WanNi Tay, Fanxi Liu, Chaoxin Wang, Hui Zhang, Peng Zhang, Yu Zong Chen. Heliyon, 2021 DOI: https://doi.org/10.1016/j.heliyon.2021.e07933 Published: September 29, 2021

This paper appears to be open access.

Ever heard a bird singing and wondered what kind of bird?

The Cornell University Lab of Ornithology’s sound recognition feature in its Merlin birding app(lication) can answer that question for you according to a July 14, 2021 article by Steven Melendez for Fast Company (Note: Links have been removed),

The lab recently upgraded its Merlin smartphone app, designed for both new and experienced birdwatchers. It now features an AI-infused “Sound ID” feature that can capture bird sounds and compare them to crowdsourced samples to figure out just what bird is making that sound. … people have used it to identify more than 1 million birds. New user counts are also up 58% since the two weeks before launch, and up 44% over the same period last year, according to Drew Weber, Merlin’s project coordinator.

Even when it’s listening to bird sounds, the app still relies on recent advances in image recognition, says project research engineer Grant Van Horn. …, it actually transforms the sound into a visual graph called a spectrogram, similar to what you might see in an audio editing program. Then, it analyzes that spectrogram to look for similarities to known bird calls, which come from the Cornell Lab’s eBird citizen science project.

There’s more detail about Merlin in Marc Devokaitis’ June 23, 2021 article for the Cornell Chronicle,

… Merlin can recognize the sounds of more than 400 species from the U.S. and Canada, with that number set to expand rapidly in future updates.

As Merlin listens, it uses artificial intelligence (AI) technology to identify each species, displaying in real time a list and photos of the birds that are singing or calling.

Automatic song ID has been a dream for decades, but analyzing sound has always been extremely difficult. The breakthrough came when researchers, including Merlin lead researcher Grant Van Horn, began treating the sounds as images and applying new and powerful image classification algorithms like the ones that already power Merlin’s Photo ID feature.

“Each sound recording a user makes gets converted from a waveform to a spectrogram – a way to visualize the amplitude [volume], frequency [pitch] and duration of the sound,” Van Horn said. “So just like Merlin can identify a picture of a bird, it can now use this picture of a bird’s sound to make an ID.”
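The waveform-to-spectrogram step Van Horn describes can be sketched in a few lines of Python with scipy; Merlin’s production pipeline is, of course, far more sophisticated,

```python
# Waveform in, spectrogram out: the resulting 2-D array can be treated
# like an image by a classifier.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, audio = wavfile.read("bird_recording.wav")   # hypothetical input file
if audio.ndim > 1:
    audio = audio.mean(axis=1)                     # mix stereo down to mono

freqs, times, sxx = spectrogram(audio, fs=rate, nperseg=1024, noverlap=768)
log_sxx = 10 * np.log10(sxx + 1e-10)               # dB scale, as audio editors show it

print(log_sxx.shape)   # (frequency bins, time frames) -- a picture of the sound
```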

Merlin’s pioneering approach to sound identification is powered by tens of thousands of citizen scientists who contributed their bird observations and sound recordings to eBird, the Cornell Lab’s global database.

“Thousands of sound recordings train Merlin to recognize each bird species, and more than a billion bird observations in eBird tell Merlin which birds are likely to be present at a particular place and time,” said Drew Weber, Merlin project coordinator. “Having this incredibly robust bird dataset – and feeding that into faster and more powerful machine-learning tools – enables Merlin to identify birds by sound now, when doing so seemed like a daunting challenge just a few years ago.”

The Merlin Bird ID app with the new Sound ID feature is available for free on iOS and Android devices. Click here to download the Merlin Bird ID app and follow the prompts. If you already have Merlin installed on your phone, tap “Get Sound ID.”

Do take a look at Devokaitis’ June 23, 2021 article for more about how the Merlin app provides four ways to identify birds.

For anyone who likes to listen to the news, there’s an August 26, 2021 podcast (The Warblers by Birds Canada) featuring Drew Weber, Merlin project coordinator, and Jody Allair, Birds Canada Director of Community Engagement, discussing Merlin,

It’s a dream come true – there’s finally an app for identifying bird sounds. In the next episode of The Warblers podcast, we’ll explore the Merlin Bird ID app’s new Sound ID feature and how artificial intelligence is redefining birding. We talk with Drew Weber and Jody Allair and go deep into the implications and opportunities that this technology will bring for birds, and new as well as experienced birders.

The Warblers is hosted by Andrea Gress and Andrés Jiménez.

Sounds of Central African Landscapes: a Cornell (University) Elephant Listening Project

This September 13, 2021 news item about sound recordings taken in a rainforest (on phys.org) is downright fascinating,

More than a million hours of sound recordings are available from the Elephant Listening Project (ELP) in the K. Lisa Yang Center for Conservation Bioacoustics at the Cornell Lab of Ornithology—a rainforest residing in the cloud.

ELP researchers, in collaboration with the Wildlife Conservation Society, use remote recording units to capture the entire soundscape of a Congolese rainforest. Their targets are vocalizations from endangered African forest elephants, but they also capture tropical parrots shrieking, chimps chattering and rainfall spattering on leaves to the beat of grumbling thunder.

For someone who suffers from acrophobia (fear of heights), this is a disturbing picture (how tall is that tree? is the rope reinforced? who or what is holding him up? where is the photographer perched?),

Frelcia Bambi is a member of the Congolese team that deploys sound recorders in the rainforest and analyzes the data. Photo by Sebastien Assoignons, courtesy of the Wildlife Conservation Society.

A September 13, 2021 Cornell University (NY state, US) news release by Pat Leonard, which originated the news item, provides more details about the sounds themselves and the Elephant Listening Project,

“Scientists can use these soundscapes to monitor biodiversity,” said ELP director Peter Wrege. “You could measure overall sound levels before, during and after logging operations, for example. Or hone in on certain frequencies where insects may vocalize. Sound is increasingly being used as a conservation tool, especially for establishing the presence or absence of a species.”

For the past four years, 50 tree-mounted recording units have been collecting data continuously, covering a region that encompasses old logging sites, recent logging sites and part of the Nouabalé-Ndoki National Park in the Republic of the Congo. The sensors sometimes capture the booming guns of poachers, alerting rangers who then head out to track down the illegal activity.

But everyday nature lovers can tune in rainforest sounds, too.

“We’ve had requests to use some of the files for meditation or for yoga,” Wrege said. “It is very soothing to listen to rainforest sounds—you hear the sounds of insects, birds, frogs, chimps, wind and rain all blended together.”

But, as Wrege and others have learned, big data can also be a big problem. The Sounds of Central African Landscapes recordings would gobble up nearly 100 terabytes of computer space, and ELP takes in another eight terabytes every four months. But now, Amazon Web Services is storing the jungle sounds for free under its Open Data Sponsorship Program, which preserves valuable scientific data for public use.

This makes it possible for Wrege to share the jungle sounds and easier for users to analyze them with Amazon tools so they don’t have to move the massive files or try to download them.

Searching for individual species amid the wealth of data is a bit more daunting. ELP uses computer algorithms to search through the recordings for elephant sounds. Wrege has created a detector for the sounds of gorillas beating their chests. There are software platforms that help users create detectors for specific sounds, including Raven Pro 1.6, created by the Cornell Lab’s bioacoustics engineers. Wrege says the next iteration, Raven 2.0, will make this process even easier.
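To make the idea of a sound detector concrete, here’s a toy Python sketch that flags moments when energy in a low frequency band (forest elephant rumbles sit at very low frequencies) rises well above a recording’s baseline; ELP’s and Raven’s real detectors are far more robust,

```python
# Toy detector: flag times when energy in a low band stands out from baseline.
import numpy as np
from scipy.signal import spectrogram

def detect_low_band(audio, rate, band=(10, 100), threshold_db=10):
    freqs, times, sxx = spectrogram(audio, fs=rate, nperseg=4096)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    band_db = 10 * np.log10(sxx[mask].mean(axis=0) + 1e-12)  # band energy per frame
    baseline = np.median(band_db)                            # typical background level
    return times[band_db > baseline + threshold_db]          # moments that stand out
```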

Wrege is also eyeing future educational uses for the recordings, which he says could help train in-country biologists to not only collect the data but also do the analyses. This is gradually happening now in the Republic of the Congo—ELP’s team of Congolese researchers does all the analysis for gunshot detection, though the elephant analyses are still done at ELP.

“We could use these recordings for internships and student training in Congo and other countries where we work, such as Gabon,” Wrege said. “We can excite young people about conservation in Central Africa. It would be a huge benefit to everyone living there.”

To listen or download clips from Sounds of the Central African Landscape, go to ELP’s data page on Amazon Web Services. You’ll need to create an account with AWS (choose the free option). Then sign in with your username and password. Click on the “recordings” item in the list you see, then “wav/” on the next page. From there you can click on any item in the list to play or download clips that are each 1.3 GB and 24 hours long.
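For anyone who wants programmatic access, the browsing steps above map onto a few boto3 calls; note that the bucket name and prefix below are placeholders I’ve made up, so check ELP’s AWS Open Data listing for the real values,

```python
# Listing and fetching clips with boto3 instead of the web console.
# The bucket name and prefix are made-up placeholders.
import boto3

s3 = boto3.client("s3")
resp = s3.list_objects_v2(
    Bucket="example-elp-congo-soundscapes",   # placeholder, not the real bucket
    Prefix="recordings/wav/",                 # placeholder prefix
    MaxKeys=20,
)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])

# Then, for a single clip:
# s3.download_file("example-elp-congo-soundscapes", "recordings/wav/clip.wav", "clip.wav")
```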

Scientists looking to use sounds for research and analysis should start here.

Wildlife Conservation Society Forest Elephant Congo [downloaded from https://congo.wcs.org/Wildlife/Forest-Elephant.aspx]

What follows may be a little cynical but I can’t help noticing that this worthwhile and fascinating project will result in more personal and/or professional data for Amazon since you have to sign up even if all you’re doing is reading or listening to a few files that they’ve made available for the general public. In a sense, Amazon gets ‘paid’ when you give up an email address to them. Plus, Amazon gets to look like a good world citizen.

Let’s hope something greater than one company’s reputation as a world citizen comes out of this.

Fishes ‘talk’ and ‘sing’

This posting started out with two items and then, it became more. If you’re interested in marine bioacoustics especially the work that’s been announced in the last four months, read on.

Fish songs

This item, about how fish sounds (songs) signify successful coral reef restoration, got coverage from the BBC (British Broadcasting Corporation), the CBC (Canadian Broadcasting Corporation) and elsewhere. This video is courtesy of the Guardian newspaper,

Whoops and grunts: ‘bizarre’ fish songs raise hopes for coral reef recovery https://www.theguardian.com/environme…

A December 8, 2021 University of Exeter press release (also on EurekAlert) explains why the sounds give hope (Note: Links have been removed),

Newly discovered fish songs demonstrate reef restoration success

Whoops, croaks, growls, raspberries and foghorns are among the sounds that demonstrate the success of a coral reef restoration project.

Thousands of square metres of coral are being grown on previously destroyed reefs in Indonesia, but previously it was unclear whether these new corals would revive the entire reef ecosystem.

Now a new study, led by researchers from the University of Exeter and the University of Bristol, finds a healthy, diverse soundscape on the restored reefs.

These sounds – many of which have never been recorded before – can be used alongside visual observations to monitor these vital ecosystems.

“Restoration projects can be successful at growing coral, but that’s only part of the ecosystem,” said lead author Dr Tim Lamont, of the University of Exeter and the Mars Coral Reef Restoration Project, which is restoring the reefs in central Indonesia.

“This study provides exciting evidence that restoration really works for the other reef creatures too – by listening to the reefs, we’ve documented the return of a diverse range of animals.”

Professor Steve Simpson, from the University of Bristol, added: “Some of the sounds we recorded are really bizarre, and new to us as scientists.  

“We have a lot still to learn about what they all mean and the animals that are making them. But for now, it’s amazing to be able to hear the ecosystem come back to life.”

The soundscapes of the restored reefs are not identical to those of existing healthy reefs – but the diversity of sounds is similar, suggesting a healthy and functioning ecosystem.

There were significantly more fish sounds recorded on both healthy and restored reefs than on degraded reefs.
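One simple way to put a number on the ‘diversity of sounds’ in a recording is spectral entropy, sketched below in Python; this is my illustrative stand-in, not the acoustic index used in the study,

```python
# Spectral entropy: how evenly a recording's energy is spread across frequencies.
import numpy as np
from scipy.signal import welch

def spectral_entropy(audio, rate):
    freqs, psd = welch(audio, fs=rate, nperseg=4096)
    p = psd / (psd.sum() + 1e-20)    # normalize the spectrum to a probability mass
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum() / np.log2(len(psd)))  # scaled to 0..1

# Higher values mean energy spread across more frequencies -- a busier, more
# diverse soundscape; a narrow, quiet spectrum scores low.
```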

This study used acoustic recordings taken in 2018 and 2019 as part of the monitoring programme for the Mars Coral Reef Restoration Project.

The results are positive for the project’s approach, in which hexagonal metal frames called ‘Reef Stars’ are seeded with coral and laid over a large area. The Reef Stars stabilise loose rubble and kickstart rapid coral growth, leading to the revival of the wider ecosystem.  

Mochyudho Prasetya, of the Mars Coral Reef Restoration Project, said: “We have been restoring and monitoring these reefs here in Indonesia for many years. Now it is amazing to see more and more evidence that our work is helping the reefs come back to life.”

Professor David Smith, Chief Marine Scientist for Mars Incorporated, added: “When the soundscape comes back like this, the reef has a better chance of becoming self-sustaining because those sounds attract more animals that maintain and diversify reef populations.”

Asked about the multiple threats facing coral reefs, including climate change and water pollution, Dr Lamont said: “If we don’t address these wider problems, conditions for reefs will get more and more hostile, and eventually restoration will become impossible.

“Our study shows that reef restoration can really work, but it’s only part of a solution that must also include rapid action on climate change and other threats to reefs worldwide.”

The study was partly funded by the Natural Environment Research Council and the Swiss National Science Foundation.

Here’s a link to and a citation for the paper,

The sound of recovery: Coral reef restoration success is detectable in the soundscape by Timothy A. C. Lamont, Ben Williams, Lucille Chapuis, Mochyudho E. Prasetya, Marie J. Seraphim, Harry R. Harding, Eleanor B. May, Noel Janetski, Jamaluddin Jompa, David J. Smith, Andrew N. Radford, Stephen D. Simpson. Journal of Applied Ecology DOI: https://doi.org/10.1111/1365-2664.14089 First published: 07 December 2021

This paper is open access.

You can find the MARS Coral Reef Restoration Project here.

Fish talk

There is one item here. This research from Cornell University also features the sounds fish make. Given all the attention being paid to sound, it’s no surprise that the Cornell Lab of Ornithology is involved; although birds are the lab’s main focus, it gathers many other animal sounds too.

A January 27, 2022 Cornell University news release (also on EurekAlert) describes ‘fish talk’,

There’s a whole lot of talking going on beneath the waves. A new study from Cornell University finds that fish are far more likely to communicate with sound than generally thought—and some fish have been doing this for at least 155 million years. These findings were just published in the journal Ichthyology & Herpetology.

“We’ve known for a long time that some fish make sounds,” said lead author Aaron Rice, a researcher at the K. Lisa Yang Center for Conservation Bioacoustics at the Cornell Lab of Ornithology [emphasis mine]. “But fish sounds were always perceived as rare oddities. We wanted to know if these were one-offs or if there was a broader pattern for acoustic communication in fishes.”

The authors looked at a branch of fishes called the ray-finned fishes. These are vertebrates (having a backbone) that comprise 99% of the world’s known species of fishes. They found 175 families that contain two-thirds of fish species that do, or are likely to, communicate with sound. By examining the fish family tree, study authors found that sound was so important, it evolved at least 33 separate times over millions of years.

“Thanks to decades of basic research on the evolutionary relationships of fishes, we can now explore many questions about how different functions and behaviors evolved in the approximately 35,000 known species of fishes,” said co-author William E. Bemis ’76, Cornell professor of ecology and evolutionary biology in the College of Agriculture and Life Sciences. “We’re getting away from a strictly human-centric way of thinking. What we learn could give us some insight on the drivers of sound communication and how it continues to evolve.”

The scientists used three sources of information: existing recordings and scientific papers describing fish sounds; the known anatomy of a fish—whether they have the right tools for making sounds, such as certain bones, an air bladder, and sound-specific muscles; and references in 19th century literature before underwater microphones were invented.
 
“Sound communication is often overlooked within fishes, yet they make up more than half of all living vertebrate species,” said Andrew Bass, co-lead author and the Horace White Professor of Neurobiology and Behavior in the College of Arts and Sciences. “They’ve probably been overlooked because fishes are not easily heard or seen, and the science of underwater acoustic communication has primarily focused on whales and dolphins. But fishes have voices, too!”
 
Listen:

Oyster Toadfish, William Tavolga, Macaulay Library

Longspine squirrelfish, Howard Winn, Macaulay Library

Banded drum, Donald Batz, Macaulay Library

Midshipman, Andrew Bass, Macaulay Library

What are the fish talking about? Pretty much the same things we all talk about—sex and food. Rice says the fish are either trying to attract a mate, defend a food source or territory, or let others know where they are. Even some of the common names for fish are based on the sounds they make, such as grunts, croakers, hog fish, squeaking catfish, trumpeters, and many more.
 
Rice intends to keep tracking the discovery of sound in fish species and add them to his growing database (see supplemental material, Table S1)—a project he began 20 years ago with study co-authors Ingrid Kaatz ’85, MS ’92, and Philip Lobel, a professor of biology at Boston University. Their collaboration has continued and expanded since Rice came to Cornell.
 
“This introduces sound communication to so many more groups than we ever thought,” said Rice. “Fish do everything. They breathe air, they fly, they eat anything and everything—at this point, nothing would surprise me about fishes and the sounds that they can make.”

The research was partly funded by the National Science Foundation, the U.S. Bureau of Ocean Energy Management, the Tontogany Creek Fund, and the Cornell Lab of Ornithology.

I’ve embedded one of the audio files, Oyster Toadfish (William Tavolga) here,

Here’s a link to and a citation for the paper,

Evolutionary Patterns in Sound Production across Fishes by Aaron N. Rice, Stacy C. Farina, Andrea J. Makowski, Ingrid M. Kaatz, Phillip S. Lobel, William E. Bemis, Andrew H. Bass. Ichthyology & Herpetology, 110(1): 1-12 (2022) DOI: https://doi.org/10.1643/i2020172 Published: 20 January 2022

This paper is open access.

Marine sound libraries

Thanks to Aly Laube’s March 2, 2022 article on DailyHive.com, I learned of Kieran Cox’s work at the University of Victoria and FishSounds (Note: Links have been removed),

Fish have conversations and a group of researchers made a website to document them. 

It’s so much fun to peruse and probably the good news you need. Listen to a Bocon toadfish “boop” or this sablefish tick, which is slightly creepier, but still pretty cool. This streaked gurnard can growl, and this grumpy Atlantic cod can grunt.

The technical term for “fishy conversations” is “marine bioacoustics,” which is what Kieran Cox specializes in. They can be used to track, monitor, and learn more about aquatic wildlife.

The doctor of marine biology at the University of Victoria co-authored an article about fish sounds in Reviews in Fish Biology and Fisheries called “A Quantitative Inventory of Global Soniferous Fish Diversity.”

It presents findings from his process, helping create FishSounds.net. He and his team looked at over 3,000 documents from 834 studies to put together the library of 989 fish species.

A March 2, 2022 University of Victoria news release provides more information about the work and the research team (Note: Links have been removed),

Fascinating soundscapes exist beneath rivers, lakes and oceans. An unexpected sound source is fish, which make their own unique and entertaining noises, from guttural grunts to high-pitched squeals. Underwater noise is a vital part of marine ecosystems and, thanks to almost 150 years of researchers documenting those sounds, we know hundreds of fish species contribute their distinctive sounds. Although fish are the largest and most diverse group of sound-producing vertebrates in water, there was no record of which fish species make sound and the sounds they produce. For the very first time, there is now a digital place where that data can be freely accessed or contributed to: an online repository, a global inventory of fish sounds.

Kieran Cox co-authored the published article about fish sounds and their value in Reviews in Fish Biology and Fisheries while completing his Ph.D in marine biology at the University of Victoria. Cox recently began a Liber Ero post-doctoral collaboration with Francis Juanes that aims to integrate marine bioacoustics into the conservation of Canada’s oceans. Liber Ero program is devoted to promoting applied and evidence-based conservation in Canada.

The international group of researchers, which includes UVic, the University of Florida, Universidade de São Paulo, and the Marine Environmental Research Infrastructure for Data Integration and Application Network (MERIDIAN) [emphasis mine], has launched the first ever dedicated website focused on fish and their sounds: FishSounds.net. …

According to Cox, “This data is absolutely critical to our efforts. Without it, we were having a one-sided conversation about how noise impacts marine life. Now we can better understand the contributions fish make to soundscapes and examine which species may be most impacted by noise pollution.” Cox, an avid scuba diver, remembers his first dive when the distinct sound of parrotfish eating coral resonated over the reef, “It’s thrilling to know we are now archiving vital ecological information and making it freely available to the public, I feel like my younger self would be very proud of this effort.” …

There’s also a March 2, 2022 University of Florida news release on EurekAlert about FishSounds which adds more details about the work (Note: Links have been removed),

Cows moo. Wolves howl. Birds tweet. And fish, it turns out, make all sorts of ruckus.

“People are often surprised to learn that fish make sounds,” said Audrey Looby, a doctoral candidate at the University of Florida. “But you could make the case that they are as important for understanding fish as bird sounds are for studying birds.”

The sounds of many animals are well documented. Go online, and you’ll find plenty of resources for bird calls and whale songs. However, a global library for fish sounds used to be unheard of.

That’s why Looby, University of Victoria collaborator Kieran Cox and an international team of researchers created FishSounds.net, the first online, interactive fish sounds repository of its kind.

“There’s no standard system yet for naming fish sounds, so our project uses the sound names researchers have come up with,” Looby said. “And who doesn’t love a fish that boops?”

The library’s creators hope to add a feature that will allow people to submit their own fish sound recordings. Other interactive features, such as a world map with clickable fish sound data points, are also in the works.

Fish make sound in many ways. Some, like the toadfish, have evolved organs or other structures in their bodies that produce what scientists call active sounds. Other fish produce incidental or passive sounds, like chewing or splashing, but even passive sounds can still convey information.

Scientists think fish evolved to make sound because sound is an effective way to communicate underwater. Sound travels faster under water than it does through air, and in low visibility settings, it ensures the message still reaches an audience.

“Fish sounds contain a lot of important information,” said Looby, who is pursuing a doctorate in fisheries and aquatic sciences at the UF/IFAS College of Agricultural and Life Sciences. “Fish may communicate about territory, predators, food and reproduction. And when we can match fish sounds to fish species, their sounds are a kind of calling card that can tell us what kinds of fish are in an area and what they are doing.”

Knowing the location and movements of fish species is critical for environmental monitoring, fisheries management and conservation efforts. In the future, marine, estuarine or freshwater ecologists could use hydrophones — special underwater microphones — to gather data on fish species’ whereabouts. But first, they will need to be able to identify which fish they are hearing, and that’s where the fish sounds database can assist.

FishSounds.net emerged from the research team’s efforts to gather and review the existing scientific literature on fish sounds. An article synthesizing that literature has just been published in Reviews in Fish Biology and Fisheries.

In the article, the researchers reviewed scientific reports of fish sounds going back almost 150 years. They found that a little under a thousand fish species are known to make active sounds, and several hundred species were studied for their passive sounds. However, these are probably both underestimates, Cox explained.

Here’s a link to and a citation for the paper,

A quantitative inventory of global soniferous fish diversity by Audrey Looby, Kieran Cox, Santiago Bravo, Rodney Rountree, Francis Juanes, Laura K. Reynolds & Charles W. Martin. Reviews in Fish Biology and Fisheries (2022) DOI: https://doi.org/10.1007/s11160-022-09702-1 Published: 18 February 2022

This paper is behind a paywall.

Finally, there’s GLUBS. A comprehensive February 27, 2022 Rockefeller University news release on EurekAlert announces a proposal for the Global Library of Underwater Biological Sounds (GLUBS). Note 1: Links have been removed; Note 2: If you’re interested in the topic, I recommend reading the original February 27, 2022 Rockefeller University news release with its numerous embedded images, audio files, and links to marine audio libraries,

Of the roughly 250,000 known marine species, scientists think all ~126 marine mammals emit sounds – the ‘thwop’, ‘muah’, and ‘boop’s of a humpback whale, for example, or the boing of a minke whale. Audible too are at least 100 invertebrates, 1,000 of the world’s 34,000 known fish species, and likely many thousands more.

Now a team of 17 experts from nine countries has set a goal [emphasis mine] of gathering on a single platform huge collections of aquatic life’s tell-tale sounds, and expanding it using new enabling technologies – from highly sophisticated ocean hydrophones and artificial intelligence learning systems to phone apps and underwater GoPros used by citizen scientists.

The Global Library of Underwater Biological Sounds, “GLUBS,” will underpin a novel non-invasive, affordable way for scientists to listen in on life in marine, brackish and freshwaters, monitor its changing diversity, distribution and abundance, and identify new species. Using the acoustic properties of underwater soundscapes can also characterize an ecosystem’s type and condition.

“A database of unidentified sounds is, in some ways, as important as one for known sources,” the scientists say. “As the field progresses, new unidentified sounds will be collected, and more unidentified sounds can be matched to species.”

This can be “particularly important for high-biodiversity systems such as coral reefs, where even a short recording can pick up multiple animal sounds.”

Existing libraries of undersea sounds (several of which are listed with hyperlinks below) “often focus on species of interest that are targeted by the host institute’s researchers,” the paper says, and several are nationally-focussed. Few libraries identify what is missing from their catalogs, which the proposed global library would.

“A global reference library of underwater biological sounds would increase the ability for more researchers in more locations to broaden the number of species assessed within their datasets and to identify sounds they personally do not recognize,” the paper says.

The scientists note that listening to the sea has revealed great whales swimming in unexpected places, new species and new sounds.

With sound, “biologically important areas can be mapped; spawning grounds, essential fish habitat, and migration pathways can be delineated…These and other questions can be queried on broader scales if we have a global catalog of sounds.”

Meanwhile, comparing sounds from a single species across broad areas and times helps understand their diversity and evolution.

Numerous marine animals are cosmopolitan, the paper says, “either as wide-roaming individuals, such as the great whales, or as broadly distributed species, such as many fishes.”

Fin whale calls, for example, can differ among populations in the Northern and Southern hemispheres, and over seasons, whereas the calls of pilot whales are similar worldwide, even though their home ranges do not (or no longer) cross the equator.

Some fishes even seem to develop geographic ‘dialects’ or completely different signal structures among regions, several of which evolve over time.

Madagascar’s skunk anemonefish … , for example, produces different agonistic (fight-related) sounds than those in Indonesia, while differences in the song of humpback whales have been observed across ocean basins.

Phone apps, underwater GoPros and citizen science

Much like BirdNet and FrogID, a library of underwater biological sounds and automated detection algorithms would be useful not only for the scientific, industry and marine management communities but also for users with a general interest.

“Acoustic technology has reached the stage where a hydrophone can be connected to a mobile phone so people can listen to fishes and whales in the rivers and seas around them. Therefore, sound libraries are becoming invaluable to citizen scientists and the general public,” the paper adds.

And citizen scientists could be of great help to the library by uploading the results of, for example, the River Listening app (www.riverlistening.com), which encourages the public to listen to and record fish sounds in rivers and coastal waters.

Low-cost hydrophones and recording systems (such as the Hydromoth) are increasingly available and waterproof recreational recording systems (such as GoPros) can also collect underwater biological sounds.

Here’s a link to and a citation for the paper,

Sounding the Call for a Global Library of Underwater Biological Sounds by Miles J. G. Parsons, Tzu-Hao Lin, T. Aran Mooney, Christine Erbe, Francis Juanes, Marc Lammers, Songhai Li, Simon Linke, Audrey Looby, Sophie L. Nedelec, Ilse Van Opzeeland, Craig Radford, Aaron N. Rice, Laela Sayigh, Jenni Stanley, Edward Urban and Lucia Di Iorio. Front. Ecol. Evol., 08 February 2022 DOI: https://doi.org/10.3389/fevo.2022.810156 Published: 08 February 2022.

This paper appears to be open access.

Tree music

Hidden Life Radio livestreams music generated from trees (their biodata, that is). Kristin Toussaint in her August 3, 2021 article for Fast Company describes the ‘radio station’, Note: Links have been removed,

Outside of a library in Cambridge, Massachusetts, an over-80-year-old copper beech tree is making music.

As the tree photosynthesizes and absorbs and evaporates water, a solar-powered sensor attached to a leaf measures the micro voltage of all that invisible activity. Sound designer and musician Skooby Laposky assigned a key and note range to those changes in this electric activity, turning the tree’s everyday biological processes into an ethereal song.

That music is available on Hidden Life Radio, an art project by Laposky, with assistance from the Cambridge Department of Public Works Urban Forestry, and funded in part by a grant from the Cambridge Arts Council. Hidden Life Radio also features the musical sounds of two other Cambridge trees: a honey locust and a red oak, both located outside of other Cambridge library branches. The sensors on these trees are solar-powered biodata sonification kits, a technology that has allowed people to turn all sorts of plant activity into music.
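The biodata-to-music idea can be sketched as a simple quantizer: changes in the leaf’s micro voltage get snapped onto the notes of a fixed scale. The pentatonic scale, thresholds and voltage range below are my own assumptions; Laposky’s actual mapping isn’t spelled out in the article,

```python
# Quantize micro-voltage changes onto a C pentatonic scale (MIDI note numbers).
C_PENTATONIC = [60, 62, 64, 67, 69, 72, 74, 76]

def voltages_to_notes(readings, sensitivity=0.005):
    notes = []
    for prev, cur in zip(readings, readings[1:]):
        delta = cur - prev
        if abs(delta) < sensitivity:
            continue                          # too little activity: silence
        # map deltas in roughly [-0.05, +0.05] volts onto the scale, clamped
        idx = int(min(max((delta + 0.05) / 0.1, 0), 0.999) * len(C_PENTATONIC))
        notes.append(C_PENTATONIC[idx])
    return notes

print(voltages_to_notes([0.01, 0.02, 0.021, 0.05, 0.00]))   # e.g. [69, 74, 60]
```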

… Laposky has created a musical voice for these disappearing trees, and he hopes people tune into Hidden Life Radio and spend time listening to them over time. The music they produce occurs in real time, affected by the weather and whatever the tree is currently doing. Some days they might be silent, especially when there’s been several days without rain, and they’re dehydrated; Laposky is working on adding an archive that includes weather information, so people can go back and hear what the trees sound like on different days, under different conditions. The radio will play 24 hours a day until November, when the leaves will drop—a “natural cycle for the project to end,” Laposky says, “when there aren’t any leaves to connect to anymore.”

The 2021 season is over but you can find an archive of Hidden Life Radio livestreams here. Or, if you happen to be reading this page sometime after January 2022, you can try your luck and click here at Hidden Life Radio livestreams but remember, even if the project has started up again, the tree may not be making music when you check in. So, if you don’t hear anything the first time, try again.

Want to create your own biodata sonification project?

Toussaint’s article sent me on a search for more and I found a website where you can get biodata sonification kits. Sam Cusumano’s electricity for progress website offers lessons, as well as, kits and more.

Sophie Haigney’s February 21, 2020 article for NPR ([US] National Public Radio) highlights other plant music and more ways to tune in to and create it. (h/t Kristin Toussaint)

Jean-Pierre Luminet awarded UNESCO’s Kalinga prize for Popularizing Science

Before getting to the news about Jean-Pierre Luminet, astrophysicist, poet, sculptor, and more, there’s the prize itself.

Established in 1951, a scant six years after UNESCO (United Nations Educational, Scientific and Cultural Organization) was founded in 1945, the Kalinga Prize for the Popularization of Science is the organization’s oldest prize. Here’s more from the UNESCO Kalinga Prize for the Popularization of Science webpage,

The UNESCO Kalinga Prize for the Popularization of Science is an international award to reward exceptional contributions made by individuals in communicating science to society and promoting the popularization of science. It is awarded to persons who have had a distinguished career as writer, editor, lecturer, radio, television, or web programme director, or film producer in helping interpret science, research and technology to the public. UNESCO Kalinga Prize winners know the potential power of science, technology, and research in improving public welfare, enriching the cultural heritage of nations and providing solutions to societal problems on the local, regional and global level.

The UNESCO Kalinga Prize for the Popularization of Science is UNESCO’s oldest prize, created in 1951 following a donation from Mr Bijoyanand Patnaik, Founder and President of the Kalinga Foundation Trust in India. Today, the Prize is funded by the Kalinga Foundation Trust, the Government of the State of Orissa, India, and the Government of India (Department of Science and Technology).

Jean-Pierre Luminet

From the November 4, 2021 UNESCO press release (also received via email),

French scientist and author Jean-Pierre Luminet will be awarded the 2021 UNESCO Kalinga Prize for the Popularization of Science. The prize-giving ceremony will take place online on 5 November as part of the celebration of World Science Day for Peace and Development.

An independent international jury selected Jean-Pierre Luminet recognizing his longstanding commitment to the popularization of science. Mr Luminet is a distinguished astrophysicist and cosmologist who has been promoting the values of scientific research through a wide variety of media: he has created popular science books and novels, beautifully illustrated exhibition catalogues, poetry, audiovisual materials for children and documentaries, notably “Du Big Bang au vivant” with Hubert Reeves. He is also an artist, engraver and sculptor and has collaborated with composers on musicals inspired by the sounds of the Universe.

His publications are model examples for communicating science to the public. Their scientific content is precise, rigorous and always state-of-the-art. He has written seven “scientific novels”, including “Le Secret de Copernic”, published in 2006. His recent book “Le destin de l’univers : trous noirs et énergie sombre”, about black holes and dark energy, was written for the general public and was praised for its outstanding scientific, historical, and literary qualities. Jean-Pierre Luminet’s work has been translated into many languages, including Chinese and Korean.

There is a page for Luminet in both the French language and English language wikipedias. If you have the language skills, you might want to check out the French language essay as I found it to be more stylishly written.

Compare,

De par ses activités de poète, essayiste, romancier et scénariste, dans une œuvre voulant lier science, histoire, musique et art, il est également Officier des Arts et des Lettres.

With,

… Luminet has written fifteen science books,[4] seven historical novels,[4] TV documentaries,[5] and six poetry collections. He is an artist, an engraver, a sculptor, and a musician.

My rough translation of the French,

As a poet, essayist, novelist, and screenwriter, in a body of work that brings together science, history, music and art, he is truly someone who has enriched the French cultural inheritance (which is what it means to be an Officer of Arts and Letters or Officier des Arts et des Lettres; see the English language entry for Ordre des Arts et des Lettres).

In any event, congratulations to M. Luminet.

Finishing Beethoven’s unfinished 10th Symphony

Caption: Throughout the project, Beethoven’s genius loomed. Credit: Circe Denyer

This is an artificial intelligence (AI) story set to music. Professor Ahmed Elgammal (Director of the Art & AI Lab at Rutgers University located in New Jersey, US) has a September 24, 2021 essay posted on The Conversation (and, then, in the Smithsonian Magazine online) describing the AI project and upcoming album release and performance (Note: A link has been removed),

When Ludwig van Beethoven died in 1827, he was three years removed from the completion of his Ninth Symphony, a work heralded by many as his magnum opus. He had started work on his 10th Symphony but, due to deteriorating health, wasn’t able to make much headway: All he left behind were some musical sketches.

A full recording of Beethoven’s 10th Symphony is set to be released on Oct. 9, 2021, the same day as the world premiere performance scheduled to take place in Bonn, Germany – the culmination of a two-year-plus effort.

These excerpts from Elgammal’s September 24, 2021 essay on The Conversation provide a summarized view of events. By the way, this isn’t the first time an attempt has been made to finish Beethoven’s 10th Symphony (Note: Links have been removed),

Around 1817, the Royal Philharmonic Society in London commissioned Beethoven to write his Ninth and 10th symphonies. Written for an orchestra, symphonies often contain four movements: the first is performed at a fast tempo, the second at a slower one, the third at a medium or fast tempo, and the last at a fast tempo.

Beethoven completed his Ninth Symphony in 1824, which concludes with the timeless “Ode to Joy.”

But when it came to the 10th Symphony, Beethoven didn’t leave much behind, other than some musical notes and a handful of ideas he had jotted down.

There have been some past attempts to reconstruct parts of Beethoven’s 10th Symphony. Most famously, in 1988, musicologist Barry Cooper ventured to complete the first and second movements. He wove together 250 bars of music from the sketches to create what was, in his view, a production of the first movement that was faithful to Beethoven’s vision.

Yet the sparseness of Beethoven’s sketches made it impossible for symphony experts to go beyond that first movement.

In early 2019, Dr. Matthias Röder, the director of the Karajan Institute, an organization in Salzburg, Austria, that promotes music technology, contacted me. He explained that he was putting together a team to complete Beethoven’s 10th Symphony in celebration of the composer’s 250th birthday. Aware of my work on AI-generated art, he wanted to know if AI would be able to help fill in the blanks left by Beethoven.

Röder then compiled a team that included Austrian composer Walter Werzowa. Famous for writing Intel’s signature bong jingle, Werzowa was tasked with putting together a new kind of composition that would integrate what Beethoven left behind with what the AI would generate. Mark Gotham, a computational music expert, led the effort to transcribe Beethoven’s sketches and process his entire body of work so the AI could be properly trained.

The team also included Robert Levin, a musicologist at Harvard University who also happens to be an incredible pianist. Levin had previously finished a number of incomplete 18th-century works by Mozart and Johann Sebastian Bach.

… We didn’t have a machine that we could feed sketches to, push a button and have it spit out a symphony. Most AI available at the time couldn’t continue an uncompleted piece of music beyond a few additional seconds.

We would need to push the boundaries of what creative AI could do by teaching the machine Beethoven’s creative process – how he would take a few bars of music and painstakingly develop them into stirring symphonies, quartets and sonatas.

Here’s Elgammal’s description of the difficulties from an AI perspective, from the September 24, 2021 essay (Note: Links have been removed),

First, and most fundamentally, we needed to figure out how to take a short phrase, or even just a motif, and use it to develop a longer, more complicated musical structure, just as Beethoven would have done. For example, the machine had to learn how Beethoven constructed the Fifth Symphony out of a basic four-note motif.

Next, because the continuation of a phrase also needs to follow a certain musical form, whether it’s a scherzo, trio or fugue, the AI needed to learn Beethoven’s process for developing these forms.

The to-do list grew: We had to teach the AI how to take a melodic line and harmonize it. The AI needed to learn how to bridge two sections of music together. And we realized the AI had to be able to compose a coda, which is a segment that brings a section of a piece of music to its conclusion.

Finally, once we had a full composition, the AI was going to have to figure out how to orchestrate it, which involves assigning different instruments for different parts.

And it had to pull off these tasks in the way Beethoven might do so.
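The team’s actual models aren’t published in enough detail to reproduce here, so purely to make the first of those tasks concrete, here’s the simplest possible stand-in for ‘continue a motif’: a first-order Markov chain over pitch transitions, in Python. It captures the idea of learning note-to-note transitions from examples, with none of the sophistication the project needed,

```python
# First-order Markov chain over pitches: learn transitions, then extend a motif.
import random
from collections import defaultdict

def train(melodies):
    table = defaultdict(list)
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            table[a].append(b)               # record every observed transition
    return table

def continue_motif(table, motif, length=8, seed=0):
    rng = random.Random(seed)
    out = list(motif)
    while len(out) < len(motif) + length:
        options = table.get(out[-1]) or [motif[0]]   # dead end: restart at the motif
        out.append(rng.choice(options))
    return out

# MIDI pitches: the Fifth Symphony's four-note motif plus a made-up phrase.
table = train([[67, 67, 67, 63], [65, 65, 65, 62, 63, 65, 67]])
print(continue_motif(table, [67, 67, 67, 63]))
```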

The team tested its work, from the September 24, 2021 essay, Note: A link has been removed,

In November 2019, the team met in person again – this time, in Bonn, at the Beethoven House Museum, where the composer was born and raised.

This meeting was the litmus test for determining whether AI could complete this project. We printed musical scores that had been developed by AI and built off the sketches from Beethoven’s 10th. A pianist performed in a small concert hall in the museum before a group of journalists, music scholars and Beethoven experts.

We challenged the audience to determine where Beethoven’s phrases ended and where the AI extrapolation began. They couldn’t.

A few days later, one of these AI-generated scores was played by a string quartet in a news conference. Only those who intimately knew Beethoven’s sketches for the 10th Symphony could determine when the AI-generated parts came in.

The success of these tests told us we were on the right track. But these were just a couple of minutes of music. There was still much more work to do.

There is a preview of the finished 10th symphony,

Beethoven X: The AI Project: III Scherzo. Allegro – Trio (Official Video) | Beethoven Orchestra Bonn

Modern Recordings / BMG present as a foretaste of the album “Beethoven X – The AI Project” (release: 8.10.) the edit of the 3rd movement “Scherzo. Allegro – Trio” as a classical music video. Listen now: https://lnk.to/BeethovenX-Scherzo

Album pre-order link: https://lnk.to/BeethovenX

The Beethoven Orchestra Bonn performing with Dirk Kaftan and Walter Werzowa a great recording of world-premiere Beethoven pieces. Developed by AI and music scientists as well as composers, Beethoven’s once unfinished 10th symphony now surprises with beautiful Beethoven-like harmonics and dynamics.

For anyone who’d like to hear the October 9, 2021 performance, Sharon Kelly included some details in her August 16, 2021 article for DiscoverMusic,

The world premiere of Beethoven’s 10th Symphony on 9 October 2021 at the Telekom Forum in Bonn, performed by the Beethoven Orchestra Bonn conducted by Dirk Kaftan, will be broadcast live and free of charge on MagentaMusik 360.

Sadly, the time is not listed but MagentaMusik 360 is fairly easy to find online.

You can find out more about Professor Elgammal on his Rutgers University profile page. Elgammal has graced this blog before in an August 16, 2019 posting “AI (artificial intelligence) artist got a show at a New York City art gallery“. He’s mentioned in an excerpt about 20% of the way down the page,

Ahmed Elgammal thinks AI art can be much more than that. A Rutgers University professor of computer science, Elgammal runs an art-and-artificial-intelligence lab, where he and his colleagues develop technologies that try to understand and generate new “art” (the scare quotes are Elgammal’s) with AI—not just credible copies of existing work, like GANs do. “That’s not art, that’s just repainting,” Elgammal says of GAN-made images. “It’s what a bad artist would do.”

Elgammal calls his approach a “creative adversarial network,” or CAN. It swaps a GAN’s discerner—the part that ensures similarity—for one that introduces novelty instead. The system amounts to a theory of how art evolves: through small alterations to a known style that produce a new one. That’s a convenient take, given that any machine-learning technique has to base its work on a specific training set.

Finally, thank you to @winsontang whose tweet led me to this story.