
Listening to a protein fold itself

This May 20, 2024 news item on ScienceDaily announces new work on protein folding that can be heard,

By converting their data into sounds, scientists discovered how hydrogen bonds contribute to the lightning-fast gyrations that transform a string of amino acids into a functional, folded protein. Their report, in the Proceedings of the National Academy of Sciences [PNAS], offers an unprecedented view of the sequence of hydrogen-bonding events that occur when a protein morphs from an unfolded to a folded state.

This video (courtesy of the University of Illinois at Urbana-Champaign) is “A sonification and animation of a state machine based on a simple lattice model used by Martin Gruebele to teach concepts of protein-folding dynamics.” Note: It is also embedded in the April 1, 2022 posting, “Sonifying the protein folding process.”

The latest work is in a May 20, 2024 University of Illinois at Urbana-Champaign news release (also on EurekAlert) by Diana Yates, which originated the news item. It provides more information about the researchers’ work and about the use of data sonification, Note: Links have been removed,

“A protein must fold properly to become an enzyme or signaling molecule or whatever its function may be — all the many things that proteins do in our bodies,” said University of Illinois Urbana-Champaign chemistry professor Martin Gruebele, who led the new research with composer and software developer Carla Scaletti.  

Misfolded proteins contribute to Alzheimer’s disease, Parkinson’s disease, cystic fibrosis and other disorders. To better understand how this process goes awry, scientists must first determine how a string of amino acids shape-shifts into its final form in the watery environment of the cell. The actual transformations occur very fast, “somewhere between 70 nanoseconds and two microseconds,” Gruebele said.

Hydrogen bonds are relatively weak attractions that align atoms located on different amino acids in the protein. A folding protein will form a series of hydrogen bonds internally and with the water molecules that surround it. In the process, the protein wiggles into countless potential intermediate conformations, sometimes hitting a dead-end and backtracking until it stumbles onto a different path.

See video: Protein Sonification: Hairpin in a trap

The researchers wanted to map the time sequence of hydrogen bonds that occur as the protein folds. But their visualizations could not capture these complex events.

“There are literally tens of thousands of these interactions with water molecules during the short passage between the unfolded and folded state,” Gruebele said.

So the researchers turned to data sonification, a method for converting their molecular data into sounds so that they could “hear” the hydrogen bonds forming. To accomplish this, Scaletti wrote a software program that assigned each hydrogen bond a unique pitch. Molecular simulations generated the essential data, showing where and when two atoms were in the right position in space — and close enough to one another — to hydrogen bond. If the correct conditions for bonding occurred, the software program played a pitch corresponding to that bond. Altogether, the program tracked hundreds of thousands of individual hydrogen-bonding events in sequence.
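To make that mapping concrete, here is a minimal sketch, in Python, of the kind of bookkeeping the news release describes: each donor-acceptor pair gets its own pitch, and a note is emitted whenever a simulation frame satisfies the bonding conditions. The event format, distance and angle cutoffs, and pitch spacing are assumptions made purely for illustration; this is not the researchers’ software.

```python
# Illustrative sketch only: assign each hydrogen-bond donor-acceptor pair a
# unique pitch and emit a note whenever a simulation frame meets (assumed)
# geometric criteria for a hydrogen bond.
from dataclasses import dataclass

MAX_DONOR_ACCEPTOR_DISTANCE = 3.5   # angstroms (assumed cutoff)
MIN_DONOR_H_ACCEPTOR_ANGLE = 150.0  # degrees (assumed cutoff)

@dataclass
class BondEvent:
    time_ns: float   # simulation time of the frame
    donor: str       # donor atom label, e.g. "ASP12:N" (hypothetical format)
    acceptor: str    # acceptor atom label, e.g. "SER45:O"
    distance: float  # donor-acceptor distance in angstroms
    angle: float     # donor-H-acceptor angle in degrees

_pair_pitch_index: dict[tuple[str, str], int] = {}

def pitch_for_pair(donor: str, acceptor: str, base_hz: float = 220.0) -> float:
    """Give each donor-acceptor pair its own pitch, spaced one semitone apart."""
    idx = _pair_pitch_index.setdefault((donor, acceptor), len(_pair_pitch_index))
    return base_hz * 2 ** (idx / 12)

def sonify(events: list[BondEvent]) -> list[tuple[float, float]]:
    """Return (time_ns, frequency_hz) notes for every event that meets the bonding criteria."""
    notes = []
    for e in events:
        if e.distance <= MAX_DONOR_ACCEPTOR_DISTANCE and e.angle >= MIN_DONOR_H_ACCEPTOR_ANGLE:
            notes.append((e.time_ns, pitch_for_pair(e.donor, e.acceptor)))
    return notes
```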

See video: Using sound to explore hydrogen bond dynamics during protein folding [embedded just above this excerpt]

Numerous studies suggest that audio is processed roughly twice as fast as visual data in the human brain, and humans are better able to detect and remember subtle differences in a sequence of sounds than if the same sequence is represented visually, Scaletti said.

“In our auditory system, we’re really very attuned to small differences in frequency,” she said. “We use frequencies and combinations of frequencies to understand speech, for example.”

A protein spends most of its time in the folded state, so the researchers also came up with a “rarity” function to identify when the rare, fleeting moments of folding or unfolding took place.
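The paper defines its own rarity measure; purely as an illustration of the idea, one could score each simulation frame by how infrequently its hydrogen-bond pattern occurs across the whole trajectory, so that the brief folding and unfolding passages stand out from the long stretches spent in the folded state. The sketch below is a hypothetical reading, not the function used in the study.

```python
# Hypothetical "rarity" score: frames whose hydrogen-bond pattern appears
# infrequently across the trajectory score high, flagging the brief
# folding/unfolding passages. Illustration only, not the paper's function.
from collections import Counter

def rarity_scores(bond_patterns: list[frozenset[str]]) -> list[float]:
    """bond_patterns[i] is the set of hydrogen bonds present in simulation frame i."""
    counts = Counter(bond_patterns)
    total = len(bond_patterns)
    # Common (folded-state) patterns score near 1; rare transition patterns score much higher.
    return [total / counts[pattern] for pattern in bond_patterns]
```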

The resulting sounds gave them insight into the process, revealing how some hydrogen bonds seem to speed up folding while others appear to slow it. They characterized these transitions, calling the fastest “highway,” the slowest “meander,” and the intermediate ones “ambiguous.”

Including the water molecules in the simulations and hydrogen-bonding analysis was essential to understanding the process, Gruebele said.

“Half of the energy from a protein-folding reaction comes from the water and not from the protein,” he said. “We really learned by doing sonification how water molecules settle into the right place on the protein and how they help the protein conformation change so that it finally becomes folded.”

While hydrogen bonds are not the only factor contributing to protein folding, these bonds often stabilize a transition from one folded state to another, Gruebele said. Other hydrogen bonds may temporarily impede proper folding. For example, a protein may get hung up in a repeating loop that involves one or more hydrogen bonds forming, breaking and forming again — until the protein eventually escapes from this cul de sac to continue its journey to its most stable folded state.

“Unlike the visualization, which looks like a total random mess, you actually hear patterns when you listen to this,” Gruebele said. “This is the stuff that was impossible to visualize but it’s easy to hear.”

The National Science Foundation, National Institutes of Health and Symbolic Sound Corporation supported this research.

Gruebele also is a professor in the Beckman Institute for Advanced Science and Technology and an affiliate of the Carl R. Woese Institute for Genomic Biology at the U. of I.

Here’s a link to and a citation for the paper,

Hydrogen bonding heterogeneity correlates with protein folding transition state passage time as revealed by data sonification by Carla Scaletti, Premila P. Samuel Russell, Kurt J. Hebel, Meredith M. Rickard, Mayank Boob, Franz Danksagmüller, Stephen A. Taylor, Taras V. Pogorelov, and Martin Gruebele. PNAS 121 (22) e2319094121. DOI: https://doi.org/10.1073/pnas.2319094121. Published: May 20, 2024

This paper is behind a paywall.

Why convert space data into sounds?

The Eagle Nebula (also known as M16 or the Pillars of Creation) was one of the 3 cosmic objects sonified and used in the study. Credit: X-ray: NASA/CXC/INAF/M.Guarcello et al.; Optical: NASA/STScI [downloaded from https://www.frontiersin.org/news/2024/03/25/communication-nasa-scientists-space-data-sounds]

Apparently, it’s all about communication, or so a March 24, 2024 Frontiers news release (also on EurekAlert but published March 25, 2024) by Kim Arcand and Megan Watzke suggests, Note: Links have been removed,

Images from telescopes like the James Webb Space Telescope have expanded the way we see space. But what if you can’t see? Can stars be turned into sounds instead? In this guest editorial, NASA [US National Aeronautics and Space Administration] scientists and science communicators Dr Kimberly Arcand and Megan Watzke explain how and why they and their colleagues transformed telescope data into soundscapes to share space science with the whole world. To learn more, read their new research published in Frontiers in Communication.

When you travel somewhere where they speak a language you can’t understand, it’s usually important to find a way to translate what’s being communicated to you. In some ways, the same can be said about scientific data collected from cosmic objects. A telescope like NASA’s Chandra X-ray Observatory captures X-rays, which are invisible to the human eye, from sources across the cosmos. Similarly, the James Webb Space Telescope captures infrared light, also invisible to the human eye. These different kinds of light are transmitted down to Earth packed up in the form of ones and zeroes. From there, the data are transformed into a variety of formats — from plots to spectra to images.

This last category — images — is arguably what telescopes are best known for. For most of astronomy’s long history, however, most people who are blind or low vision (BLV) have not been able to fully experience the data that these telescopes have captured. NASA’s Universe of Sound data sonification program, with NASA’s Chandra X-ray Observatory and NASA’s Universe of Learning, translates visual data of objects in space into sonified data. All telescopes in space — including Chandra, Webb, the Hubble Space Telescope, and dozens of others — need to send the data they collect back to Earth as binary code, or digital signals. Typically, astronomers and others turn these digital data into images, which are often spectacular and make their way into everything from websites to pillowcases.

The music of the spheres

By taking these data through another step, however, experts on this project mathematically map the information into sound. This data-driven process is not a reimagining of what the telescopes have observed; it is yet another kind of translation. Instead of a translation from French to Mandarin, it’s a translation from visual to sound. Releases from the Universe of Sound sonification project have been immensely popular with non-experts, from viral news stories with over two billion people potentially reached according to press metrics, to triple the usual Chandra.si.edu website traffic.
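The release does not spell out the mapping, but one widely used approach to image sonification is to scan an image from left to right, letting a pixel’s vertical position set its pitch and its brightness set its loudness. The sketch below illustrates that generic idea; the parameters and array layout are assumptions, not the Universe of Sound pipeline.

```python
# Minimal sketch of a common image-to-sound mapping (not necessarily the exact
# mapping used by Universe of Sound): scan the image column by column, let a
# bright pixel's row set its pitch and its brightness set its loudness.
import numpy as np

def sonify_image(image: np.ndarray, low_hz: float = 200.0, high_hz: float = 2000.0,
                 threshold: float = 0.1) -> list[list[tuple[float, float]]]:
    """image: 2D array of brightness values scaled to 0..1, row 0 at the top.
    Returns, for each column (time step), a list of (frequency_hz, amplitude) pairs."""
    n_rows, n_cols = image.shape
    timeline = []
    for col in range(n_cols):                   # left-to-right scan acts as time
        notes = []
        for row in range(n_rows):
            brightness = float(image[row, col])
            if brightness >= threshold:         # only audibly bright pixels
                # Higher rows (smaller indices) map to higher pitches, log-spaced.
                frac = 1.0 - row / max(n_rows - 1, 1)
                freq = low_hz * (high_hz / low_hz) ** frac
                notes.append((freq, brightness))
        timeline.append(notes)
    return timeline
```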

But how are such data sonifications perceived by people, particularly members of the BLV community? How do data sonifications affect participant learning, enjoyment, and exploration of astronomy? Can translating scientific data into sound help enable trust or investment, emotionally or intellectually, in scientific data? Can such sonifications help improve awareness of accessibility needs that others might have?

Listening closely

This study used our sonifications of NASA data for three astronomical objects. We surveyed blind or low-vision (BLV) and sighted individuals to better understand participants’ experiences of the sonifications, including their enjoyment, understanding, and trust of the scientific data. Data analyses from 3,184 sighted and BLV participants yielded significant self-reported learning gains and positive experiential responses.

The results showed that engaging multiple senses with astrophysical data, as these sonifications do, could establish additional avenues of trust, increase access, and promote awareness of accessibility in both sighted and BLV communities. In short, sonifications helped people access and engage with the Universe.

Sonification is an evolving and collaborative field. It is a project not only done for the BLV community, but with BLV partnerships. A new documentary available on NASA’s free streaming platform NASA+ explores how these sonifications are made and the team behind them. The hope is that sonifications can help communicate the scientific discoveries from our Universe with more audiences, and open the door to the cosmos just a little wider for everyone.

Here’s a link to and a citation for the paper,

A Universe of Sound: processing NASA data into sonifications to explore participant response by Kimberly Kowal Arcand, Jessica Sarah Schonhut-Stasik, Sarah G. Kane, Gwynn Sturdevant, Matt Russo, Megan Watzke, Brian Hsu, and Lisa F. Smith. Front. Commun., Volume 9, 13 March 2024. DOI: https://doi.org/10.3389/fcomm.2024.1288896

This paper is open access.