Sonifying proteins to make music and brand new proteins

Markus Buehler at the Massachusetts Institute of Technology (MIT) has been working with music and science for a number of years. My December 9, 2011 posting, Music, math, and spiderwebs, was the first one here featuring his work, and my November 28, 2012 posting, Producing stronger silk musically, was a follow-up to his earlier work.

A June 28, 2019 news item on Azonano provides a recent update,

Composers string notes of different pitch and duration together to create music. Similarly, cells join amino acids with different characteristics together to make proteins.

Now, researchers have bridged these two seemingly disparate processes by translating protein sequences into musical compositions and then using artificial intelligence to convert the sounds into brand-new proteins. …

Caption: Researchers at MIT have developed a system for converting the molecular structures of proteins, the basic building blocks of all living beings, into audible sound that resembles musical passages. Then, reversing the process, they can introduce some variations into the music and convert it back into new proteins never before seen in nature. Credit: Zhao Qin and Francisco Martin-Martinez

A June 26, 2019 American Chemical Society (ACS) news release, which originated the news item, provides more detail and a video,

To make proteins, cellular structures called ribosomes add one of 20 different amino acids to a growing chain in combinations specified by the genetic blueprint. The properties of the amino acids and the complex shapes into which the resulting proteins fold determine how the molecule will work in the body. To better understand a protein’s architecture, and possibly design new ones with desired features, Markus Buehler and colleagues wanted to find a way to translate a protein’s amino acid sequence into music.

The researchers transposed the unique natural vibrational frequencies of each amino acid into sound frequencies that humans can hear. In this way, they generated a scale consisting of 20 unique tones. Unlike musical notes, however, each amino acid tone consisted of the overlay of many different frequencies –– similar to a chord. Buehler and colleagues then translated several proteins into audio compositions, with the duration of each tone specified by the different 3D structures that make up the molecule. Finally, the researchers used artificial intelligence to recognize specific musical patterns that corresponded to certain protein architectures. The computer then generated scores and translated them into new-to-nature proteins. In addition to being a tool for protein design and for investigating disease mutations, the method could be helpful for explaining protein structure to broad audiences, the researchers say. They even developed an Android app [Amino Acid Synthesizer] to allow people to create their own bio-based musical compositions.

Here’s the ACS video,

A June 26, 2019 MIT news release (also on EurekAlert) provides some specifics and includes two embedded audio files,

Want to create a brand new type of protein that might have useful properties? No problem. Just hum a few bars.

In a surprising marriage of science and art, researchers at MIT have developed a system for converting the molecular structures of proteins, the basic building blocks of all living beings, into audible sound that resembles musical passages. Then, reversing the process, they can introduce some variations into the music and convert it back into new proteins never before seen in nature.

Although it’s not quite as simple as humming a new protein into existence, the new system comes close. It provides a systematic way of translating a protein’s sequence of amino acids into a musical sequence, using the physical properties of the molecules to determine the sounds. Although the sounds are transposed in order to bring them within the audible range for humans, the tones and their relationships are based on the actual vibrational frequencies of each amino acid molecule itself, computed using theories from quantum chemistry.

The system was developed by Markus Buehler, the McAfee Professor of Engineering and head of the Department of Civil and Environmental Engineering at MIT, along with postdoc Chi Hua Yu and two others. As described in the journal ACS Nano, the system translates the 20 types of amino acids, the building blocks that join together in chains to form all proteins, into a 20-tone scale. Any protein’s long sequence of amino acids then becomes a sequence of notes.
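As an aside for readers who like to tinker, here's a minimal Python sketch of that mapping idea. The equal-division scale and the example sequence below are my own placeholders, not the paper's actual tones, which are built from each amino acid's quantum-chemically computed vibrational spectrum:

```python
import numpy as np
from scipy.io import wavfile

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard one-letter codes

# Placeholder scale: 20 equal divisions of the octave above A4 (440 Hz).
# In the paper, each amino acid's tone is instead derived from its own
# vibrational spectrum, transposed into the audible range, so each
# "note" is really a chord of overlapping frequencies.
SCALE = {aa: 440.0 * 2 ** (i / 20) for i, aa in enumerate(AMINO_ACIDS)}

def sonify(sequence, tone_seconds=0.3, rate=44100):
    """Render an amino acid sequence as a chain of sine tones."""
    t = np.linspace(0, tone_seconds, int(rate * tone_seconds), endpoint=False)
    tones = [np.sin(2 * np.pi * SCALE[aa] * t) for aa in sequence.upper()]
    return np.concatenate(tones).astype(np.float32)

# Arbitrary example sequence, written out as a WAV file
wavfile.write("protein.wav", 44100, sonify("MKVLSAGICSRT"))
```

In the actual system the duration of each tone also carries information about the protein's 3D structure; here every tone simply gets the same length.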

While such a scale sounds unfamiliar to people accustomed to Western musical traditions, listeners can readily recognize the relationships and differences after familiarizing themselves with the sounds. Buehler says that after listening to the resulting melodies, he is now able to distinguish certain amino acid sequences that correspond to proteins with specific structural functions. “That’s a beta sheet,” he might say, or “that’s an alpha helix.”

Learning the language of proteins

The whole concept, Buehler explains, is to get a better handle on understanding proteins and their vast array of variations. Proteins make up the structural material of skin, bone, and muscle, but are also enzymes, signaling chemicals, molecular switches, and a host of other functional materials that make up the machinery of all living things. But their structures, including the way they fold themselves into the shapes that often determine their functions, are exceedingly complicated. “They have their own language, and we don’t know how it works,” he says. “We don’t know what makes a silk protein a silk protein or what patterns reflect the functions found in an enzyme. We don’t know the code.”

By translating that language into a different form that humans are particularly well-attuned to, and that allows different aspects of the information to be encoded in different dimensions — pitch, volume, and duration — Buehler and his team hope to glean new insights into the relationships and differences between different families of proteins and their variations, and use this as a way of exploring the many possible tweaks and modifications of their structure and function. As with music, the structure of proteins is hierarchical, with different levels of structure at different scales of length or time.

The team then used an artificial intelligence system to study the catalog of melodies produced by a wide variety of different proteins. They had the AI system introduce slight changes in the musical sequence or create completely new sequences, and then translated the sounds back into proteins that correspond to the modified or newly designed versions. With this process they were able to create variations of existing proteins — for example of one found in spider silk, one of nature’s strongest materials — thus making new proteins unlike any produced by evolution.
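The news release doesn't describe the AI's inner workings, but because the note-to-amino-acid mapping runs both ways, even a toy version shows the round trip. In this hedged sketch, random mutation stands in for the actual neural network, and the input is a made-up, spider-silk-like repeat motif:

```python
import random

# The note <-> amino acid mapping is one-to-one, so any edit to the
# melody is automatically an edit to the protein sequence.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_TO_NOTE = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def vary_melody(notes, mutation_rate=0.05, seed=None):
    """Stand-in for the AI: randomly swap a few notes for new ones."""
    rng = random.Random(seed)
    return [rng.randrange(len(AMINO_ACIDS)) if rng.random() < mutation_rate
            else n for n in notes]

# A made-up, glycine/alanine-rich motif loosely resembling spider silk
protein = "GPGGAGQGGYGPGQQGPGGAAAAAAAA"
melody = [AA_TO_NOTE[aa] for aa in protein]
variant = "".join(AMINO_ACIDS[n] for n in vary_melody(melody, seed=1))
print(protein)
print(variant)
```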

Although the researchers themselves may not know the underlying rules, “the AI has learned the language of how proteins are designed,” and it can encode it to create variations of existing versions, or completely new protein designs, Buehler says. Given that there are “trillions and trillions” of potential combinations, he says, when it comes to creating new proteins “you wouldn’t be able to do it from scratch, but that’s what the AI can do.”

“Composing” new proteins

Using such a system, he says, training the AI system with a set of data for a particular class of proteins might take a few days, but it can then produce a design for a new variant within microseconds. “No other method comes close,” he says. “The shortcoming is the model doesn’t tell us what’s really going on inside. We just know it works.”

This way of encoding structure into music does reflect a deeper reality. “When you look at a molecule in a textbook, it’s static,” Buehler says. “But it’s not static at all. It’s moving and vibrating. Every bit of matter is a set of vibrations. And we can use this concept as a way of describing matter.”

The method does not yet allow for any kind of directed modifications — any changes in properties such as mechanical strength, elasticity, or chemical reactivity will be essentially random. “You still need to do the experiment,” he says. When a new protein variant is produced, “there’s no way to predict what it will do.”

The team also created musical compositions developed from the sounds of amino acids, which define this new 20-tone musical scale. The art pieces they constructed consist entirely of the sounds generated from amino acids. “There are no synthetic or natural instruments used, showing how this new source of sounds can be utilized as a creative platform,” Buehler says. Musical motifs derived from both naturally existing proteins and AI-generated proteins are used throughout the examples, and all the sounds, including some that resemble bass or snare drums, are also generated from the sounds of amino acids.

The researchers have created a free Android smartphone app, called Amino Acid Synthesizer, to play the sounds of amino acids and record protein sequences as musical compositions.

Here’s a link to and a citation for the paper,

A Self-Consistent Sonification Method to Translate Amino Acid Sequences into Musical Compositions and Application in Protein Design Using Artificial Intelligence by Chi-Hua Yu, Zhao Qin, Francisco J. Martin-Martinez, and Markus J. Buehler. ACS Nano. DOI: https://doi.org/10.1021/acsnano.9b02180 Publication Date: June 26, 2019 Copyright © 2019 American Chemical Society

This paper is behind a paywall.

ETA October 23, 2019 1000 hours: Oops! I almost forgot the link to the Amino Acid Synthesizer.

What human speech, jazz, and whale song have in common

Seeing connections between what seem to be unrelated activities such as human speech, jazz, and whale song is fascinating to me, and I’m not alone. Scientists at the University of California at Merced (UC Merced) have delivered handily on that premise, according to an Oct. 13, 2017 news item on phys.org,

Jazz musicians riffing with each other, humans talking to each other and pods of killer whales all have interactive conversations that are remarkably similar to each other, new research reveals.

Cognitive science researchers at UC Merced have developed a new method for analyzing and comparing the sounds of speech, music and complex animal vocalizations like whale song and bird song. The paper detailing their findings is being published today [Oct. 12, 2017] in the Journal of the Royal Society Interface.

Their method is based on the idea that these sounds are complex because they have multiple layers of structure. Every language, for instance, has individual sounds, roughly corresponding to letters, that combine to form syllables, words, phrases, sentences and so on. It’s a hierarchy that everyone understands intuitively. Musical compositions have their own temporal hierarchies, but until now there hasn’t been a way to directly compare the hierarchies of speech and music, or test whether similar hierarchies might exist in bird song and whale song.

An Oct. 12, 2017 UC Merced news release by Lorena Anderson, which originated the news item, provides more details about the investigation (Note: Links have been removed),

“Playing jazz music has been likened to a conversation among musicians, and killer whales are highly social creatures who vocalize as if they are talking to each other. But does jazz music really sound like a conversation, and do killer whales really sound like they are talking?” asked lead researcher and UC Merced professor Chris Kello. “We know killer whales are highly social and intelligent, but it’s hard to tell that they are interacting when you listen to recordings of them. Our method shows how much their sound patterns are like people talking, but not like other, less social whales or birds.”

The researchers figured out a way to measure and compare sound recordings by converting them into “barcodes” that capture clusters of sound energy, and clusters of clusters, across levels of a hierarchy. These barcodes allowed the researchers to directly compare temporal hierarchies in more than 200 recordings of different kinds of speech in six different languages, different kinds of popular and classical music, four different species of birds and whales singing their songs, and even thunderstorms.
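The release doesn't give the algorithm behind the barcodes, but the "clusters of clusters" description suggests something along these lines: mark short windows containing sound energy, then repeatedly merge neighbouring marks so each coarser row captures clusters at a longer timescale. Everything below is my own simplification of that idea, not the published method:

```python
import numpy as np

def barcode(signal, rate, window_s=0.01, levels=6):
    """Toy 'barcode': one boolean row per timescale, marking stretches
    of the recording that contain sound energy."""
    # Row 0: short windows whose RMS energy exceeds the median
    win = int(rate * window_s)
    n = len(signal) // win
    rms = np.sqrt((signal[:n * win].reshape(n, win) ** 2).mean(axis=1))
    rows = [rms > np.median(rms)]
    # Each further row halves the resolution: a mark means "this
    # stretch contained at least one cluster from the finer row"
    for _ in range(levels - 1):
        prev = rows[-1][: len(rows[-1]) // 2 * 2]
        rows.append(prev.reshape(-1, 2).any(axis=1))
    return rows

# Example with synthetic data: bursts of noise separated by silence
rate = 8000
t = np.arange(rate * 2)
sig = np.random.randn(len(t)) * (np.sin(2 * np.pi * t / rate) > 0)
for i, row in enumerate(barcode(sig, rate)):
    print(f"level {i}: " + "".join("|" if m else " " for m in row[:60]))
```

Comparing two recordings would then amount to comparing how marks cluster across these levels, which is presumably where the temporal-hierarchy comparisons in the paper come in.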

Kello and his colleagues have been using the barcode method for several years, having first developed it in studies of conversations. The study published today is the first time they have applied the method to music and animal vocalizations.

“The method allows us to ask questions about language and music and animal songs that we couldn’t ask without a way to see and compare patterns in all these recordings,” Kello said.

A common song

The researchers compared barcode-style visualizations of recorded sounds.
Credit: UC Merced

Kello, fellow UC Merced cognitive science professor Ramesh Balasubramaniam, graduate student Butovens Médé and collaborator professor Simone Dalla Bella also discovered that the haunting songs of huge humpback whales are remarkably similar to the beautiful songs of tiny nightingales and hermit thrushes in terms of their temporal hierarchies.

“Humpbacks, nightingales and hermit thrushes are solitary singers,” Kello said. “The barcodes show that their songs have similar layers of structure, but we don’t know what it means — yet.”

The idea for this project came from Kello’s sabbatical at the University of Montpellier in France, where he worked and discussed ideas with Dalla Bella. Balasubramaniam, who studies how music is perceived, is in the School of Social Sciences, Humanities and Arts with Kello, who studies speech and language processing. The project was a natural collaboration and is part of a growing research focus at UC Merced that was enabled by the National Science Foundation-funded CHASE summer school on Music and Language in 2014, and a Google Faculty Award to Kello.

Balasubramaniam is interested in continuing the work to better understand how brains distinguish between music and speech, while Kello said there are many different avenues to pursue.

For instance, the researchers found nearly identical temporal hierarchies for six different languages, which may suggest something universal about human speech. However, because this result was based on recordings of TED Talks — which have a common style and progression — Kello said it will be important to keep looking at other forms of speech and language.

One of his graduate students, Sara Schneider, is using the method to study the convergence of Spanish and English barcodes in bilingual conversations. Another graduate student, Adolfo Ramirez-Aristizabal, is working with Kello and Balasubramaniam to study whether the barcode method may shed light on how brains process speech and other complex sounds.

“Listening to music and speech, we can hear some of what we see in the barcodes, and the information may be useful for automatic classification of audio recordings. But that doesn’t mean that our brains process music and speech using these barcodes,” Kello said. “It’s intriguing, but we need to keep asking questions and go where the data lead us.”

Here’s a link to and a citation for the paper,

Hierarchical temporal structure in music, speech and animal vocalizations: jazz is like a conversation, humpbacks sing like hermit thrushes by Christopher T. Kello, Simone Dalla Bella, Butovens Médé, and Ramesh Balasubramaniam. Journal of the Royal Society Interface. DOI: 10.1098/rsif.2017.0231 Published 11 October 2017

This paper appears to be open access.*

*”This paper is behind a paywall” was changed to “… appears to be open access.” at 1700 hours on January 23, 2018.