Tag Archives: sonification

Sonifying the protein folding process

A sonification and animation of a state machine based on a simple lattice model used by Martin Gruebele to teach concepts of protein-folding dynamics. First posted January 25, 2022 on YouTube.

A February 17, 2022 news item on ScienceDaily announces the work featured in the animation above,

Musicians are helping scientists analyze data, teach protein folding and make new discoveries through sound.

A team of researchers at the University of Illinois Urbana-Champaign is using sonification — the use of sound to convey information — to depict biochemical processes and better understand how they happen.

Music professor and composer Stephen Andrew Taylor; chemistry professor and biophysicist Martin Gruebele; and Illinois music and computer science alumna, composer and software designer Carla Scaletti formed the Biophysics Sonification Group, which has been meeting weekly on Zoom since the beginning of the pandemic. The group has experimented with using sonification in Gruebele’s research into the physical mechanisms of protein folding, and its work recently allowed Gruebele to make a new discovery about the ways a protein can fold.

A February 17, 2022 University of Illinois at Urbana-Champaign news release (also on EurekAlert), which originated the news item, describes how the group sonifies and animates the protein folding process (Note: Links have been removed),

Taylor’s musical compositions have long been influenced by science, and recent works represent scientific data and biological processes. Gruebele also is a musician who built his own pipe organ that he plays and uses to compose music. The idea of working together on sonification struck a chord with them, and they’ve been collaborating for several years. Through her company, Symbolic Sound Corp., Scaletti develops a digital audio software and hardware sound design system called Kyma that is used by many musicians and researchers, including Taylor.

Scaletti created an animated visualization paired with sound that illustrated a simplified protein-folding process, and Gruebele and Taylor used it to introduce key concepts of the process to students and gauge whether it helped with their understanding. They found that sonification complemented and reinforced the visualizations and that, even for experts, it helped increase intuition for how proteins fold and misfold over time. The Biophysics Sonification Group – which also includes chemistry professor Taras Pogorelov, former chemistry graduate student (now alumna) Meredith Rickard, composer and pipe organist Franz Danksagmüller of the Lübeck Academy of Music in Germany, and Illinois electrical and computer engineering alumnus Kurt Hebel of Symbolic Sound – described using sonification in teaching in the Journal of Chemical Education.

Gruebele and his research team use supercomputers to run simulations of proteins folding into a specific structure, a process that relies on a complex pattern of many interactions. The simulation reveals the multiple pathways the proteins take as they fold, and also shows when they misfold or get stuck in the wrong shape – something thought to be related to a number of diseases such as Alzheimer’s and Parkinson’s.

The researchers use the simulation data to gain insight into the process. Nearly all data analysis is done visually, Gruebele said, but massive amounts of data generated by the computer simulations – representing hundreds of thousands of variables and millions of moments in time – can be very difficult to visualize.

“In digital audio, everything is a stream of numbers, so actually it’s quite natural to take a stream of numbers and listen to it as if it’s a digital recording,” Scaletti said. “You can hear things that you wouldn’t see if you looked at a list of numbers and you also wouldn’t see if you looked at an animation. There’s so much going on that there could be something that’s hidden, but you could bring it out with sound.”

For example, when the protein folds, it is surrounded by water molecules that are critical to the process. Gruebele said he wants to know when a water molecule touches and solvates a protein, but “there are 50,000 water molecules moving around, and only one or two are doing a critical thing. It’s impossible to see.” However, if a splashy sound occurred every time a water molecule touched a specific amino acid, that would be easy to hear.
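Scaletti’s ‘stream of numbers’ idea and Gruebele’s splash example are concrete enough to sketch in code. Here’s a minimal Python/NumPy illustration of my own (not the group’s Kyma implementation): it audifies a synthetic distance trace and overlays a noise-burst ‘splash’ whenever a made-up water molecule first crosses an assumed 3.5-angstrom contact cutoff, then writes the result to a WAV file,

import wave

import numpy as np

RATE = 44100          # audio sample rate (Hz)
FRAME_DUR = 0.005     # seconds of audio per simulation frame (a pacing choice)
CUTOFF = 3.5          # assumed contact distance in angstroms

rng = np.random.default_rng(42)

# Stand-in for simulation output: distances (in angstroms) from one
# amino acid to 50 water molecules over 2,000 frames. A real analysis
# would read these from trajectory data instead.
n_frames, n_waters = 2000, 50
dist = 4.0 + np.cumsum(rng.normal(0, 0.05, (n_frames, n_waters)), axis=0)

# 1) Audification: resample one distance trace to audio rate and treat
#    it as a digital recording, per Scaletti's description.
trace = np.interp(
    np.linspace(0, 1, int(n_frames * FRAME_DUR * RATE)),
    np.linspace(0, 1, n_frames),
    dist[:, 0],
)
trace = trace - trace.mean()
audio = 0.2 * trace / (np.abs(trace).max() + 1e-12)

# 2) Event sonification: overlay a short noise burst (a "splash")
#    whenever a water molecule newly crosses below the cutoff.
contact = dist < CUTOFF
new_contact = contact[1:] & ~contact[:-1]
for frame, _water in zip(*np.nonzero(new_contact)):
    start = int(frame * FRAME_DUR * RATE)
    burst = rng.normal(0, 0.3, 2000) * np.exp(-np.linspace(0, 8, 2000))
    end = min(start + burst.size, audio.size)
    audio[start:end] += burst[: end - start]

# Write the result as a 16-bit mono WAV file.
samples = (audio / np.abs(audio).max() * 32767).astype(np.int16)
with wave.open("solvation.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(RATE)
    f.writeframes(samples.tobytes())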

Taylor and Scaletti use various audio-mapping techniques to link aspects of proteins to sound parameters such as pitch, timbre, loudness and pan position. For example, Taylor’s work uses different pitches and instruments to represent each unique amino acid, as well as their hydrophobic or hydrophilic qualities.
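To make that mapping idea concrete, here’s a small illustrative sketch in Python/NumPy; the pitch assignments, the hydrophobic grouping, and the pan scheme below are placeholder choices of mine, not Taylor’s actual mappings,

import numpy as np

RATE = 44100
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard residues
HYDROPHOBIC = set("AFILMVWY")           # one common rough classification

# Give each residue its own pitch on a 20-step division of the octave
# above A3 (220 Hz); the assignment order here is arbitrary.
PITCH = {aa: 220.0 * 2.0 ** (i / 20) for i, aa in enumerate(AMINO_ACIDS)}

def sonify(sequence, note_dur=0.3):
    """Map a residue sequence to stereo tones: pitch encodes identity,
    pan encodes hydrophobic (left) vs. hydrophilic (right)."""
    n = int(note_dur * RATE)
    t = np.arange(n) / RATE
    left, right = [], []
    for aa in sequence:
        tone = np.sin(2 * np.pi * PITCH[aa] * t) * np.hanning(n)
        pan = 0.9 if aa in HYDROPHOBIC else 0.1   # left-channel share
        left.append(pan * tone)
        right.append((1.0 - pan) * tone)
    return np.vstack([np.concatenate(left), np.concatenate(right)])

stereo = sonify("MKWVTFISLLF")   # any peptide string made of the 20 letters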

“I’ve been trying to draw on our instinctive responses to sound as much as possible,” Taylor said. “Beethoven said, ‘The deeper the stream, the deeper the tone.’ We expect an elephant to make a low sound because it’s big, and we expect a sparrow to make a high sound because it’s small. Certain kinds of mappings are built into us. As much as possible, we can take advantage of those and that helps to communicate more effectively.”

The highly developed instincts of musicians help in creating the best tool to use sound to convey information, Taylor said.

“It’s a new way of showing how music and sound can help us understand the world. Musicians have an important role to play,” he said. “It’s helped me become a better musician, in thinking about sound in different ways and thinking how sound can link to the world in different ways, even the world of the very small.”

Here’s a link to and a citation for the paper,

Sonification-Enhanced Lattice Model Animations for Teaching the Protein Folding Reaction by Carla Scaletti, Meredith M. Rickard, Kurt J. Hebel, Taras V. Pogorelov, Stephen A. Taylor, and Martin Gruebele. J. Chem. Educ. 2022, XXXX, XXX, XXX-XXX. DOI: https://doi.org/10.1021/acs.jchemed.1c00857. Publication Date: February 16, 2022. © 2022 American Chemical Society and Division of Chemical Education, Inc.

This paper is behind a paywall.

For more about sonification and proteins, there’s my March 31, 2022 posting, Classical music makes protein songs easier listening.

A 3D spider web, a VR (virtual reality) setup, and sonification (music)

Markus Buehler and his musical spider webs are making news again.

Caption: Cross-sectional images (shown in different colors) of a spider web were combined into this 3D image and translated into music. Credit: Isabelle Su and Markus Buehler

The image (so pretty) you see above comes from a Markus Buehler presentation made at the American Chemical Society (ACS) meeting, ACS Spring 2021, held online April 5-30, 2021. The image was also shown during a press conference, which the ACS has made available for public viewing. More about that later in this posting.

The ACS issued an April 12, 2021 news release (also on EurekAlert), which provides details about Buehler’s latest work on spider webs and music,

Spiders are master builders, expertly weaving strands of silk into intricate 3D webs that serve as the spider’s home and hunting ground. If humans could enter the spider’s world, they could learn about web construction, arachnid behavior and more. Today, scientists report that they have translated the structure of a web into music, which could have applications ranging from better 3D printers to cross-species communication and otherworldly musical compositions.

The researchers will present their results today at the spring meeting of the American Chemical Society (ACS). ACS Spring 2021 is being held online April 5-30 [2021]. Live sessions will be hosted April 5-16, and on-demand and networking content will continue through April 30 [2021]. The meeting features nearly 9,000 presentations on a wide range of science topics.

“The spider lives in an environment of vibrating strings,” says Markus Buehler, Ph.D., the project’s principal investigator, who is presenting the work. “They don’t see very well, so they sense their world through vibrations, which have different frequencies.” Such vibrations occur, for example, when the spider stretches a silk strand during construction, or when the wind or a trapped fly moves the web.

Buehler, who has long been interested in music, wondered if he could extract rhythms and melodies of non-human origin from natural materials, such as spider webs. “Webs could be a new source for musical inspiration that is very different from the usual human experience,” he says. In addition, by experiencing a web through hearing as well as vision, Buehler and colleagues at the Massachusetts Institute of Technology (MIT), together with collaborator Tomás Saraceno at Studio Tomás Saraceno, hoped to gain new insights into the 3D architecture and construction of webs.

With these goals in mind, the researchers scanned a natural spider web with a laser to capture 2D cross-sections and then used computer algorithms to reconstruct the web’s 3D network. The team assigned different frequencies of sound to strands of the web, creating “notes” that they combined in patterns based on the web’s 3D structure to generate melodies. The researchers then created a harp-like instrument and played the spider web music in several live performances around the world.
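The strand-to-note idea can be sketched very simply: an ideal string’s frequency varies inversely with its length, so measured strand lengths can set pitches. In the Python/NumPy sketch below, the lengths and scaling are stand-in values of mine, not data from the team’s laser scans,

import numpy as np

RATE = 44100
rng = np.random.default_rng(7)

# Stand-ins for strand lengths measured from a 3D web reconstruction.
lengths_cm = rng.uniform(0.5, 5.0, size=40)

# For an ideal vibrating string, frequency scales as 1/length; scale
# the constant so the shortest strand sounds near 880 Hz.
freqs = (880.0 * lengths_cm.min()) / lengths_cm

def pluck(freq, dur=0.25):
    """A decaying sine as a crude stand-in for a plucked strand."""
    t = np.arange(int(dur * RATE)) / RATE
    return np.sin(2 * np.pi * freq * t) * np.exp(-6.0 * t)

# One simple ordering choice: walk the strands from longest (lowest
# note) to shortest (highest), standing in for a path through the web.
melody = np.concatenate([pluck(f) for f in np.sort(freqs)])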

The team also made a virtual reality setup that allowed people to visually and audibly “enter” the web. “The virtual reality environment is really intriguing because your ears are going to pick up structural features that you might see but not immediately recognize,” Buehler says. “By hearing it and seeing it at the same time, you can really start to understand the environment the spider lives in.”

To gain insights into how spiders build webs, the researchers scanned a web during the construction process, transforming each stage into music with different sounds. “The sounds our harp-like instrument makes change during the process, reflecting the way the spider builds the web,” Buehler says. “So, we can explore the temporal sequence of how the web is being constructed in audible form.” This step-by-step knowledge of how a spider builds a web could help in devising “spider-mimicking” 3D printers that build complex microelectronics. “The spider’s way of ‘printing’ the web is remarkable because no support material is used, as is often needed in current 3D printing methods,” he says.

In other experiments, the researchers explored how the sound of a web changes as it’s exposed to different mechanical forces, such as stretching. “In the virtual reality environment, we can begin to pull the web apart, and when we do that, the tension of the strings and the sound they produce change. At some point, the strands break, and they make a snapping sound,” Buehler says.

The team is also interested in learning how to communicate with spiders in their own language. They recorded web vibrations produced when spiders performed different activities, such as building a web, communicating with other spiders or sending courtship signals. Although the frequencies sounded similar to the human ear, a machine learning algorithm correctly classified the sounds into the different activities. “Now we’re trying to generate synthetic signals to basically speak the language of the spider,” Buehler says. “If we expose them to certain patterns of rhythms or vibrations, can we affect what they do, and can we begin to communicate with them? Those are really exciting ideas.”

You can go here for the April 12, 2021 ‘Making music from spider webs’ ACS press conference; it runs about 30 mins., and you will hear some ‘spider music’ played.

Getting back to the image and spider webs in general, we are most familiar with orb webs (in the part of Canada where I’m from, if nowhere else), which look like spirals and are 2D. There are several other types of webs, some of which are 3D: tangle webs (also known as cobwebs), funnel webs, and more. See this March 18, 2020 article, “9 Types of Spider Webs: Identification + Pictures & Spiders” by Zach David on Beyond the Treat, for more about spiders and their webs. If you have the time, I recommend reading it.

I’ve been following Buehler’s spider web/music work for close to ten years now; the most recent previous posting is an October 23, 2019 posting, where you’ll find a link to an application that makes music from proteins (spider webs are made up of proteins; scroll down about 30% of the way; the link is in the 2nd-to-last line of the quoted text about the embedded video).

Here is a video (2 mins. 17 secs.) of a spider web music performance that Buehler placed on YouTube,

Feb 3, 2021

Markus J. Buehler

Spider’s Canvas/Arachnodrone show excerpt at Palais de Tokyo, Paris, in November 2018. Video by MIT CAST. More videos can be found on www.arachnodrone.com. The performance was commissioned by Studio Tomás Saraceno (STS), in the context of Saraceno’s carte blanche exhibition, ON AIR. Spider’s Canvas/Arachnodrone was performed by Isabelle Su and Ian Hattwick on the spider web instrument, Evan Ziporyn on the EWI (Electronic Wind Instrument), and Christine Southworth on the guitar and EBow (Electronic Bow).

You can find more about the spider web music and Buehler’s collaborators on http://www.arachnodrone.com/,

Spider’s Canvas / Arachnodrone is inspired by the multifaceted work of artist Tomas Saraceno, specifically his work using multiple species of spiders to make sculptural webs. Different species make very different types of webs, ranging not just in size but in design and functionality. Tomas’ own web sculptures are in essence collaborations with the spiders themselves, placing them sequentially over time in the same space, so that the complex, 3-dimensional sculptural web that results is in fact built by several spiders, working together.

Meanwhile, back among the humans at MIT, Isabelle Su, a Course 1 doctoral student in civil engineering, has been focusing on analyzing the structure of single-species spider webs, specifically the ‘tent webs’ of Cyrtophora citricola, a tropical spider of particular interest to her, Tomas, and Professor Markus Buehler. Tomas gave the department a Cyrtophora spider, the department gave the spider a space (a small terrarium without glass), and the spider in turn built a beautiful and complex web. Isabelle then scanned it in 3D and made a virtual model.

At the suggestion of Evan Ziporyn and Eran Egozy, she then ported the model into Unity, a VR/game-making program, where a ‘player’ can move through it in numerous ways. Evan & Christine Southworth then worked with her on ‘sonifying’ the web and turning it into an interactive virtual instrument, effectively turning the web into a 1700-string resonating instrument, based on the proportional length of each individual piece of silk and their proximity to one another. As we move through the web (currently just with a computer trackpad, but eventually in a VR environment), we create a ‘sonic biome’: complex ‘just intonation’ chords that come in and out of earshot according to which of her strings we are closest to. That part was all done in MAX/MSP, a very flexible high-level audio programming environment, which was connected with the virtual environment in Unity. Our new colleague Ian Hattwick joined the team focusing on sound design and spatialization, building an interface that allowed him to sonically ‘sculpt’ the sculpture in real time, changing amplitude, resonance, and other factors.

During this performance at Palais de Tokyo, Isabelle toured the web – that’s what the viewer sees – while Ian adjusted sounds, so in essence they were together “playing the web.” Isabelle provides a space (the virtual web) and a specific location within it (by driving through), which is what the viewer sees, from multiple angles, on the 3 scrims. The location has certain acoustic potentialities, and Ian occupies them sonically, just as a real human performer does in a real acoustic space. A rough analogy might be something like wandering through a gothic cathedral or a resonant cave, using your voice or an instrument at different volumes and on different pitches to find sonorous resonances, echoes, etc. Meanwhile, Evan and Christine are improvising with the web instrument, building on Ian’s sound, with Evan on EWI (Electronic Wind Instrument) and Christine on electric guitar with EBow.

For the visuals, Southworth wanted to create the illusion that the performers were actually inside the web. We built a structure covered in sharkstooth scrim, with 3 projectors projecting in and through from 3 sides. Southworth created images using her photographs of local Lexington, MA spider webs mixed with slides of the scan of the web at MIT, and then mixed those images with the projection of the game, creating an interactive replica of Saraceno’s multi-species webs.
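For the technically curious, the ‘1700-string resonating instrument’ described above suggests a simple sketch: give each strand a just-intonation pitch and fade it in and out with the listener’s distance. Everything in the Python/NumPy sketch below (strand positions, ratios, falloff) is invented for illustration; the real instrument was built in MAX/MSP and Unity, not Python,

import numpy as np

RATE = 44100
rng = np.random.default_rng(3)

# A handful of just-intonation ratios above a 110 Hz drone.
JI_RATIOS = np.array([1, 9/8, 5/4, 4/3, 3/2, 5/3, 15/8, 2])
BASE = 110.0

# Stand-ins for strand positions (meters) in the scanned web, each
# strand assigned one just-intonation pitch.
n_strands = 200
pos = rng.uniform(-1.0, 1.0, size=(n_strands, 3))
pitch = BASE * rng.choice(JI_RATIOS, size=n_strands)

def chord_at(listener, dur=1.0):
    """Mix every strand, faded by its distance to the listener, so
    chords come in and out of earshot as the listener moves."""
    t = np.arange(int(dur * RATE)) / RATE
    d = np.linalg.norm(pos - listener, axis=1)
    gain = 1.0 / (1.0 + 25.0 * d**2)     # arbitrary falloff choice
    out = sum(g * np.sin(2 * np.pi * f * t) for g, f in zip(gain, pitch))
    return out / np.abs(out).max()

# Moving the listener even slightly changes which strands dominate.
snapshot = chord_at(np.array([0.2, -0.1, 0.5]))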

If you listen to the press conference, you will hear Buehler talk about practical applications for this work in materials science.

Sonifying proteins to make music and brand new proteins

Markus Buehler at the Massachusetts Institute of Technology (MIT) has been working with music and science for a number of years. My December 9, 2011 posting, Music, math, and spiderwebs, was the first one here featuring his work. My November 28, 2012 posting, Producing stronger silk musically, was a followup to Buehler’s previous work.

A June 28, 2019 news item on Azonano provides a recent update,

Composers string notes of different pitch and duration together to create music. Similarly, cells join amino acids with different characteristics together to make proteins.

Now, researchers have bridged these two seemingly disparate processes by translating protein sequences into musical compositions and then using artificial intelligence to convert the sounds into brand-new proteins. …

Caption: Researchers at MIT have developed a system for converting the molecular structures of proteins, the basic building blocks of all living beings, into audible sound that resembles musical passages. Then, reversing the process, they can introduce some variations into the music and convert it back into new proteins never before seen in nature. Credit: Zhao Qin and Francisco Martin-Martinez

A June 26, 2019 American Chemical Society (ACS) news release, which originated the news item, provides more detail and a video,

To make proteins, cellular structures called ribosomes add one of 20 different amino acids to a growing chain in combinations specified by the genetic blueprint. The properties of the amino acids and the complex shapes into which the resulting proteins fold determine how the molecule will work in the body. To better understand a protein’s architecture, and possibly design new ones with desired features, Markus Buehler and colleagues wanted to find a way to translate a protein’s amino acid sequence into music.

The researchers transposed the unique natural vibrational frequencies of each amino acid into sound frequencies that humans can hear. In this way, they generated a scale consisting of 20 unique tones. Unlike musical notes, however, each amino acid tone consisted of the overlay of many different frequencies –– similar to a chord. Buehler and colleagues then translated several proteins into audio compositions, with the duration of each tone specified by the different 3D structures that make up the molecule. Finally, the researchers used artificial intelligence to recognize specific musical patterns that corresponded to certain protein architectures. The computer then generated scores and translated them into new-to-nature proteins. In addition to being a tool for protein design and for investigating disease mutations, the method could be helpful for explaining protein structure to broad audiences, the researchers say. They even developed an Android app [Amino Acid Synthesizer] to allow people to create their own bio-based musical compositions.
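The core trick — dropping terahertz molecular vibrations down by octaves until they are audible, then overlaying a residue’s many modes into a chord — is easy to sketch. In the Python/NumPy sketch below, the vibrational frequencies are placeholders of mine (the published work computes the real ones with quantum chemistry), and note durations are kept fixed rather than derived from 3D structure,

import numpy as np

RATE = 44100

# Placeholder vibrational frequencies (in THz) for a few residues; each
# residue has many modes, making its "tone" a chord-like overlay.
VIBRATIONS_THZ = {
    "G": [1.1, 2.3, 4.7],
    "A": [0.9, 2.1, 5.2],
    "W": [0.6, 1.4, 3.9, 6.5],
}

def transpose_to_audible(f_thz, hi=880.0):
    """Halve a frequency by octaves (preserving pitch class) until it
    falls at or below `hi`, landing it in the audible range."""
    f = f_thz * 1e12          # THz -> Hz
    while f > hi:
        f /= 2.0
    return f

def residue_tone(aa, dur=0.5):
    """Overlay all of a residue's transposed modes into one chord."""
    t = np.arange(int(dur * RATE)) / RATE
    tone = sum(np.sin(2 * np.pi * transpose_to_audible(f) * t)
               for f in VIBRATIONS_THZ[aa])
    return tone / np.abs(tone).max() * np.hanning(t.size)

# A toy "protein" made only of the residues defined above.
melody = np.concatenate([residue_tone(aa) for aa in "GAWAG"])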

Here’s the ACS video,

A June 26, 2019 MIT news release (also on EurekAlert) provides some specifics and includes two embedded audio files,

Want to create a brand new type of protein that might have useful properties? No problem. Just hum a few bars.

In a surprising marriage of science and art, researchers at MIT have developed a system for converting the molecular structures of proteins, the basic building blocks of all living beings, into audible sound that resembles musical passages. Then, reversing the process, they can introduce some variations into the music and convert it back into new proteins never before seen in nature.

Although it’s not quite as simple as humming a new protein into existence, the new system comes close. It provides a systematic way of translating a protein’s sequence of amino acids into a musical sequence, using the physical properties of the molecules to determine the sounds. Although the sounds are transposed in order to bring them within the audible range for humans, the tones and their relationships are based on the actual vibrational frequencies of each amino acid molecule itself, computed using theories from quantum chemistry.

The system was developed by Markus Buehler, the McAfee Professor of Engineering and head of the Department of Civil and Environmental Engineering at MIT, along with postdoc Chi Hua Yu and two others. As described in the journal ACS Nano, the system translates the 20 types of amino acids, the building blocks that join together in chains to form all proteins, into a 20-tone scale. Any protein’s long sequence of amino acids then becomes a sequence of notes.

While such a scale sounds unfamiliar to people accustomed to Western musical traditions, listeners can readily recognize the relationships and differences after familiarizing themselves with the sounds. Buehler says that after listening to the resulting melodies, he is now able to distinguish certain amino acid sequences that correspond to proteins with specific structural functions. “That’s a beta sheet,” he might say, or “that’s an alpha helix.”

Learning the language of proteins

The whole concept, Buehler explains, is to get a better handle on understanding proteins and their vast array of variations. Proteins make up the structural material of skin, bone, and muscle, but are also enzymes, signaling chemicals, molecular switches, and a host of other functional materials that make up the machinery of all living things. But their structures, including the way they fold themselves into the shapes that often determine their functions, are exceedingly complicated. “They have their own language, and we don’t know how it works,” he says. “We don’t know what makes a silk protein a silk protein or what patterns reflect the functions found in an enzyme. We don’t know the code.”

By translating that language into a different form that humans are particularly well-attuned to, and that allows different aspects of the information to be encoded in different dimensions — pitch, volume, and duration — Buehler and his team hope to glean new insights into the relationships and differences between different families of proteins and their variations, and use this as a way of exploring the many possible tweaks and modifications of their structure and function. As with music, the structure of proteins is hierarchical, with different levels of structure at different scales of length or time.

The team then used an artificial intelligence system to study the catalog of melodies produced by a wide variety of different proteins. They had the AI system introduce slight changes in the musical sequence or create completely new sequences, and then translated the sounds back into proteins that correspond to the modified or newly designed versions. With this process they were able to create variations of existing proteins — for example of one found in spider silk, one of nature’s strongest materials — thus making new proteins unlike any produced by evolution.

Although the researchers themselves may not know the underlying rules, “the AI has learned the language of how proteins are designed,” and it can encode it to create variations of existing versions, or completely new protein designs, Buehler says. Given that there are “trillions and trillions” of potential combinations, he says, when it comes to creating new proteins “you wouldn’t be able to do it from scratch, but that’s what the AI can do.”

“Composing” new proteins

By using such a system, he says, training the AI system with a set of data for a particular class of proteins might take a few days, but it can then produce a design for a new variant within microseconds. “No other method comes close,” he says. “The shortcoming is the model doesn’t tell us what’s really going on inside. We just know it works.”

This way of encoding structure into music does reflect a deeper reality. “When you look at a molecule in a textbook, it’s static,” Buehler says. “But it’s not static at all. It’s moving and vibrating. Every bit of matter is a set of vibrations. And we can use this concept as a way of describing matter.”

The method does not yet allow for any kind of directed modifications — any changes in properties such as mechanical strength, elasticity, or chemical reactivity will be essentially random. “You still need to do the experiment,” he says. When a new protein variant is produced, “there’s no way to predict what it will do.”

The team also created musical compositions developed from the sounds of amino acids, which define this new 20-tone musical scale. The art pieces they constructed consist entirely of the sounds generated from amino acids. “There are no synthetic or natural instruments used, showing how this new source of sounds can be utilized as a creative platform,” Buehler says. Musical motifs derived from both naturally existing proteins and AI-generated proteins are used throughout the examples, and all the sounds, including some that resemble bass or snare drums, are also generated from the sounds of amino acids.

The researchers have created a free Android smartphone app, called Amino Acid Synthesizer, to play the sounds of amino acids and record protein sequences as musical compositions.

Here’s a link to and a citation for the paper,

A Self-Consistent Sonification Method to Translate Amino Acid Sequences into Musical Compositions and Application in Protein Design Using Artificial Intelligence by Chi-Hua Yu, Zhao Qin, Francisco J. Martin-Martinez, and Markus J. Buehler. ACS Nano 2019, XXXX, XXX, XXX-XXX. DOI: https://doi.org/10.1021/acsnano.9b02180. Publication Date: June 26, 2019. Copyright © 2019 American Chemical Society.

This paper is behind a paywall.

ETA October 23, 2019 1000 hours: Oops! I almost forgot the link to the Amino Acid Synthesizer.

I hear the proteins singing

Points to anyone who recognized the paraphrasing of the title of the well-loved Canadian movie, “I’ve Heard the Mermaids Singing.” In this case, it’s all about protein folding and data sonification (from an Oct. 20, 2016 news item on phys.org),

Transforming data about the structure of proteins into melodies gives scientists a completely new way of analyzing the molecules that could reveal new insights into how they work – by listening to them. A new study published in the journal Heliyon shows how musical sounds can help scientists analyze data using their ears instead of their eyes.

The researchers, from the University of Tampere in Finland, Eastern Washington University in the US and the Francis Crick Institute in the UK, believe their technique could help scientists identify anomalies in proteins more easily.

An Oct. 20, 2016 Elsevier Publishing press release on EurekAlert, which originated the news item, expands on the theme,

“We are confident that people will eventually listen to data and draw important information from the experiences,” commented Dr. Jonathan Middleton, a composer and music scholar who is based at Eastern Washington University and in residence at the University of Tampere. “The ears might detect more than the eyes, and if the ears are doing some of the work, then the eyes will be free to look at other things.”

Proteins are molecules found in living things that have many different functions. Scientists usually study them visually and using data; with modern microscopy it is possible to directly see the structure of some proteins.

Using a technique called sonification, the researchers can now transform data about proteins into musical sounds, or melodies. They wanted to use this approach to ask three related questions: what can protein data sound like? Are there analytical benefits? And can we hear particular elements or anomalies in the data?

They found that a large proportion of people can recognize links between the melodies and more traditional visuals like models, graphs and tables; matching the melodies to these visuals seems to be easier than they expected. The melodies are also pleasant to listen to, encouraging scientists to listen to them more than once and therefore repeatedly analyze the proteins.

The sonifications are created using a combination of Dr. Middleton’s composing skills and algorithms, so that others can use a similar process with their own proteins. The multidisciplinary approach – combining bioinformatics and music informatics – provides a completely new perspective on a complex problem in biology.

“Protein fold assignment is a notoriously tricky area of research in molecular biology,” said Dr. Robert Bywater from the Francis Crick Institute. “One not only needs to identify the fold type but to look for clues as to its many functions. It is not a simple matter to unravel these overlapping messages. Music is seen as an aid towards achieving this unraveling.”

The researchers say their molecular melodies can be used almost immediately in teaching protein science, and after some practice, scientists will be able to use them to discriminate between different protein structures and spot irregularities like mutations.

Proteins are the first stop, but our knowledge of other molecules could also benefit from sonification; one day we may be able to listen to our genomes, and perhaps use this to understand the role of junk DNA [emphasis mine].

About 97% of our DNA (deoxyribonucleic acid) has been known for some decades as ‘junk DNA’. In roughly 2012, that notion was challenged, as Stephen S. Hall wrote in an Oct. 1, 2012 article (Hidden Treasures in Junk DNA; What was once known as junk DNA turns out to hold hidden treasures, says computational biologist Ewan Birney) for Scientific American.

Getting back to 2016, here’s a link to and a citation for the ‘protein singing’ paper,

Melody discrimination and protein fold classification by Robert P. Bywater and Jonathan N. Middleton. Heliyon, 20 Oct 2016, Volume 2, Issue 10. DOI: 10.1016/j.heliyon.2016.e0017

This paper is open access.

Here’s what the proteins sound like,

Supplementary Audio 3 file for Supplementary Figure 2: 1r75 OHEL sonification full score. [downloaded from the previously cited Heliyon paper]

Joanna Klein has written an Oct. 21, 2016 article for the New York Times providing a slightly different take on this research (Note: Links have been removed),

“It’s used for the concert hall. It’s used for sports. It’s used for worship. Why can’t we use it for our data?” said Jonathan Middleton, the composer at Eastern Washington University and the University of Tampere in Finland who worked with Dr. Bywater.

Proteins have been around for billions of years, but humans still haven’t come up with a good way to visualize them. Right now scientists can shoot a laser at a crystallized protein (which can distort its shape), measure the patterns it spits out and simulate what that protein looks like. These depictions are difficult to sift through and hard to remember.

“There’s no simple equation like E=mc²,” said Dr. Bywater. “You have to do a lot of spade work to predict a protein structure.”

Dr. Bywater had been interested in assigning sounds to proteins since the 1990s. After hearing a song Dr. Middleton had composed called “Redwood Symphony,” which opens with sounds derived from the tree’s DNA, he asked for his help.

Using a process called sonification (the same process used to assign different ringtones to texts, emails or calls on your cellphone), the team took three proteins and turned their folding shapes — a coil, a turn and a strand — into musical melodies. Each shape was represented by a set of numbers, and those numbers were converted into a musical code. A combination of musical sounds represented each shape, resulting in a song of simple patterns that changed with the folds of the protein. Later, they played those songs to a group of 38 people together with visuals of the proteins, and asked them to identify similarities and differences between them. The two were surprised that people didn’t really need the visuals to detect changes in the proteins.
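Klein’s description — shapes represented by numbers, numbers converted into a musical code — can be mimicked in a few lines. In the Python/NumPy sketch below, the motifs are my own invented stand-ins, not the codes Bywater and Middleton actually used,

import numpy as np

RATE = 44100

# An invented "musical code": each secondary-structure element gets a
# short pitch pattern (MIDI note numbers). Illustrative only.
MOTIFS = {
    "H": [60, 64, 67],   # helix: rising arpeggio
    "E": [62, 62, 65],   # strand: repeated-note figure
    "C": [59, 57],       # coil/turn: falling step
}

def midi_to_hz(note):
    return 440.0 * 2.0 ** ((note - 69) / 12)

def tone(freq, dur=0.2):
    t = np.arange(int(dur * RATE)) / RATE
    return np.sin(2 * np.pi * freq * t) * np.hanning(t.size)

def sonify_fold(ss_string):
    """Turn a secondary-structure string (e.g. 'HHHEEECC') into audio,
    so the melody's patterns change with the protein's folds."""
    notes = [n for ss in ss_string for n in MOTIFS[ss]]
    return np.concatenate([tone(midi_to_hz(n)) for n in notes])

audio = sonify_fold("HHHHCCEEEECC")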

Plus, I have more about data sonification in a Feb. 7, 2014 posting regarding a duet based on data from Voyager 1 & 2 spacecraft.

Finally, I hope my next Steep project will include sonification of data on gold nanoparticles. I will keep you posted on any developments.