Representing data through music is how a Jan. 31, 2014 item in the BBC News Magazine describes a Voyager 1 & 2 spacecraft duet, a data sonification project discussed* in a BBC Radio 4 programme,
Musician and physicist Domenico Vicinanza has described to BBC Radio 4’s Today programme the process of representing information through music, known as “sonification”. [includes a sound clip and interview with Vicinanza]
A Jan. 22, 2014 GÉANT news release describes the project in more detail,
GÉANT, the pan-European data network serving 50 million research and education users at speeds of up to 500Gbps, recently demonstrated its power by sonifying 36 years’ worth of NASA Voyager spacecraft data and converting it into a musical duet.
The project is the work of Domenico Vicinanza, Network Services Product Manager at GÉANT. A trained musician with a PhD in Physics, he also takes on the role of Arts and Humanities Manager, exploring new ways of representing data and discovery through the use of high-speed networks.
“I wanted to compose a musical piece celebrating the Voyager 1 and 2 *together*, so used the same measurements (proton counts from the cosmic ray detector over the last 37 years) from both spacecrafts, at the exactly same point of time, but at several billions of Kms of distance one from the other.
I used different groups of instruments and different sound textures to represent the two spacecrafts, synchronising the measurements taken at the same time.”
The result is an up-tempo string and piano orchestral piece.
You can hear the duet, which has been made available by the folks at GÉANT,
The news release goes on to provide technical details about the composition,
To compose the spacecraft duet, 320,000 measurements were first selected from each spacecraft, at one-hour intervals. That data was then converted into two very long melodies, each comprising 320,000 notes, using different sampling frequencies from a few kHz to 44.1 kHz.
The result of the conversion into waveform, using such a big dataset, was a wide collection of audible sounds, lasting from just a few seconds (slightly more than 7 seconds at 44.1 kHz) to a few hours (more than 5 hours using 1024 Hz as the sampling frequency). A certain number of data points, from a few thousand to 44,100, were each “converted” into 1 second of sound.
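The core of this conversion, "audification", treats each measurement as one audio sample, so the chosen sampling frequency directly sets how many data points fit into one second of sound. A minimal sketch of the idea (my own illustration, not the project's actual code) in Python:

```python
import numpy as np

def audify(data, sample_rate=44100):
    """Audify a data series: treat each measurement as one audio
    sample, so `sample_rate` data points become one second of sound."""
    x = np.asarray(data, dtype=float)
    # Centre and normalise to the [-1, 1] range expected for audio samples.
    x = x - x.mean()
    peak = np.abs(x).max()
    if peak > 0:
        x = x / peak
    duration = len(x) / sample_rate  # seconds of audio produced
    return x, duration

# 320,000 points at 44.1 kHz -> roughly 7.26 seconds,
# matching the "slightly more than 7 seconds" in the release.
samples, secs = audify(np.random.rand(320_000))
print(round(secs, 2))  # -> 7.26
```

Lowering the sample rate stretches the same 320,000 points over a longer duration, which is why the release quotes everything from seconds to hours for one dataset.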
Using the grid computing facilities at EGI, GÉANT was able to create the duet live at the NASA booth at Supercomputing 2013, using its superfast network to transfer data to/from NASA.
I think this detail from the news release gives one a different perspective on the accomplishment,
Launched in 1977, both Voyager 1 and Voyager 2 are now decommissioned but still recording and sending live data to Earth. They continue to traverse different parts of the universe, billions of kilometres apart. Voyager 1 left our solar system last year.
The research is more than an amusing way to pass the time (from the news release),
While this project was created as a fun, accessible way to demonstrate the benefit of research and education networks to society, data sonification – representing data by means of sound signals – is increasingly used to accelerate scientific discovery; from epilepsy research to deep space discovery.
I was curious to learn more about how data represented by sound signals is being used to accelerate scientific discovery and sent that question and another to Dr. Vicinanza via Tamsin Henderson of DANTE and received these answers,
(1) How does “representing data by means of sound signals” increasingly “accelerate scientific discovery; from epilepsy research to deep space discovery”? In a practical sense, how does one do this research? For example, do you sit down and listen to a file and intuit different relationships for the data?
Vision and visual representation are intrinsically limited to three dimensions. We all know how amazing 3D cinema is, but in terms of representing complex information, that is as far as it gets. There is no 4D or 5D. We live in three dimensions.
Sound, on the other hand, has no limitation of this kind. We can keep overlapping sound layers virtually without limit and still retain the ability to recognise and understand them. Think of an orchestra or a pop band: even when the musicians are all playing together, we can still follow each instrument's line (bass, drums, lead guitar, voice, …). Sound is therefore particularly precious when dealing with multi-dimensional data, since audification techniques can render many variables at once.
In technical terms, auditory perception of complex, structured information could have several advantages in temporal, amplitude, and frequency resolution when compared to visual representations and often opens up possibilities as an alternative or complement to visualisation techniques. Those advantages include the capability of the human ear to detect patterns (detecting regularities), recognise timbres and follow different strands at the same time (i.e. the capability of following different instrument lines). This would offer, in a natural way, the opportunity of rendering different, interdependent variables onto sounds in such a way that a listener could gain relevant insight into the represented information or data.
In particular in the medical context, there have been several investigations using data sonification as a support tool for classification and diagnosis, from working on sonification of medical images to converting EEG to tones, including real-time screening and feedback on EEG signals for epilepsy.
The idea is to use sound to aggregate many “information layers” (many more than any graph or picture can represent) and to support the physician with a more comprehensive representation of the situation.
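The simplest form of this "data to melody" idea is parameter mapping: scale each measurement onto the notes of a musical scale, so rising and falling values become rising and falling pitch. A toy sketch (my own illustration of the general technique, not the project's or any clinical pipeline):

```python
# Map a data series onto a pentatonic scale as MIDI note numbers.
# The scale choice and ranges here are illustrative assumptions.
PENTATONIC = [0, 2, 4, 7, 9]  # semitone offsets within one octave

def value_to_midi(value, lo, hi, base_note=60, octaves=2):
    """Map a value in [lo, hi] to a MIDI note on a pentatonic scale."""
    span = len(PENTATONIC) * octaves
    step = min(int((value - lo) / (hi - lo) * span), span - 1)
    octave, degree = divmod(step, len(PENTATONIC))
    return base_note + 12 * octave + PENTATONIC[degree]

readings = [0.1, 0.5, 0.9, 0.3]  # e.g. normalised sensor values
melody = [value_to_midi(v, 0.0, 1.0) for v in readings]
print(melody)  # -> [62, 72, 81, 67]
```

Constraining the output to a scale keeps the result musical; mapping several variables to pitch, loudness, and timbre at once is what lets a listener follow the separate "instrument lines" Vicinanza describes.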
(2) I understand that as you age certain sounds disappear from your hearing, e.g., people over 25 years of age are not able to hear above 15 kHz. (Note: There seems to be some debate as to when these sounds disappear, after 30, after 20, etc.) Wouldn’t this pose an age restriction on the people who could access the research, or have I misunderstood what you’re doing?
No, there is actually no appreciable reduction in the advantages of sonification with ageing. The only precaution is not to use very high frequencies (above 15 kHz) in the sonification, and this can be avoided without limiting the benefits of audification.
It is always good practice not to use excessively high frequencies since they are not always very well and uniformly perceived by everyone.
Our hearing works at its best in the kHz region (1200 Hz to 3800 Hz).
Thank you Dr. Vicinanza and Tamsin Henderson for this insight into representing data in multiple dimensions using sound and its application in research. And, thank you, too, for sharing a beautiful piece of music.
For the curious, I found some additional information about Dr. Vicinanza and his ‘sound’ work on his Nature Network profile page,
I am a composer, network engineer and researcher. I received my MSc and PhD degrees in Physics and studied piano, percussion and composition.
I worked as a professor of Sound Synthesis, Acoustics and Computer Music (Algorithmic Composition) at the Conservatory of Music of Salerno (Italy).
I currently work as a network engineer in DANTE (www.dante.net) and chair the ASTRA project (www.astraproject.org) for the reconstruction of musical instruments by means of computer models on GÉANT and EUMEDCONNECT.
I am also the co-founder and the technical coordinator of the Lost Sounds Orchestra project (www.lostsoundsorchestra.org).
As a composer and researcher I have always been fascinated by the richness of the information coming from nature. I worked on introducing the sonification of seismic signals (in particular those from active volcanoes) as a scientific tool, working with geophysicists and volcanologists.
I also study applications of grid technologies for music and the visual arts, and as a composer I have taken part in several concerts, digital arts performances, festivals and webcasts.
My other interests (aside from music) include Argentine Tango and watercolors.
ASTRA (Ancient instruments Sound/Timbre Reconstruction Application)
The ASTRA project is a multidisciplinary project aiming at reconstructing the sound, or timbre, of ancient instruments (which no longer exist) using archaeological data such as fragments from excavations, written descriptions and pictures.
The technique used is physical modeling synthesis, a complex digital audio rendering technique that models the time-domain physics of the instrument.
In other words, the basic idea is to recreate a model of the musical instrument and produce the sound by simulating its behavior as a mechanical system. The application then produces one or more sounds corresponding to different configurations of the instrument (i.e. the different notes).
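ASTRA builds detailed physical models on grid infrastructure, which is far beyond a blog-sized example; but the flavour of physical modeling synthesis can be shown with the classic Karplus-Strong algorithm, a minimal plucked-string simulation (my own illustrative stand-in, not ASTRA's actual method):

```python
import numpy as np

def karplus_strong(frequency, duration, sample_rate=44100):
    """Toy plucked-string physical model (Karplus-Strong):
    a noise burst circulates in a delay line whose length sets the
    pitch, while averaging adjacent samples models energy loss."""
    n = int(sample_rate / frequency)        # delay-line length -> pitch
    buf = np.random.uniform(-1, 1, n)       # initial "pluck" (noise burst)
    out = np.empty(int(duration * sample_rate))
    for i in range(len(out)):
        out[i] = buf[i % n]
        # Low-pass averaging: the string loses energy each pass.
        buf[i % n] = 0.5 * (buf[i % n] + buf[(i + 1) % n])
    return out

tone = karplus_strong(440.0, 1.0)  # one second of a 440 Hz "string"
```

Changing the delay-line length or the averaging rule changes the pitch and decay, which is the sense in which "different configurations of the instrument" yield different notes.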
Lost Sounds Orchestra
The Lost Sounds Orchestra is the ASTRA project's orchestra: a unique orchestra made up of reconstructed ancient instruments coming from the ASTRA research activities. It is the first ensemble in the world composed only of reconstructed instruments of the past. Listening to it is like jumping into the past, into a sound world completely new to our ears.
Since I haven’t had occasion to mention either GÉANT or DANTE previously, here’s more about those organizations and some acknowledgements from the news release,
GÉANT is the pan-European research and education network that interconnects Europe’s National Research and Education Networks (NRENs). Together we connect over 50 million users at 10,000 institutions across Europe, supporting research in areas such as energy, the environment, space and medicine.
Operating at speeds of up to 500Gbps and reaching over 100 national networks worldwide, GÉANT remains the largest and most advanced research and education network in the world.
Co-funded by the European Commission under the EU’s 7th Research and Development Framework Programme, GÉANT is a flagship e-Infrastructure key to achieving the European Research Area – a seamless and open European space for online research – and assuring world-leading connectivity between Europe and the rest of the world in support of global research collaborations.
The network and associated services comprise the GÉANT (GN3plus) project, a collaborative effort comprising 41 project partners: 38 European NRENs, DANTE, TERENA and NORDUnet (representing the 5 Nordic countries). GÉANT is operated by DANTE on behalf of Europe’s NRENs.
DANTE (Delivery of Advanced Network Technology to Europe) is a non-profit organisation established in 1993 that plans, builds and operates large scale, advanced networks for research and education. On behalf of Europe’s National Research and Education Networks (NRENs), DANTE has built and operates GÉANT, a flagship e-Infrastructure key to achieving the European Research Area.
Working in cooperation with the European Commission and in close partnership with Europe’s NRENs and international networking partners, DANTE remains fundamental to the success of global research collaboration.
DANTE manages research and education (R&E) networking projects serving Europe (GÉANT), the Mediterranean (EUMEDCONNECT), Sub-Saharan Africa (AfricaConnect), Central Asia (CAREN) regions and coordinates Europe-China collaboration (ORIENTplus). DANTE also supports R&E networking organisations in Latin America (RedCLARA), Caribbean (CKLN) and Asia-Pacific (TEIN*CC). For more information, visit www.dante.net
NASA National Space Science Data Center and the Johns Hopkins University Voyager LECP experiment.
Mariapaola Sorrentino and Giuseppe La Rocca.
I hope one of these days I’ll have a chance to ask a data visualization expert whether they think it’s possible to represent multiple dimensions visually and whether or not some types of data are better represented by sound.
* ‘described’ replaced by ‘discussed’ to avoid repetition, Feb. 10, 2014. (Sometimes I’m miffed by my own writing.)