Category Archives: Music

I hear the proteins singing

Points to anyone who recognized the paraphrasing of the title of the well-loved Canadian movie, “I’ve Heard the Mermaids Singing.” In this case, it’s all about protein folding and data sonification (from an Oct. 20, 2016 news item on phys.org),

Transforming data about the structure of proteins into melodies gives scientists a completely new way of analyzing the molecules that could reveal new insights into how they work – by listening to them. A new study published in the journal Heliyon shows how musical sounds can help scientists analyze data using their ears instead of their eyes.

The researchers, from the University of Tampere in Finland, Eastern Washington University in the US and the Francis Crick Institute in the UK, believe their technique could help scientists identify anomalies in proteins more easily.

An Oct. 20, 2016 Elsevier Publishing press release on EurekAlert, which originated the news item, expands on the theme,

“We are confident that people will eventually listen to data and draw important information from the experiences,” commented Dr. Jonathan Middleton, a composer and music scholar who is based at Eastern Washington University and in residence at the University of Tampere. “The ears might detect more than the eyes, and if the ears are doing some of the work, then the eyes will be free to look at other things.”

Proteins are molecules found in living things that have many different functions. Scientists usually study them visually and using data; with modern microscopy it is possible to directly see the structure of some proteins.

Using a technique called sonification, the researchers can now transform data about proteins into musical sounds, or melodies. They wanted to use this approach to ask three related questions: what can protein data sound like? Are there analytical benefits? And can we hear particular elements or anomalies in the data?

They found that a large proportion of people can recognize links between the melodies and more traditional visuals like models, graphs and tables; it seems hearing these visuals is easier than they expected. The melodies are also pleasant to listen to, encouraging scientists to listen to them more than once and therefore repeatedly analyze the proteins.

The sonifications are created using a combination of Dr. Middleton’s composing skills and algorithms, so that others can use a similar process with their own proteins. The multidisciplinary approach – combining bioinformatics and music informatics – provides a completely new perspective on a complex problem in biology.

“Protein fold assignment is a notoriously tricky area of research in molecular biology,” said Dr. Robert Bywater from the Francis Crick Institute. “One not only needs to identify the fold type but to look for clues as to its many functions. It is not a simple matter to unravel these overlapping messages. Music is seen as an aid towards achieving this unraveling.”

The researchers say their molecular melodies can be used almost immediately in teaching protein science, and after some practice, scientists will be able to use them to discriminate between different protein structures and spot irregularities like mutations.

Proteins are the first stop, but our knowledge of other molecules could also benefit from sonification; one day we may be able to listen to our genomes, and perhaps use this to understand the role of junk DNA [emphasis mine].

About 97% of our DNA (deoxyribonucleic acid) was for decades known as ‘junk DNA’. Around 2012, that notion was challenged, as Stephen S. Hall wrote in an Oct. 1, 2012 article (Hidden Treasures in Junk DNA; What was once known as junk DNA turns out to hold hidden treasures, says computational biologist Ewan Birney) for Scientific American.

Getting back to 2016, here’s a link to and a citation for ‘protein singing’,

Melody discrimination and protein fold classification by Robert P. Bywater & Jonathan N. Middleton. Heliyon, Volume 2, Issue 10 (20 Oct 2016). DOI: 10.1016/j.heliyon.2016.e0017

This paper is open access.

Here’s what the proteins sound like,

Supplementary Audio 3 file for Supplementary Figure 2: 1r75 OHEL sonification full score. [downloaded from the previously cited Heliyon paper]

Joanna Klein has written an Oct. 21, 2016 article for the New York Times providing a slightly different take on this research (Note: Links have been removed),

“It’s used for the concert hall. It’s used for sports. It’s used for worship. Why can’t we use it for our data?” said Jonathan Middleton, the composer at Eastern Washington University and the University of Tampere in Finland who worked with Dr. Bywater.

Proteins have been around for billions of years, but humans still haven’t come up with a good way to visualize them. Right now scientists can shoot a laser at a crystallized protein (which can distort its shape), measure the patterns it spits out and simulate what that protein looks like. These depictions are difficult to sift through and hard to remember.

“There’s no simple equation like e=mc2,” said Dr. Bywater. “You have to do a lot of spade work to predict a protein structure.”

Dr. Bywater had been interested in assigning sounds to proteins since the 1990s. After hearing a song Dr. Middleton had composed called “Redwood Symphony,” which opens with sounds derived from the tree’s DNA, he asked Dr. Middleton for help.

Using a process called sonification (which is the same thing used to assign different ringtones to texts, emails or calls on your cellphone) the team took three proteins and turned their folding shapes — a coil, a turn and a strand — into musical melodies. Each shape was represented by a bunch of numbers, and those numbers were converted into a musical code. A combination of musical sounds represented each shape, resulting in a song of simple patterns that changed with the folds of the protein. Later they played those songs to a group of 38 people together with visuals of the proteins, and asked them to identify similarities and differences between them. The two were surprised that people didn’t really need the visuals to detect changes in the proteins.
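Klein’s description suggests a simple pipeline: each fold shape is coded as numbers, and the numbers become musical material. Purely as illustration (this is not the authors’ actual algorithm, and the motif table below is invented), the idea can be sketched as:

```python
# Purely illustrative sketch of the idea: map secondary-structure codes
# (H = helix, T = turn, S = strand) to short pitch motifs. The motif table
# is invented and is NOT the algorithm from the Heliyon paper.

MOTIFS = {
    "H": [60, 64, 67],  # helix  -> rising major triad (MIDI note numbers)
    "T": [62, 61],      # turn   -> small downward step
    "S": [65, 65],      # strand -> repeated tone
}

def sonify(structure):
    """Turn a secondary-structure string, e.g. 'HHTS', into a melody."""
    melody = []
    for element in structure:
        melody.extend(MOTIFS[element])
    return melody
```

Because each fold element always produces the same motif, a listener can learn to hear where the folds change, which is roughly the effect the study’s participants reported.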

Plus, I have more about data sonification in a Feb. 7, 2014 posting regarding a duet based on data from Voyager 1 & 2 spacecraft.

Finally, I hope my next Steep project will include sonification of data on gold nanoparticles. I will keep you posted on any developments.

Move objects by playing a melody

At this point, moving objects by playing a melody is a laboratory experiment but who knows, perhaps one day you’ll be able to sing your front door open. A Sept. 9, 2016 news item on ScienceDaily announces the research on acoustic waves,

Researchers of Aalto University have made a breakthrough in controlling the motion of multiple objects on a vibrating plate with a single acoustic source. By playing carefully constructed melodies, the scientists can simultaneously and independently move multiple objects on the plate towards desired targets. This has enabled the scientists, for instance, to write words consisting of separate letters with loose metal pieces on the plate by playing a melody.

A Sept. 9, 2016 Aalto University press release (also on EurekAlert), which originated the news item, describes the research in more detail,

Already in 1787, the first studies of sand moving on a vibrating plate were done by Ernst Chladni, known as the father of acoustics. Chladni discovered that when a plate vibrates at a given frequency, objects move towards a few positions, called the nodal lines, specific to that frequency. Since then, the prevailing view has been that the particles’ motion is random on the plate before they reach the nodal lines. “We have shown that the motion is also predictable away from the nodal lines. Now that the object does not have to be at a nodal line, we have much more freedom in controlling its motion and have achieved independent control of up to six objects simultaneously using just one single actuator. We are very excited about the results, because this probably is a new world record of how many independent motions can be controlled by a single acoustic actuator,” says Professor Quan Zhou.

The objects to be controlled have been placed on top of a manipulation plate, and imaged by a tracking camera. Based on the detected positions, the computer goes through a list of music notes to find a note that is most likely to move the objects towards the desired directions. After playing the note, the new positions of the objects are detected, and the control cycle is restarted. This cycle is repeated until the objects have reached their desired target locations. The notes played during the control cycles form a sequence, a bit like music.
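The cycle described above (track positions, choose the note most likely to move the objects toward their targets, play it, repeat) is essentially a greedy search over notes. A minimal sketch, assuming a hypothetical pre-measured table of how each note displaces each object (the real Aalto system models this empirically from the plate itself):

```python
import math

# Hypothetical sketch of one control-cycle step. "note_effects" is an
# invented table mapping each candidate note to the (dx, dy) displacement
# it is expected to cause for each object; the real system learns this.

def step_toward_targets(positions, targets, note_effects):
    """Pick the note whose modelled displacements leave the objects
    closest to their targets; return (best_note, predicted_positions)."""
    def total_error(points):
        return sum(math.dist(p, t) for p, t in zip(points, targets))

    best_note, best_positions = None, positions
    best_error = total_error(positions)
    for note, moves in note_effects.items():
        predicted = [(x + dx, y + dy)
                     for (x, y), (dx, dy) in zip(positions, moves)]
        error = total_error(predicted)
        if error < best_error:
            best_error, best_note, best_positions = error, note, predicted
    return best_note, best_positions
```

Repeating this step until every object reaches its target produces a note sequence, which is why the control signal ends up sounding “a bit like music.”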

The new method has been applied to manipulate a wide range of miniature objects including electronic components, water droplets, plant seeds, candy balls and metal parts. “Some of the practical applications we foresee include conveying and sorting microelectronic chips, delivering drug-loaded particles for pharmaceutical applications or handling small liquid volumes for lab on chips,” says Zhou. “Also, the basic idea should be transferrable to other kinds of systems with vibration phenomena. For example, it should be possible to use waves and ripples to control floating objects in a pond using our technique.”

Here’s a link to and a citation for the paper,

Controlling the motion of multiple objects on a Chladni plate by Quan Zhou, Veikko Sariola, Kourosh Latifi, Ville Liimatainen. Nature Communications 7, Article number: 12764 doi:10.1038/ncomms12764 Published 09 September 2016

This article is open access.

Repurposing music from Broadway hit Hamilton to give a science perspective

Thanks to David Bruggeman’s July 27, 2016 posting for information about a piece from Tim Blais (A Capella Science).

Before getting to the video: The musical “Hamilton” is about Alexander Hamilton, one of the founding fathers of the United States, while Blais’ version, featuring a cast of YouTube purveyors of science (I recognized Baba Brinkman), is about Sir William Rowan Hamilton, an extraordinary Irish physicist, astronomer, and mathematician.

I know I’ll need more than one viewing.

A Moebius strip of moving energy (vibrations)

This research extends a theorem which posits that waves adapt to slowly changing conditions and return to their original vibration; the new work shows the waves can instead be manipulated into a new state. A July 25, 2016 news item on ScienceDaily makes the announcement,

Yale physicists have created something similar to a Moebius strip of moving energy between two vibrating objects, opening the door to novel forms of control over waves in acoustics, laser optics, and quantum mechanics.

The discovery also demonstrates that a century-old physics theorem offers much greater freedom than had long been believed. …

A July 25, 2016 Yale University news release (also on EurekAlert) by Jim Shelton, which originated the news item, expands on the theme,

Yale’s experiment is deceptively simple in concept. The researchers set up a pair of connected, vibrating springs and studied the acoustic waves that traveled between them as they manipulated the shape of the springs. Vibrations — as well as other types of energy waves — are able to move, or oscillate, at different frequencies. In this instance, the springs vibrate at frequencies that merge, similar to a Moebius strip that folds in on itself.

The precise spot where the vibrations merge is called an “exceptional point.”
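For readers curious what “merging frequencies” means concretely: a standard textbook toy model (not the Yale apparatus) is two coupled modes with the same frequency and unequal damping, whose complex eigenfrequencies coalesce when the coupling equals half the damping difference:

```python
import cmath

# Textbook toy model (not the Yale setup): two coupled modes with the same
# frequency w, dampings g1 and g2, and coupling k. The two complex
# eigenfrequencies coalesce -- the "exceptional point" -- when
# k equals |g1 - g2| / 2.

def eigenfrequencies(w, g1, g2, k):
    mean_loss = (g1 + g2) / 2
    split = cmath.sqrt(k ** 2 - ((g1 - g2) / 2) ** 2)
    return (w - 1j * mean_loss + split, w - 1j * mean_loss - split)
```

Below the exceptional point the modes differ in damping; above it they differ in frequency; exactly at it the two solutions become one, which is what gives the point its unusual topology.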

“It’s like a guitar string,” said Jack Harris, a Yale associate professor of physics and applied physics, and the study’s principal investigator. “When you pluck it, it may vibrate in the horizontal plane or the vertical plane. As it vibrates, we turn the tuning peg in a way that reliably converts the horizontal motion into vertical motion, regardless of the details of how the peg is turned.”

Unlike a guitar, however, the experiment required an intricate laser system to precisely control the vibrations, and a cryogenic refrigeration chamber in which the vibrations could be isolated from any unwanted disturbance.

The Yale experiment is significant for two reasons, the researchers said. First, it suggests a very dependable way to control wave signals. Second, it demonstrates an important — and surprising — extension to a long-established theorem of physics, the adiabatic theorem.

The adiabatic theorem says that waves will readily adapt to changing conditions if those changes take place slowly. As a result, if the conditions are gradually returned to their initial configuration, any waves in the system should likewise return to their initial state of vibration. In the Yale experiment, this does not happen; in fact, the waves can be manipulated into a new state.

“This is a very robust and general way to control waves and vibrations that was predicted theoretically in the last decade, but which had never been demonstrated before,” Harris said. “We’ve only scratched the surface here.”

In the same edition of Nature, a team from the Vienna University of Technology also presented research on a system for wave control via exceptional points.

Here’s a link to and a citation for the paper,

Topological energy transfer in an optomechanical system with exceptional points by H. Xu, D. Mason, Luyao Jiang, & J. G. E. Harris. Nature (2016) doi:10.1038/nature18604 Published online 25 July 2016

This paper is behind a paywall.

Curiosity Collider (Vancouver, Canada) presents Neural Constellations: Exploring Connectivity

I think of Curiosity Collider as an informal art/science presenter but I gather the organizers’ ambitions are grander. From the Curiosity Collider’s About Us page,

Curiosity Collider provides an inclusive community [emphasis mine] hub for curious innovators from any discipline. Our non-profit foundation, based in Vancouver, Canada, fosters participatory partnerships between science & technology, art & culture, business communities, and educational foundations to inspire new ways to experience science. The Collider’s growing community supports and promotes the daily relevance of science with our events and projects. Curiosity Collider is a catalyst for collaborations that seed and grow engaging science communication projects.

Be inspired by the curiosity of others. Our Curiosity Collider events cross disciplinary lines to promote creative inspiration. Meet scientists, visual and performing artists, culinary perfectionists, passionate educators, and entrepreneurs who share a curiosity for science.

Help us create curiosity for science. Spark curiosity in others with your own ideas and projects. Get in touch with us and use our curiosity events to showcase how your work creates innovative new ways to experience science.

I wish they hadn’t described themselves as an “inclusive community.” This often means exactly the opposite.

Take, for example, the website. The background is black, the headings are white, and the text is grey. This is a website for people under the age of 40. If you want to be inclusive, you make your website legible for everyone.

That said, there’s an upcoming Curiosity Collider event which looks promising (from a July 20, 2016 email notice),

Neural Constellations: Exploring Connectivity

An Evening of Art, Science and Performance under the Dome

“We are made of star stuff,” Carl Sagan once said. From constellations to our nervous system, from stars to our neurons. We’re colliding neuroscience and astronomy with performance art, sound, dance, and animation for one amazing evening under the planetarium dome. Together, let’s explore similar patterns at the macro (astronomy) and micro (neurobiology) scale by taking a tour through both outer and inner space.

This show is curated by Curiosity Collider’s Creative Director Char Hoyt, along with Special Guest Curator Naila Kuhlmann, and developed in collaboration with the MacMillan Space Centre. There will also be an Art-Science silent auction to raise funding for future Curiosity Collider activities.

Participating performers include:

The July 20, 2016 notice also provides information about date, time, location, and cost,

When
7:30pm on Thursday, August 18th 2016. Join us for drinks and snacks when doors open at 6:30pm.

Where
H. R. MacMillan Space Centre (1100 Chestnut Street, Vancouver, BC)

Cost
$20.00 sliding scale. Proceeds will be used to cover the cost of running this event, and to fund future Curiosity Collider events. Curiosity Collider is a registered BC non-profit organization. Purchase tickets on our Eventbrite page.

Head to the Facebook event page: Let us know you are coming and share this event with others! We will also share event updates and performer profiles on the Facebook page.

There is a pretty poster,

CuriosityCollider_AugEvent_NeuralConstellations

[downloaded from http://www.curiositycollider.org/events/]

Enjoy!

A selection of science songs for summer

Canada’s Perimeter Institute for Theoretical Physics (PI) has compiled a list of science songs and it includes a few Canadian surprises. Here’s more from the July 21, 2016 PI notice received via email.

Ah, summer.

School’s out, the outdoors beckon, and with every passing second a 4.5-billion-year-old nuclear fireball fuses 620 million tons of hydrogen so brightly you’ve gotta wear shades.

Who says you have to stop learning science over the summer?

All you need is the right soundtrack to your next road trip, backyard barbeque, or day at the beach.

Did we miss your favourite science song? Tweet us @Perimeter with the hashtag #SciencePlaylist.

You can find the list and accompanying videos on The Ultimate Science Playlist webpage on the PI website. Here are a few samples,

“History of Everything” – Barenaked Ladies (The Big Bang Theory theme)

You probably know this one as the theme song of The Big Bang Theory. But here’s something you might not know. The tune began as an improvised ditty Barenaked Ladies’ singer Ed Robertson performed one night in Los Angeles after reading Simon Singh’s book Big Bang: The Most Important Scientific Discovery of All Time and Why You Need to Know About It. Lo and behold, in the audience that night were Chuck Lorre and Bill Prady, creators of The Big Bang Theory. The rest is history (of everything).

“Bohemian Gravity” – A Capella Science (Tim Blais)

Tim Blais, the one-man choir behind A Capella Science, is a master at conveying complex science in fun musical parodies. “Bohemian Gravity” is his most famous, but be sure to also check out our collaboration with him about gravitational waves, “LIGO: Feel That Space.”

“NaCl” – Kate and Anna McGarrigle

“NaCl” is a romantic tale of the courtship of a chlorine atom and a sodium atom, who marry and become sodium chloride. “Think of the love you eat,” sings Kate McGarrigle, “when you salt your meat.”

This is just a sampling. At this point, there are 15 science songs on the webpage. Surprisingly, rap is not represented. One other note: you’ll notice all of my samples are Canadian. (Sadly, I had other videos as well, but every time I saved a draft I lost at least half or more. It seems the maximum allowed to me is three.)

Here are the others I wanted to include:

“Mandelbrot Set” – Jonathan Coulton

Singer-songwriter Jonathan Coulton (JoCo, to fans) is arguably the patron saint of geek-pop, having penned the uber-catchy credits songs of the Portal games, as well as this loving tribute to a particular set of complex numbers that has a highly convoluted fractal boundary when plotted.

“Higgs Boson Sonification” – Traq 

CERN physicist Piotr Traczyk (a.k.a. Traq) “sonified” data from the experiment that uncovered the Higgs boson, turning the discovery into a high-energy metal riff.

“Why Does the Sun Shine?” – They Might Be Giants

Choosing just one song for this playlist by They Might Be Giants is a tricky task, since They Definitely Are Nerdy. But this one celebrates physics, chemistry, and astronomy while also being absurdly catchy, so it made the list. Honourable mention goes to their entire album for kids, Here Comes Science.

In any event, the PI list is a great introduction to science songs and The Ultimate Science Playlist includes embedded videos for all 15 of the songs selected so far. Happy Summer!

Cornwall (UK) connects with University of Southern California for performance by a quantum computer (D-Wave) and mezzo soprano Juliette Pochin

The upcoming performance featuring a quantum computer built by D-Wave Systems (a Canadian company) and Welsh mezzo soprano Juliette Pochin is the première of “Superposition” by Alexis Kirke. A July 13, 2016 news item on phys.org provides more detail,

What happens when you combine the pure tones of an internationally renowned mezzo soprano and the complex technology of a $15-million quantum supercomputer?

The answer will be exclusively revealed to audiences at the Port Eliot Festival [Cornwall, UK] when Superposition, created by Plymouth University composer Alexis Kirke, receives its world premiere later this summer.

A D-Wave 1000 Qubit Quantum Processor. Credit: D-Wave Systems Inc

A July 13, 2016 Plymouth University press release, which originated the news item, expands on the theme,

Combining the arts and sciences, as Dr Kirke has done with many of his previous works, the 15-minute piece will begin dark and mysterious with celebrated performer Juliette Pochin singing a low-pitched slow theme.

But gradually the quiet sounds of electronic ambience will emerge over or beneath her voice, as the sounds of her singing are picked up by a microphone and sent over the internet to the D-Wave quantum computer at the University of Southern California.

It then reacts with behaviours in the quantum realm that are turned into sounds back in the performance venue, the Round Room at Port Eliot, creating a unique and ground-breaking duet.

And when the singer ends, the quantum processes are left to slowly fade away naturally, making their final sounds as the lights go to black.

Dr Kirke, a member of the Interdisciplinary Centre for Computer Music Research at Plymouth University, said:

“There are only a handful of these computers accessible in the world, and this is the first time one has been used as part of a creative performance. So while it is a great privilege to be able to put this together, it is an incredibly complex area of computing and science and it has taken almost two years to get to this stage. For most people, this will be the first time they have seen a quantum computer in action and I hope it will give them a better understanding of how it works in a creative and innovative way.”

Plymouth University is the official Creative and Cultural Partner of the Port Eliot Festival, taking place in South East Cornwall from July 28 to 31, 2016 [emphasis mine].

And Superposition will be one of a number of showcases of University talent and expertise as part of the first Port Eliot Science Lab. Being staged in the Round Room at Port Eliot, it will give festival goers the chance to explore science, see performances and take part in a range of experiments.

The three-part performance will tell the story of Niobe, one of the more tragic figures in Greek mythology, but in this case a nod to the fact the heart of the quantum computer contains the metal named after her, niobium. It will also feature a monologue from Hamlet, interspersed with terms from quantum computing.

This is the latest of Dr Kirke’s pioneering performance works, with previous productions including an opera based on the financial crisis and a piece using a cutting edge wave-testing facility as an instrument of percussion.

Geordie Rose, CTO and Founder, D-Wave Systems, said:

“D-Wave’s quantum computing technology has been investigated in many areas such as image recognition, machine learning and finance. We are excited to see Dr Kirke, a pioneer in the field of quantum physics and the arts, utilising a D-Wave 2X in his next performance. Quantum computing is positioned to have a tremendous social impact, and Dr Kirke’s work serves not only as a piece of innovative computer arts research, but also as a way of educating the public about these new types of exotic computing machines.”

Professor Daniel Lidar, Director of the USC Center for Quantum Information Science and Technology, said:

“This is an exciting time to be in the field of quantum computing. This is a field that was purely theoretical until the 1990s and now is making huge leaps forward every year. We have been researching the D-Wave machines for four years now, and have recently upgraded to the D-Wave 2X – the world’s most advanced commercially available quantum optimisation processor. We were very happy to welcome Dr Kirke on a short training residence here at the University of Southern California recently; and are excited to be collaborating with him on this performance, which we see as a great opportunity for education and public awareness.”

Since I can’t be there, I’m hoping they will be able to successfully livestream the performance. According to Kirke, who very kindly responded to my query, the festival’s remote location can make livecasting a challenge. He did note that a post-performance documentary is planned and there will be footage from the performance.

He has also provided more information about the singer and the technical/computer aspects of the performance (from a July 18, 2016 email),

Juliette Pochin: I’ve worked with her before a couple of years ago. She has an amazing voice and style, is musically adventurous (she is a music producer herself), and brings great grace and charisma to a performance. She can be heard in the Harry Potter and Lord of the Rings soundtracks and has performed at venues such as the Royal Albert Hall and Proms in the Park, and with Meat Loaf!

Score: The score is in 3 parts of about 5 minutes each. There is a traditional score for parts 1 and 3 that Juliette will sing from. I wrote these manually in traditional music notation. However she can sing in free time and wait for the computer to respond. It is a very dramatic score, almost operatic. The computer’s responses are based on two algorithms: a superposition chord system, and a pitch-loudness entanglement system. The superposition chord system sends a harmony problem to the D-Wave in response to Juliette’s approximate pitch amongst other elements. The D-Wave uses an 8-qubit optimizer to return potential chords. Each potential chord has an energy associated with it. In theory the lowest energy chord is that preferred by the algorithm. However in the performance I will combine the chord solutions to create superposition chords. These are chords which represent, in a very loose way, the superposed solutions which existed in the D-Wave before collapse of the qubits. Technically they are the results of multiple collapses, but metaphorically I can’t think of a more beautiful representation of superposition: chords. These will accompany Juliette, sometimes clashing with her. Sometimes giving way to her.
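Dr. Kirke’s “superposition chord” idea, combining several candidate chord solutions returned with their energies, might be loosely sketched like this (purely illustrative; the chords, energies and merging rule here are invented, and his actual mapping from D-Wave solutions to chords is his own):

```python
# Loose illustration of merging several chord solutions, each returned
# with an energy, into one "superposition chord". Everything here is
# invented for illustration; it is not Dr. Kirke's algorithm.

def superposition_chord(solutions, k=3):
    """Merge the k lowest-energy chords (lists of MIDI pitches) into one.

    `solutions` is a list of (chord, energy) pairs, as if returned by an
    optimizer that scores candidate harmonisations.
    """
    lowest = sorted(solutions, key=lambda pair: pair[1])[:k]
    pitches = set()
    for chord, _energy in lowest:
        pitches.update(chord)
    return sorted(pitches)
```

The merged chord deliberately contains notes from several low-energy solutions at once, which is the metaphor Kirke describes: hearing the candidate answers “superposed” rather than only the single best one.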

The second subsystem generates non-pitched noises of different lengths, roughnesses and loudness. These are responses to Juliette, but also a result of a simple D-Wave entanglement. We know the D-Wave can entangle in 8-qubit groups. I send a binary representation of Juliette’s loudness to 4 qubits and one of approximate pitch to another 4, then entangle the two. The chosen entanglement weights are selected for their variety of solutions amongst the qubits, rather than by a particular musical logic. So the non-pitched subsystem is more of a sonification of entanglement than a musical algorithm.
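The encoding step he describes, 4 qubits for loudness and 4 for approximate pitch, amounts to quantising each quantity to 4 bits. A hypothetical sketch (the ranges and scaling are invented, and the D-Wave embedding and entanglement weights are omitted entirely):

```python
# Hypothetical sketch of the 4-bit encodings described above. The loudness
# and pitch ranges are invented; the actual D-Wave embedding and the
# entanglement step are not shown.

def to_4bit(value, lo, hi):
    """Quantise value in [lo, hi] to a 4-bit list, most significant bit first."""
    level = round((value - lo) / (hi - lo) * 15)  # 16 levels -> 4 bits
    level = max(0, min(15, level))
    return [(level >> bit) & 1 for bit in (3, 2, 1, 0)]

def encode_voice(loudness_db, pitch_hz):
    """8-bit register: 4 bits for loudness, 4 for approximate pitch."""
    return to_4bit(loudness_db, 30, 90) + to_4bit(pitch_hz, 80, 1100)
```

The resulting 8 bits would then be loaded onto the two 4-qubit groups before entangling them, per Kirke’s description.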

Thank you Dr. Kirke for a fascinating technical description and for a description of Juliette Pochin that makes one long to hear her in performance.

For anyone thinking of attending the performance, or who is simply curious, you can find out more about the Port Eliot festival here, Juliette Pochin here, and Alexis Kirke here.

For anyone wondering about data sonification, I also have a Feb. 7, 2014 post featuring a data sonification project by Dr. Domenico Vicinanza, which includes a sound clip of his Voyager 1 & 2 spacecraft duet.

Music videos for teaching science and a Baba Brinkman update

I have two news bits concerning science and music.

Music videos and science education

Researchers in the US and New Zealand have published a study on how effective music videos are for teaching science. Hint: there are advantages but don’t expect perfection. From a May 25, 2016 news item on ScienceDaily,

Does “edutainment” such as content-rich music videos have any place in the rapidly changing landscape of science education? A new study indicates that students can indeed learn serious science content from such videos.

The study, titled ‘Leveraging the power of music to improve science education’ and published by International Journal of Science Education, examined over 1,000 students in a three-part experiment, comparing learners’ understanding and engagement in response to 24 musical and non-musical science videos.

A May 25, 2016 Taylor & Francis (publishers) press release, which originated the news item, quickly gets to the point,

The central findings were that (1) across ages and genders, K-16 students who viewed music videos improved their scores on quizzes about content covered in the videos, and (2) students preferred music videos to non-musical videos covering equivalent content.  Additionally, the results hinted that videos with music might lead to superior long-term retention of the content.

“We tested most of these students outside of their normal classrooms,” commented lead author Greg Crowther, Ph.D., a lecturer at the University of Washington.  “The students were not forced by their teachers to watch these videos, and they didn’t have the spectre of a low course grade hanging over their heads.  Yet they clearly absorbed important information, which highlights the great potential of music to deliver key content in an appealing package.”

The study was inspired by the classroom experiences of Crowther and co-author Tom McFadden [emphasis mine], who teaches science at the Nueva School in California.  “Tom and I, along with many others, write songs for and with our students, and we’ve had a lot of fun doing that,” said Crowther.  “But rather than just assuming that this works, we wanted to see whether we could document learning gains in an objective way.”

The findings of this study have implications for teacher practitioners, policy-makers and researchers who are looking for innovative ways to improve science education.  “Music will always be a supplement to, rather than a substitute for, more traditional forms of teaching,” said Crowther.  “But teachers who want to connect with their students through music now have some additional data on their side.”

The paper is quite interesting (two of the studies were run in the US and one in New Zealand) and I notice that Tom McFadden of the Science Rap Academy is one of the authors (more about him later); here’s a link to and a citation for the paper,

Leveraging the power of music to improve science education by Gregory J. Crowther, Tom McFadden, Jean S. Fleming, & Katie Davis. International Journal of Science Education, Volume 38, Issue 1, 2016, pages 73-95. DOI: 10.1080/09500693.2015.1126001. Published online: 18 Jan 2016

This paper is open access. As I noted earlier, the research is promising but science music videos are not the answer to all science education woes.

One of my more recent pieces featuring Tom McFadden and his Science Rap Academy is this April 21, 2015 posting. The 2016 edition of the academy started in January 2016 according to David Bruggeman’s Jan. 27, 2016 posting on his Pasco Phronesis blog. You can find the Science Rap Academy’s YouTube channel here and the playlist here.

Canadian science rappers and musicians

I promised the latest about Baba Brinkman and here it is (from a May 14, 2016 notice received via email),

Not many people know this, but Dylan Thomas [legendary Welsh poet] was one of my early influences as a poet and one of the reasons I was inspired to pursue versification as a career. Well now Literature Wales has commissioned me to write and record a new rap/poem in celebration of Dylan Day 2016 (today [May 14, 2016]), which I dutifully did. You can watch the video here to check out what a hip-hop flow and a Thomas poem have in common.

In other news, I’ll be performing a couple of one-off shows over the next few weeks. Rap Guide to Religion is on at NECSS in New York on May 15 (tomorrow) [Note: Link removed as the event date has now passed] and Rap Guide to Evolution is at the Reason Rally in DC June 2nd [2016]. I’m also continuing with the off-Broadway run of Rap Guide to Climate Chaos, recording the climate chaos album and looking to my next round of writing and touring, so if you have ideas about venues I could play please send me a note.

You can find out more about Baba Brinkman (a Canadian rapper who has written and performed many science raps and lives in New York) here.

There’s another Canadian who produces musical science videos, Tim Blais (physicist and Montréaler) who was most recently featured here in a Feb. 12, 2016 posting. You can find a selection of Blais’ videos on his A Capella Science channel on YouTube.

Interconnected performance analysis music hub shared by McGill University and Université de Montréal announced* June 2, 2016

The press releases promise the Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT) will shape the future of music. The CIRMMT June 2, 2016 (Future of Music) press release (received via email) describes the funding support,

A significant investment of public and private support that will redefine the future of music research in Canada by transforming the way musicians compose, listen to and perform music.

The Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT), the Schulich School of Music of McGill University and the Faculty of Music of l’Université de Montréal are creating a unique interconnected research hub that will quite literally link two exceptional spaces at two of Canada’s most renowned music schools.

Imagine a new space and community where musicians, scientists and engineers join forces to gain a better understanding of the influence that music has on individuals as well as their physical, psychological and even neurological conditions; experience the acoustics of an 18th century Viennese concert hall created with the touch of a fingertip; or attend an orchestral performance in one concert hall while hearing and seeing musicians performing from a completely different venue across town… All this and more will soon become possible here in Montreal!

The combination of public and private gifts will broaden our musical horizons exponentially thanks to a significant investment in music research in Canada: over $14.5 million in grants from the Canada Foundation for Innovation (CFI), the Government of Quebec and the Fonds de Recherche du Québec (FRQ), plus a substantial additional gift of $2.5 million from private philanthropy.

“We are grateful for this exceptional investment in music research from both the federal and provincial governments and from our generous donors,” says McGill Principal Suzanne Fortier. “This will further the collaboration between these two outstanding music schools and support the training of the next generation of music researchers and artists. For anyone who loves music, this is very exciting news.”

There’s not much technical detail in this one but here it is,

Digital channels coupling McGill University’s Music Multimedia Room (MMR – a large, sound-isolated performance lab) and l’Université de Montréal’s Salle Claude Champagne ([SCC -] a superb concert hall) will transform these two exceptional spaces into the world’s leading research facility for the scientific study of live performance, movement of recorded sound in space, and distributed performance (where musicians in different locations perform together).

“The interaction between scientific/technological research and artistic practice is one of the most fruitful avenues for future developments in both fields. This remarkable investment in music research is a wonderful recognition of the important contributions of the arts to Canadian society,” says Sean Ferguson, Dean of the Schulich School of Music.

The other CIRMMT June 2, 2016 (Collaborative hub) press release (received via email) elaborates somewhat on the technology,

The MMR (McGill University’s Music Multimedia Room) will undergo complete renovations which include the addition of high quality variable acoustical treatment and a state-of-the-art rigging system. An active enhancement and sound spatialization system, together with stereoscopic projectors and displays, will provide virtual acoustic and immersive environments. At the SCC (l’Université de Montréal’s Salle Claude Champagne), the creation of a laboratory, a control room and a customizable rigging system will enable the installation and utilization of new research equipment in this acoustically-rich environment. These improvements will drastically augment the research possibilities in the hall, making it a unique hub in Canada for researchers to validate their experiments in a real concert hall.

“This infrastructure will provide exceptional spaces for performance analysis of multiple performers and audience members simultaneously, with equipment such as markerless motion-capture equipment and eye trackers. It will also connect both spaces for experimentations on distributed performances and will make possible new kinds of multimedia artworks.”

The research and benefits

The research program includes looking at audio recording technologies, audio and video in immersive environments, and ultra-videoconferencing, leading to the development of new technologies for audio recording, film, television, distance education, and multi-media artworks; as well as a focus on cognition and perception in musical performance by large ensembles and on the rhythmical synchronization and sound blending of performers.

Social benefits include distance learning, videoconferencing, and improvements to the quality of both recorded music and live performance. Health benefits include improved hearing aids, noise reduction in airplanes and public spaces, and science-based music pedagogies and therapy. Economic benefits include innovations in sound recording, film and video games, and the training of highly qualified personnel across disciplines.

Amongst other activities they will be exploring data sonification as it relates to performance.
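For readers curious about what sonification means in practice: at its simplest, it is a mapping from data values to an audible parameter such as pitch. Here is a minimal sketch in Python using only the standard library; everything in it (the function names, the A3–A5 pitch range, the sample data) is my own invention for illustration, not code from any of the projects mentioned above:

```python
import math
import struct
import wave

def value_to_frequency(value, vmin, vmax, fmin=220.0, fmax=880.0):
    """Linearly map a data value onto a pitch range (A3 to A5 by default)."""
    if vmax == vmin:
        return fmin
    return fmin + (value - vmin) / (vmax - vmin) * (fmax - fmin)

def sonify(data, path="sonified.wav", rate=44100, note_seconds=0.25):
    """Render each data point as a short sine tone and write a mono WAV file."""
    vmin, vmax = min(data), max(data)
    frames = bytearray()
    for value in data:
        freq = value_to_frequency(value, vmin, vmax)
        for i in range(int(rate * note_seconds)):
            # 16-bit signed sample at half amplitude to avoid clipping
            sample = int(32767 * 0.5 * math.sin(2 * math.pi * freq * i / rate))
            frames += struct.pack("<h", sample)
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)   # mono
        wav.setsampwidth(2)   # 2 bytes per sample (16-bit)
        wav.setframerate(rate)
        wav.writeframes(bytes(frames))

# e.g. sonify([1.2, 3.4, 2.2, 5.0, 0.8]) writes a five-note melody
# where higher data values become higher pitches
```

Real research sonification (of protein structures or musical performance data) maps many more dimensions than pitch alone, but the core idea is this kind of data-to-parameter mapping.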

Hopefully, I’ll have more after the livestreamed press conference being held this afternoon, June 2, 2016 (2:30 pm EST) at the CIRMMT.

*’opens’ changed to ‘announced’ on June 2, 2016 at 1335 hours PST.

ETA June 8, 2016: I did attend the press conference via livestream. There was some lovely violin played and the piece proved to be a demonstration of the work they’re hoping to expand on now that there will be a CIRMMT (pronounced kermit). There was a lot of excitement and I think that’s largely due to the number of years it’s taken to get to this point. One of the speakers reminisced about being a music student at McGill in the 1970s when they first started talking about getting a new music building.

They did get their building but were unable to complete it until these 2016 funds were awarded. Honestly, all the speakers seemed a bit giddy with delight. I wish them all congratulations!

The song is you: a McGill University, University of Cambridge, and Stanford University research collaboration

These days I’m thinking about sound, music, spoken word, and more as I prepare for a new art/science piece. It’s very early stages so I don’t have much more to say about it but along those lines of thought, there’s a recent piece of research on music and personality that caught my eye. From a May 11, 2016 news item on phys.org,

A team of scientists from McGill University, the University of Cambridge, and Stanford Graduate School of Business developed a new method of coding and categorizing music. They found that people’s preference for these musical categories is driven by personality. The researchers say the findings have important implications for industry and health professionals.

A May 10, 2016 McGill University news release, which originated the news item, provides some fascinating suggestions for new categories for music,

There are a multitude of adjectives that people use to describe music, but in a recent study to be published this week in the journal Social Psychological and Personality Science, researchers show that musical attributes can be grouped into three categories. Rather than relying on the genre or style of a song, the team of scientists led by music psychologist David Greenberg with the help of Daniel J. Levitin from McGill University mapped the musical attributes of song excerpts from 26 different genres and subgenres, and then applied a statistical procedure to group them into clusters. The study revealed three clusters, which they labeled Arousal, Valence, and Depth. Arousal describes intensity and energy in music; Valence describes the spectrum of emotions in music (from sad to happy); and Depth describes intellect and sophistication in music. They also found that characteristics describing music from a single genre (both rock and jazz separately) could be grouped in these same three categories.

The findings suggest that this may be a useful alternative to grouping music into genres, which is often based on social connotations rather than the attributes of the actual music. It also suggests that those in academia and industry (e.g. Spotify and Pandora) that are already coding music on a multitude of attributes might save time and money by coding music around these three composite categories instead.

The researchers also conducted a second study of nearly 10,000 Facebook users who indicated their preferences for 50 musical excerpts from different genres. The researchers were then able to map preferences for these three attribute categories onto five personality traits and 30 detailed personality facets. For example, they found people who scored high on Openness to Experience preferred Depth in music, while Extraverted excitement-seekers preferred high Arousal in music. And those who scored high on Neuroticism preferred negative emotions in music, while those who were self-assured preferred positive emotions in music. As the title from the old Kern and Hammerstein song suggests, “The Song is You”. That is, the musical attributes that you like most reflect your personality. It also provides scientific support for what Joni Mitchell said in a 2013 interview with the CBC: “The trick is if you listen to that music and you see me, you’re not getting anything out of it. If you listen to that music and you see yourself, it will probably make you cry and you’ll learn something about yourself and now you’re getting something out of it.”
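The underlying analysis, relating a personality score to a preference score across many listeners, is at its core a correlation. A toy sketch in Python, with invented numbers; neither the values nor the variable names come from the study:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: each listener's Openness to Experience score and
# their mean liking rating for high-Depth excerpts (invented numbers).
openness = [2.1, 3.4, 4.8, 3.9, 1.5]
depth_liking = [2.0, 3.1, 4.5, 4.2, 1.9]
r = pearson_r(openness, depth_liking)  # positive for this invented data
```

A positive r here would mirror the reported pattern (high Openness going with a preference for Depth); the actual study worked with thousands of listeners and far richer personality facets than this five-person toy.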

The researchers hope that this information will be helpful not only to music therapists but also to health care professionals and even hospitals. For example, recent evidence has shown that music listening can increase recovery after surgery. The researchers argue that information about music preferences and personality could inform a music listening protocol after surgery to boost recovery rates.

The article is another in a series of studies that Greenberg and his team have published on music and personality. This past July [2015], they published an article in PLOS ONE showing that people’s musical preferences are linked to thinking styles. And in October [2015], they published an article in the Journal of Research in Personality, identifying the personality trait Openness to Experience as a key predictor of musical ability, even in non-musicians. This series of studies tells us that there are close links between our personality and musical behavior that may be beyond our control and awareness.

Readers can find out how they score on the music and personality quizzes at www.musicaluniverse.org.

David M. Greenberg, lead author from Cambridge University and City University of New York said: “Genre labels are informative but we’re trying to transcend them and move in a direction that points to the detailed characteristics in music that are driving people’s preferences and emotional reactions.”

Greenberg added: “As a musician, I see how vast the powers of music really are, and unfortunately, many of us do not use music to its full potential. Our ultimate goal is to create science that will help enhance the experience of listening to music. We want to use this information about personality and preferences to increase the day-to-day enjoyment and peak experiences people have with music.”

William Hoffman in a May 11, 2016 article for Inverse describes the work in connection with recently released new music from Radiohead and an upcoming release from Chance the Rapper (along with a brief mention of Drake), Note: Links have been removed,

Music critics regularly scour Thesaurus.com for the best adjectives to throw into their perfectly descriptive melodious disquisitions on the latest works from Drake, Radiohead, or whomever. And listeners of all walks have, since the beginning of music itself, been guilty of lazily pigeonholing artists into numerous socially constructed genres. But all of that can be (and should be) thrown out the window now, because new research suggests that, to perfectly match music to a listener’s personality, all you need are these three scientific measurables [arousal, valence, depth].

This suggests that a slow, introspective gospel song from Chance The Rapper’s upcoming album could have the same depth as a track from Radiohead’s A Moon Shaped Pool. So a system of categorization based on Greenberg’s research would, surprisingly but rightfully, place the rap and rock works in the same bin.

Here’s a link to and a citation for the latest paper,

The Song Is You: Preferences for Musical Attribute Dimensions Reflect Personality by David M. Greenberg, Michal Kosinski, David J. Stillwell, Brian L. Monteiro, Daniel J. Levitin, and Peter J. Rentfrow. Social Psychological and Personality Science, 1948550616641473, first published on May 9, 2016

This paper is behind a paywall.

Here’s a link to and a citation for the October 2015 paper

Personality predicts musical sophistication by David M. Greenberg, Daniel Müllensiefen, Michael E. Lamb, Peter J. Rentfrow. Journal of Research in Personality, Volume 58, October 2015, Pages 154–158. doi:10.1016/j.jrp.2015.06.002 Note: A Feb. 2016 erratum is also listed.

The paper is behind a paywall and it looks as if you will have to pay for it and for the erratum separately.

Here’s a link to and a citation for the July 2015 paper,

Musical Preferences are Linked to Cognitive Styles by David M. Greenberg, Simon Baron-Cohen, David J. Stillwell, Michal Kosinski, Peter J. Rentfrow. PLOS ONE [Public Library of Science ONE], http://dx.doi.org/10.1371/journal.pone.0131151 Published: July 22, 2015

This paper is open access.

I tried out the research project’s website, The Musical Universe, by filling out the Musical Taste questionnaire. Unfortunately, I did not receive my results. Since the team’s latest research has just been reported, I imagine there are many people trying to do the same thing. It might be worth your while to wait a bit if you want to try this out, or you can fill out one of their other questionnaires. Oh, and you might want to allow at least 20 minutes.