Category Archives: Music

Repurposing music from Broadway hit Hamilton to give a science perspective

Thanks to David Bruggeman’s July 27, 2016 posting for information about a piece from Tim Blais (A Capella Science).

Before getting to the video: The musical “Hamilton” is about Alexander Hamilton, one of the founding fathers of the United States. Blais’ version, featuring a cast of YouTube purveyors of science (I recognized Baba Brinkman), is about Sir William Rowan Hamilton, an extraordinary Irish physicist, astronomer, and mathematician.

I know I’ll need more than one viewing.

A Moebius strip of moving energy (vibrations)

This research extends a theorem which posits that waves will adapt to slowly changing conditions and return to their original vibration, showing that the waves can instead be manipulated into a new state. A July 25, 2016 news item on ScienceDaily makes the announcement,

Yale physicists have created something similar to a Moebius strip of moving energy between two vibrating objects, opening the door to novel forms of control over waves in acoustics, laser optics, and quantum mechanics.

The discovery also demonstrates that a century-old physics theorem offers much greater freedom than had long been believed. …

A July 25, 2016 Yale University news release (also on EurekAlert) by Jim Shelton, which originated the news item, expands on the theme,

Yale’s experiment is deceptively simple in concept. The researchers set up a pair of connected, vibrating springs and studied the acoustic waves that traveled between them as they manipulated the shape of the springs. Vibrations — as well as other types of energy waves — are able to move, or oscillate, at different frequencies. In this instance, the springs vibrate at frequencies that merge, similar to a Moebius strip that folds in on itself.

The precise spot where the vibrations merge is called an “exceptional point.”

“It’s like a guitar string,” said Jack Harris, a Yale associate professor of physics and applied physics, and the study’s principal investigator. “When you pluck it, it may vibrate in the horizontal plane or the vertical plane. As it vibrates, we turn the tuning peg in a way that reliably converts the horizontal motion into vertical motion, regardless of the details of how the peg is turned.”

Unlike a guitar, however, the experiment required an intricate laser system to precisely control the vibrations, and a cryogenic refrigeration chamber in which the vibrations could be isolated from any unwanted disturbance.

The Yale experiment is significant for two reasons, the researchers said. First, it suggests a very dependable way to control wave signals. Second, it demonstrates an important — and surprising — extension to a long-established theorem of physics, the adiabatic theorem.

The adiabatic theorem says that waves will readily adapt to changing conditions if those changes take place slowly. As a result, if the conditions are gradually returned to their initial configuration, any waves in the system should likewise return to their initial state of vibration. In the Yale experiment, this does not happen; in fact, the waves can be manipulated into a new state.
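The surprise can be illustrated with a toy model (my own sketch, not the Yale team's apparatus or code): two coupled modes at the same frequency, one of them lossy, are described by a non-Hermitian 2×2 matrix whose two eigenvalues coalesce at an "exceptional point" when the coupling equals half the loss.

```python
import numpy as np

def modes(coupling, loss):
    # Two coupled modes at the same frequency, one of them lossy:
    # a minimal non-Hermitian 2x2 model (illustrative values only).
    H = np.array([[1.0, coupling],
                  [coupling, 1.0 - 1j * loss]])
    return np.linalg.eigvals(H)

# Away from the exceptional point, the two mode frequencies are distinct.
far = modes(coupling=0.5, loss=0.2)

# At coupling = loss / 2 the discriminant vanishes: the eigenvalues (and
# eigenvectors) coalesce -- this degeneracy is the exceptional point.
ep = modes(coupling=0.1, loss=0.2)
print(abs(ep[0] - ep[1]))  # ~0 at the exceptional point
```

Steering such a system slowly around the exceptional point, rather than straight through it, is what lets the waves end up in a new state instead of returning to their start.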

“This is a very robust and general way to control waves and vibrations that was predicted theoretically in the last decade, but which had never been demonstrated before,” Harris said. “We’ve only scratched the surface here.”

In the same edition of Nature, a team from the Vienna University of Technology also presented research on a system for wave control via exceptional points.

Here’s a link to and a citation for the paper,

Topological energy transfer in an optomechanical system with exceptional points by H. Xu, D. Mason, Luyao Jiang, & J. G. E. Harris. Nature (2016) doi:10.1038/nature18604 Published online 25 July 2016

This paper is behind a paywall.

Curiosity Collider (Vancouver, Canada) presents Neural Constellations: Exploring Connectivity

I think of Curiosity Collider as an informal art/science presenter but I gather the organizers’ ambitions are grander. From the Curiosity Collider’s About Us page,

Curiosity Collider provides an inclusive community [emphasis mine] hub for curious innovators from any discipline. Our non-profit foundation, based in Vancouver, Canada, fosters participatory partnerships between science & technology, art & culture, business communities, and educational foundations to inspire new ways to experience science. The Collider’s growing community supports and promotes the daily relevance of science with our events and projects. Curiosity Collider is a catalyst for collaborations that seed and grow engaging science communication projects.

Be inspired by the curiosity of others. Our Curiosity Collider events cross disciplinary lines to promote creative inspiration. Meet scientists, visual and performing artists, culinary perfectionists, passionate educators, and entrepreneurs who share a curiosity for science.

Help us create curiosity for science. Spark curiosity in others with your own ideas and projects. Get in touch with us and use our curiosity events to showcase how your work creates innovative new ways to experience science.

I wish they hadn’t described themselves as an “inclusive community.” This often means exactly the opposite.

Take, for example, the website. The background is black, the headings are white, and the text is grey. This is a website designed for people under the age of 40. If you want to be inclusive, you make your website legible for everyone.

That said, there’s an upcoming Curiosity Collider event which looks promising (from a July 20, 2016 email notice),

Neural Constellations: Exploring Connectivity

An Evening of Art, Science and Performance under the Dome

“We are made of star stuff,” Carl Sagan once said. From constellations to our nervous system, from stars to our neurons. We’re colliding neuroscience and astronomy with performance art, sound, dance, and animation for one amazing evening under the planetarium dome. Together, let’s explore similar patterns at the macro (astronomy) and micro (neurobiology) scale by taking a tour through both outer and inner space.

This show is curated by Curiosity Collider’s Creative Director Char Hoyt, along with Special Guest Curator Naila Kuhlmann, and developed in collaboration with the MacMillan Space Centre. There will also be an Art-Science silent auction to raise funding for future Curiosity Collider activities.

Participating performers include:

The July 20, 2016 notice also provides information about date, time, location, and cost,

When
7:30pm on Thursday, August 18th 2016. Join us for drinks and snacks when doors open at 6:30pm.

Where
H. R. MacMillan Space Centre (1100 Chestnut Street, Vancouver, BC)

Cost
$20.00 sliding scale. Proceeds will be used to cover the cost of running this event, and to fund future Curiosity Collider events. Curiosity Collider is a registered BC non-profit organization. Purchase tickets on our Eventbrite page.

Head to the Facebook event page: Let us know you are coming and share this event with others! We will also share event updates and performer profiles on the Facebook page.

There is a pretty poster,

[poster image: CuriosityCollider_AugEvent_NeuralConstellations]

[downloaded from http://www.curiositycollider.org/events/]

Enjoy!

A selection of science songs for summer

Canada’s Perimeter Institute for Theoretical Physics (PI) has compiled a list of science songs and it includes a few Canadian surprises. Here’s more from the July 21, 2016 PI notice received via email.

Ah, summer.

School’s out, the outdoors beckon, and with every passing second a 4.5-billion-year-old nuclear fireball fuses 620 million tons of hydrogen so brightly you’ve gotta wear shades.

Who says you have to stop learning science over the summer?

All you need is the right soundtrack to your next road trip, backyard barbeque, or day at the beach.

Did we miss your favourite science song? Tweet us @Perimeter with the hashtag #SciencePlaylist.
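Incidentally, that "620 million tons of hydrogen" per second figure is easy to sanity-check with E = mc²: roughly 0.7% of the fused mass becomes energy, which should land near the Sun's measured luminosity of about 3.8 × 10²⁶ W. A quick back-of-envelope calculation (the 0.7% mass-deficit figure is the standard value for hydrogen-to-helium fusion):

```python
# Back-of-envelope check of the "620 million tons per second" figure:
# H -> He fusion converts about 0.7% of the fused mass to energy.
mass_per_second = 620e6 * 1000     # 620 million metric tons, in kg
mass_fraction = 0.007              # ~0.7% mass deficit in H -> He fusion
c = 3.0e8                          # speed of light, m/s
power = mass_per_second * mass_fraction * c**2
print(f"{power:.2e} W")            # ~3.9e26 W, close to the solar luminosity
```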

You can find the list and accompanying videos on The Ultimate Science Playlist webpage on the PI website. Here are a few samples,

“History of Everything” – Barenaked Ladies (The Big Bang Theory theme)

You probably know this one as the theme song of The Big Bang Theory. But here’s something you might not know. The tune began as an improvised ditty Barenaked Ladies’ singer Ed Robertson performed one night in Los Angeles after reading Simon Singh’s book Big Bang: The Most Important Scientific Discovery of All Time and Why You Need to Know About It. Lo and behold, in the audience that night were Chuck Lorre and Bill Prady, creators of The Big Bang Theory. The rest is history (of everything).

“Bohemian Gravity” – A Capella Science (Tim Blais)

Tim Blais, the one-man choir behind A Capella Science, is a master at conveying complex science in fun musical parodies. “Bohemian Gravity” is his most famous, but be sure to also check out our collaboration with him about gravitational waves, “LIGO: Feel That Space.”

“NaCl” – Kate and Anna McGarrigle

“NaCl” is a romantic tale of the courtship of a chlorine atom and a sodium atom, who marry and become sodium chloride. “Think of the love you eat,” sings Kate McGarrigle, “when you salt your meat.”

This is just a sampling; at this point, there are 15 science songs on the webpage. Surprisingly, rap is not represented. One other note: you’ll notice all of my samples are Canadian. (Sadly, I had other videos as well, but every time I saved a draft I lost at least half of them. It seems the maximum allowed to me is three.)

Here are the others I wanted to include:

“Mandelbrot Set” – Jonathan Coulton

Singer-songwriter Jonathan Coulton (JoCo, to fans) is arguably the patron saint of geek-pop, having penned the uber-catchy credits songs of the Portal games, as well as this loving tribute to a particular set of complex numbers that has a highly convoluted fractal boundary when plotted.

“Higgs Boson Sonification” – Traq 

CERN physicist Piotr Traczyk (a.k.a. Traq) “sonified” data from the experiment that uncovered the Higgs boson, turning the discovery into a high-energy metal riff.

“Why Does the Sun Shine?” – They Might Be Giants

Choosing just one song for this playlist by They Might Be Giants is a tricky task, since They Definitely Are Nerdy. But this one celebrates physics, chemistry, and astronomy while also being absurdly catchy, so it made the list. Honourable mention goes to their entire album for kids, Here Comes Science.

In any event, the PI list is a great introduction to science songs and The Ultimate Science Playlist includes embedded videos for all 15 of the songs selected so far. Happy Summer!

Cornwall (UK) connects with University of Southern California for performance by a quantum computer (D-Wave) and mezzo soprano Juliette Pochin

The upcoming performance featuring a quantum computer built by D-Wave Systems (a Canadian company) and Welsh mezzo soprano Juliette Pochin is the première of “Superposition” by Alexis Kirke. A July 13, 2016 news item on phys.org provides more detail,

What happens when you combine the pure tones of an internationally renowned mezzo soprano and the complex technology of a $15 million quantum supercomputer?

The answer will be exclusively revealed to audiences at the Port Eliot Festival [Cornwall, UK] when Superposition, created by Plymouth University composer Alexis Kirke, receives its world premiere later this summer.

A D-Wave 1000 Qubit Quantum Processor. Credit: D-Wave Systems Inc

A July 13, 2016 Plymouth University press release, which originated the news item, expands on the theme,

Combining the arts and sciences, as Dr Kirke has done with many of his previous works, the 15-minute piece will begin dark and mysterious with celebrated performer Juliette Pochin singing a low-pitched slow theme.

But gradually the quiet sounds of electronic ambience will emerge over or beneath her voice, as the sounds of her singing are picked up by a microphone and sent over the internet to the D-Wave quantum computer at the University of Southern California.

It then reacts with behaviours in the quantum realm that are turned into sounds back in the performance venue, the Round Room at Port Eliot, creating a unique and ground-breaking duet.

And when the singer ends, the quantum processes are left to slowly fade away naturally, making their final sounds as the lights go to black.

Dr Kirke, a member of the Interdisciplinary Centre for Computer Music Research at Plymouth University, said:

“There are only a handful of these computers accessible in the world, and this is the first time one has been used as part of a creative performance. So while it is a great privilege to be able to put this together, it is an incredibly complex area of computing and science and it has taken almost two years to get to this stage. For most people, this will be the first time they have seen a quantum computer in action and I hope it will give them a better understanding of how it works in a creative and innovative way.”

Plymouth University is the official Creative and Cultural Partner of the Port Eliot Festival, taking place in South East Cornwall from July 28 to 31, 2016 [emphasis mine].

And Superposition will be one of a number of showcases of University talent and expertise as part of the first Port Eliot Science Lab. Being staged in the Round Room at Port Eliot, it will give festival goers the chance to explore science, see performances and take part in a range of experiments.

The three-part performance will tell the story of Niobe, one of the more tragic figures in Greek mythology, but in this case a nod to the fact the heart of the quantum computer contains the metal named after her, niobium. It will also feature a monologue from Hamlet, interspersed with terms from quantum computing.

This is the latest of Dr Kirke’s pioneering performance works, with previous productions including an opera based on the financial crisis and a piece using a cutting edge wave-testing facility as an instrument of percussion.

Geordie Rose, CTO and Founder, D-Wave Systems, said:

“D-Wave’s quantum computing technology has been investigated in many areas such as image recognition, machine learning and finance. We are excited to see Dr Kirke, a pioneer in the field of quantum physics and the arts, utilising a D-Wave 2X in his next performance. Quantum computing is positioned to have a tremendous social impact, and Dr Kirke’s work serves not only as a piece of innovative computer arts research, but also as a way of educating the public about these new types of exotic computing machines.”

Professor Daniel Lidar, Director of the USC Center for Quantum Information Science and Technology, said:

“This is an exciting time to be in the field of quantum computing. This is a field that was purely theoretical until the 1990s and now is making huge leaps forward every year. We have been researching the D-Wave machines for four years now, and have recently upgraded to the D-Wave 2X – the world’s most advanced commercially available quantum optimisation processor. We were very happy to welcome Dr Kirke on a short training residence here at the University of Southern California recently; and are excited to be collaborating with him on this performance, which we see as a great opportunity for education and public awareness.”

Since I can’t be there, I’m hoping they will be able to successfully livestream the performance. According to Kirke who very kindly responded to my query, the festival’s remote location can make livecasting a challenge. He did note that a post-performance documentary is planned and there will be footage from the performance.

He has also provided more information about the singer and the technical/computer aspects of the performance (from a July 18, 2016 email),

Juliette Pochin: I’ve worked with her before a couple of years ago. She has an amazing voice and style, is musically adventurous (she is a music producer herself), and brings great grace and charisma to a performance. She can be heard in the Harry Potter and Lord of the Rings soundtracks and has performed at venues such as the Royal Albert Hall, Proms in the Park, and Meatloaf!

Score: The score is in 3 parts of about 5 minutes each. There is a traditional score for parts 1 and 3 that Juliette will sing from. I wrote these manually in traditional music notation. However she can sing in free time and wait for the computer to respond. It is a very dramatic score, almost operatic. The computer’s responses are based on two algorithms: a superposition chord system, and a pitch-loudness entanglement system. The superposition chord system sends a harmony problem to the D-Wave in response to Juliette’s approximate pitch amongst other elements. The D-Wave uses an 8-qubit optimizer to return potential chords. Each potential chord has an energy associated with it. In theory the lowest energy chord is that preferred by the algorithm. However in the performance I will combine the chord solutions to create superposition chords. These are chords which represent, in a very loose way, the superposed solutions which existed in the D-Wave before collapse of the qubits. Technically they are the results of multiple collapses, but metaphorically I can’t think of a more beautiful representation of superposition: chords. These will accompany Juliette, sometimes clashing with her. Sometimes giving way to her.

The second subsystem generates non-pitched noises of different lengths, roughnesses and loudness. These are responses to Juliette, but also a result of a simple D-Wave entanglement. We know the D-Wave can entangle in 8-qubit groups. I send a binary representation of Juliette’s loudness to 4 qubits and one of approximate pitch to another 4, then entangle the two. The chosen entanglement weights are selected for their variety of solutions amongst the qubits, rather than by a particular musical logic. So the non-pitched subsystem is more of a sonification of entanglement than a musical algorithm.
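For the curious, the two ingredients Kirke describes can be caricatured in a few lines of code. Everything below is hypothetical, not his actual system: the helper name, the candidate chords, and their energies are mine. The idea is simply that a loudness or pitch value is quantized to a 4-bit string (one bit per qubit), and that several low-energy chord candidates returned by the optimizer can be layered into a "superposition chord".

```python
def encode_4bit(value, lo, hi):
    """Quantize a value into a 4-bit string, one bit per qubit (hypothetical scheme)."""
    level = max(0, min(15, int(round((value - lo) / (hi - lo) * 15))))
    return [int(b) for b in f"{level:04b}"]

# e.g. a loudness of 0.7 on a 0-1 scale becomes the 4 qubit inputs [1, 0, 1, 0]
loudness_qubits = encode_4bit(0.7, 0.0, 1.0)

# Hypothetical stand-in for the optimizer's returned (chord, energy) candidates.
# The lowest-energy chord is the preferred harmonization, but layering several
# low-energy solutions gives something like Kirke's "superposition chord".
candidates = [([60, 64, 67], -3.2), ([60, 63, 67], -3.0), ([60, 64, 69], -2.1)]
ranked = sorted(candidates, key=lambda c: c[1])
superposition_chord = sorted({note for chord, _ in ranked[:2] for note in chord})
print(superposition_chord)  # → [60, 63, 64, 67]  (MIDI note numbers)
```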

Thank you Dr. Kirke for a fascinating technical description and for a description of Juliette Pochin that makes one long to hear her in performance.

For anyone thinking of attending the performance, or simply curious, you can find out more about the Port Eliot festival here, Juliette Pochin here, and Alexis Kirke here.

For anyone wondering about data sonification, I also have a Feb. 7, 2014 post featuring a data sonification project by Dr. Domenico Vicinanza, which includes a sound clip of his Voyager 1 & 2 spacecraft duet.

Music videos for teaching science and a Baba Brinkman update

I have two news bits concerning science and music.

Music videos and science education

Researchers in the US and New Zealand have published a study on how effective music videos are for teaching science. Hint: there are advantages but don’t expect perfection. From a May 25, 2016 news item on ScienceDaily,

Does “edutainment” such as content-rich music videos have any place in the rapidly changing landscape of science education? A new study indicates that students can indeed learn serious science content from such videos.

The study, titled ‘Leveraging the power of music to improve science education’ and published in the International Journal of Science Education, examined over 1,000 students in a three-part experiment, comparing learners’ understanding and engagement in response to 24 musical and non-musical science videos.

A May 25, 2016 Taylor & Francis (publishers) press release, which originated the news item, quickly gets to the point,

The central findings were that (1) across ages and genders, K-16 students who viewed music videos improved their scores on quizzes about content covered in the videos, and (2) students preferred music videos to non-musical videos covering equivalent content.  Additionally, the results hinted that videos with music might lead to superior long-term retention of the content.

“We tested most of these students outside of their normal classrooms,” commented lead author Greg Crowther, Ph.D., a lecturer at the University of Washington.  “The students were not forced by their teachers to watch these videos, and they didn’t have the spectre of a low course grade hanging over their heads.  Yet they clearly absorbed important information, which highlights the great potential of music to deliver key content in an appealing package.”

The study was inspired by the classroom experiences of Crowther and co-author Tom McFadden [emphasis mine], who teaches science at the Nueva School in California.  “Tom and I, along with many others, write songs for and with our students, and we’ve had a lot of fun doing that,” said Crowther.  “But rather than just assuming that this works, we wanted to see whether we could document learning gains in an objective way.”

The findings of this study have implications for teacher practitioners, policy-makers and researchers who are looking for innovative ways to improve science education.  “Music will always be a supplement to, rather than a substitute for, more traditional forms of teaching,” said Crowther.  “But teachers who want to connect with their students through music now have some additional data on their side.”

The paper is quite interesting (two of the studies were run in the US and one in New Zealand) and I notice that Tom McFadden of the Science Rap Academy is one of the authors (more about him later); here’s a link to and a citation for the paper,

Leveraging the power of music to improve science education by Gregory J. Crowther, Tom McFadden, Jean S. Fleming, & Katie Davis. International Journal of Science Education, Volume 38, Issue 1, 2016, pages 73-95. DOI: 10.1080/09500693.2015.1126001 Published online: 18 Jan 2016

This paper is open access. As I noted earlier, the research is promising but science music videos are not the answer to all science education woes.

One of my more recent pieces featuring Tom McFadden and his Science Rap Academy is this April 21, 2015 posting. The 2016 edition of the academy started in January 2016 according to David Bruggeman’s Jan. 27, 2016 posting on his Pasco Phronesis blog. You can find the Science Rap Academy’s YouTube channel here and the playlist here.

Canadian science rappers and musicians

I promised the latest about Baba Brinkman and here it is (from a May 14, 2016 notice received via email),

Not many people know this, but Dylan Thomas [legendary Welsh poet] was one of my early influences as a poet and one of the reasons I was inspired to pursue versification as a career. Well now Literature Wales has commissioned me to write and record a new rap/poem in celebration of Dylan Day 2016 (today [May 14, 2016]), which I dutifully did. You can watch the video here to check out what a hip-hop flow and a Thomas poem have in common.

In other news, I’ll be performing a couple of one-off shows over the next few weeks. Rap Guide to Religion is on at NECSS in New York on May 15 (tomorrow) [Note: Link removed as the event date has now been passed] and Rap Guide to Evolution is at the Reason Rally in DC June 2nd [2016]. I’m also continuing with the off-Broadway run of Rap Guide to Climate Chaos, recording the climate chaos album and looking to my next round of writing and touring, so if you have ideas about venues I could play please send me a note.

You can find out more about Baba Brinkman (a Canadian rapper who has written and performed many science raps and lives in New York) here.

There’s another Canadian who produces musical science videos, Tim Blais (physicist and Montréaler) who was most recently featured here in a Feb. 12, 2016 posting. You can find a selection of Blais’ videos on his A Capella Science channel on YouTube.

Interconnected performance analysis music hub shared by McGill University and Université de Montréal announced* June 2, 2016

The press releases promise the Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT) will shape the future of music. The CIRMMT June 2, 2016 (Future of Music) press release (received via email) describes the funding support,

A significant investment of public and private support that will redefine the future of music research in Canada by transforming the way musicians compose, listen and perform music.

The Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT), the Schulich School of Music of McGill University and the Faculty of Music of l’Université de Montréal are creating a unique interconnected research hub that will quite literally link two exceptional spaces at two of Canada’s most renowned music schools.

Imagine a new space and community where musicians, scientists and engineers join forces to gain a better understanding of the influence that music has on individuals as well as their physical, psychological and even neurological conditions; experience the acoustics of an 18th century Viennese concert hall created with the touch of a fingertip; or attend an orchestral performance in one concert hall while hearing and seeing musicians performing from a completely different venue across town… All this and more will soon become possible here in Montreal!

The combination of public and private gifts will broaden our musical horizons exponentially thanks to significant investment in music research in Canada: over $14.5 million in grants from the Canada Foundation for Innovation (CFI), the Government of Quebec and the Fonds de Recherche du Québec (FRQ), and a substantial additional gift of $2.5 million from private philanthropy.

“We are grateful for this exceptional investment in music research from both the federal and provincial governments and from our generous donors,” says McGill Principal Suzanne Fortier. “This will further the collaboration between these two outstanding music schools and support the training of the next generation of music researchers and artists. For anyone who loves music, this is very exciting news.”

There’s not much technical detail in this one but here it is,

Digital channels coupling McGill University’s Music Multimedia Room (MMR – a large, sound-isolated performance lab) and l’Université de Montréal’s Salle Claude Champagne ([SCC -] a superb concert hall) will transform these two exceptional spaces into the world’s leading research facility for the scientific study of live performance, movement of recorded sound in space, and distributed performance (where musicians in different locations perform together).

“The interaction between scientific/technological research and artistic practice is one of the most fruitful avenues for future developments in both fields. This remarkable investment in music research is a wonderful recognition of the important contributions of the arts to Canadian society,” says Sean Ferguson, Dean of the Schulich School of Music.

The other CIRMMT June 2, 2016 (Collaborative hub) press release (received via email) elaborates somewhat on the technology,

The MMR (McGill University’s Music Multimedia Room) will undergo complete renovations which include the addition of high quality variable acoustical treatment and a state-of-the-art rigging system. An active enhancement and sound spatialization system, together with stereoscopic projectors and displays, will provide virtual acoustic and immersive environments. At the SCC (l’Université de Montréal’s Salle Claude Champagne), the creation of a laboratory, a control room and a customizable rigging system will enable the installation and utilization of new research equipment in this acoustically-rich environment. These improvements will drastically augment the research possibilities in the hall, making it a unique hub in Canada for researchers to validate their experiments in a real concert hall.

“This infrastructure will provide exceptional spaces for performance analysis of multiple performers and audience members simultaneously, with equipment such as markerless motion-capture equipment and eye trackers. It will also connect both spaces for experimentations on distributed performances and will make possible new kinds of multimedia artworks.”

The research and benefits

The research program includes looking at audio recording technologies, audio and video in immersive environments, and ultra-videoconferencing, leading to the development of new technologies for audio recording, film, television, distance education, and multi-media artworks; as well as a focus on cognition and perception in musical performance by large ensembles and on the rhythmical synchronization and sound blending of performers.

Social benefits include distance learning, videoconferencing, and improvements to the quality of both recorded music and live performance. Health benefits include improved hearing aids, noise reduction in airplanes and public spaces, and science-based music pedagogies and therapy. Economic benefits include innovations in sound recording, film and video games, and the training of highly qualified personnel across disciplines.

Amongst other activities they will be exploring data sonification as it relates to performance.

Hopefully, I’ll have more after the livestreamed press conference being held this afternoon, June 2, 2016 (2:30 pm EST) at the CIRMMT.

*’opens’ changed to ‘announced’ on June 2, 2016 at 1335 hours PST.

ETA June 8, 2016: I did attend the press conference via livestream. There was some lovely violin played and the piece proved to be a demonstration of the work they’re hoping to expand on now that there will be a CIRMMT (pronounced kermit). There was a lot of excitement and I think that’s largely due to the number of years it’s taken to get to this point. One of the speakers reminisced about being a music student at McGill in the 1970s when they first started talking about getting a new music building.

They did get their building but were unable to complete it until these 2016 funds were awarded. Honestly, all the speakers seemed a bit giddy with delight. I wish them all congratulations!

The song is you: a McGill University, University of Cambridge, and Stanford University research collaboration

These days I’m thinking about sound, music, spoken word, and more as I prepare for a new art/science piece. It’s very early stages so I don’t have much more to say about it but along those lines of thought, there’s a recent piece of research on music and personality that caught my eye. From a May 11, 2016 news item on phys.org,

A team of scientists from McGill University, the University of Cambridge, and Stanford Graduate School of Business developed a new method of coding and categorizing music. They found that people’s preference for these musical categories is driven by personality. The researchers say the findings have important implications for industry and health professionals.

A May 10, 2016 McGill University news release, which originated the news item, provides some fascinating suggestions for new categories for music,

There are a multitude of adjectives that people use to describe music, but in a recent study to be published this week in the journal Social Psychological and Personality Science, researchers show that musical attributes can be grouped into three categories. Rather than relying on the genre or style of a song, the team of scientists led by music psychologist David Greenberg with the help of Daniel J. Levitin from McGill University mapped the musical attributes of song excerpts from 26 different genres and subgenres, and then applied a statistical procedure to group them into clusters. The study revealed three clusters, which they labeled Arousal, Valence, and Depth. Arousal describes intensity and energy in music; Valence describes the spectrum of emotions in music (from sad to happy); and Depth describes intellect and sophistication in music. They also found that characteristics describing music from a single genre (both rock and jazz separately) could be grouped in these same three categories.
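The grouping step can be illustrated with synthetic data (entirely invented here; the study itself used listener ratings of real song excerpts and a formal statistical procedure): adjectives driven by the same latent quality correlate strongly across excerpts, and those correlations are what a clustering procedure recovers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ratings: 200 excerpts scored on six adjectives, each adjective
# driven by one of three latent qualities (arousal, valence, depth) plus noise.
latent = rng.normal(size=(200, 3))                 # arousal, valence, depth
loadings = np.array([[1, 0, 0],   # "intense"        -> arousal
                     [1, 0, 0],   # "energetic"      -> arousal
                     [0, 1, 0],   # "happy"          -> valence
                     [0, -1, 0],  # "sad"            -> valence (opposite pole)
                     [0, 0, 1],   # "sophisticated"  -> depth
                     [0, 0, 1]])  # "thoughtful"     -> depth
ratings = latent @ loadings.T + 0.3 * rng.normal(size=(200, 6))

# Adjectives that move together across excerpts end up in one cluster.
corr = np.corrcoef(ratings.T)
print(round(corr[0, 1], 2))  # intense vs energetic: strongly positive
print(round(corr[2, 3], 2))  # happy vs sad: strongly negative (same cluster)
```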

The findings suggest that this may be a useful alternative to grouping music into genres, which is often based on social connotations rather than the attributes of the actual music. It also suggests that those in academia and industry (e.g. Spotify and Pandora) that are already coding music on a multitude of attributes might save time and money by coding music around these three composite categories instead.

The researchers also conducted a second study of nearly 10,000 Facebook users who indicated their preferences for 50 musical excerpts from different genres. The researchers were then able to map preferences for these three attribute categories onto five personality traits and 30 detailed personality facets. For example, they found people who scored high on Openness to Experience preferred Depth in music, while Extraverted excitement-seekers preferred high Arousal in music. And those who scored high on Neuroticism preferred negative emotions in music, while those who were self-assured preferred positive emotions in music. As the title from the old Kern and Hammerstein song suggests, “The Song is You”. That is, the musical attributes that you like most reflect your personality. It also provides scientific support for what Joni Mitchell said in a 2013 interview with the CBC: “The trick is if you listen to that music and you see me, you’re not getting anything out of it. If you listen to that music and you see yourself, it will probably make you cry and you’ll learn something about yourself and now you’re getting something out of it.”

The researchers hope that this information will be helpful not only to music therapists but also to health care professionals and even hospitals. For example, recent evidence has shown that music listening can increase recovery after surgery. The researchers argue that information about music preferences and personality could inform a music listening protocol after surgery to boost recovery rates.

The article is another in a series of studies that Greenberg and his team have published on music and personality. This past July [2015], they published an article in PLOS ONE showing that people’s musical preferences are linked to thinking styles. And in October [2015], they published an article in the Journal of Research in Personality, identifying the personality trait Openness to Experience as a key predictor of musical ability, even in non-musicians. This series of studies tells us that there are close links between our personality and musical behavior that may be beyond our control and awareness.

Readers can find out how they score on the music and personality quizzes at www.musicaluniverse.org.

David M. Greenberg, lead author from Cambridge University and City University of New York said: “Genre labels are informative but we’re trying to transcend them and move in a direction that points to the detailed characteristics in music that are driving people’s preferences and emotional reactions.”

Greenberg added: “As a musician, I see how vast the powers of music really are, and unfortunately, many of us do not use music to its full potential. Our ultimate goal is to create science that will help enhance the experience of listening to music. We want to use this information about personality and preferences to increase the day-to-day enjoyment and peak experiences people have with music.”

William Hoffman in a May 11, 2016 article for Inverse describes the work in connection with recently released new music from Radiohead and an upcoming release from Chance the Rapper (along with a brief mention of Drake), Note: Links have been removed,

Music critics regularly scour Thesaurus.com for the best adjectives to throw into their perfectly descriptive melodious disquisitions on the latest works from Drake, Radiohead, or whomever. And listeners of all walks have, since the beginning of music itself, been guilty of lazily pigeonholing artists into numerous socially constructed genres. But all of that can be (and should be) thrown out the window now, because new research suggests that, to perfectly match music to a listener’s personality, all you need are these three scientific measurables [arousal, valence, depth].

This suggests that a slow, introspective gospel song from Chance The Rapper’s upcoming album could have the same depth as a track from Radiohead’s A Moon Shaped Pool. So a system of categorization based on Greenberg’s research would, surprisingly but rightfully, place the rap and rock works in the same bin.

Here’s a link to and a citation for the latest paper,

The Song Is You: Preferences for Musical Attribute Dimensions Reflect Personality by David M. Greenberg, Michal Kosinski, David J. Stillwell, Brian L. Monteiro, Daniel J. Levitin, and Peter J. Rentfrow. Social Psychological and Personality Science, 1948550616641473, first published on May 9, 2016

This paper is behind a paywall.

Here’s a link to and a citation for the October 2015 paper,

Personality predicts musical sophistication by David M. Greenberg, Daniel Müllensiefen, Michael E. Lamb, Peter J. Rentfrow. Journal of Research in Personality Volume 58, October 2015, Pages 154–158 doi:10.1016/j.jrp.2015.06.002 Note: A Feb. 2016 erratum is also listed.

The paper is behind a paywall and it looks as if you will have to pay for it and for the erratum separately.

Here’s a link to and a citation for the July 2015 paper,

Musical Preferences are Linked to Cognitive Styles by David M. Greenberg, Simon Baron-Cohen, David J. Stillwell, Michal Kosinski, Peter J. Rentfrow. PLOS ONE [Public Library of Science ONE] http://dx.doi.org/10.1371/journal.pone.0131151 Published: July 22, 2015

This paper is open access.

I tried out the research project’s website, The Musical Universe, by filling out the Musical Taste questionnaire. Unfortunately, I did not receive my results. Since the team’s latest research has just been reported, I imagine there are many people trying to do the same thing. It might be worth your while to wait a bit if you want to try this out, or you can fill out one of their other questionnaires. Oh, and you might want to allow at least 20 minutes.

Will AI ‘artists’ be able to fool a panel judging entries to the Neukom Institute Prizes in Computational Arts?

There’s an intriguing competition taking place at Dartmouth College (US) according to a May 2, 2016 piece on phys.org (Note: Links have been removed),

Algorithms help us to choose which films to watch, which music to stream and which literature to read. But what if algorithms went beyond their jobs as mediators of human culture and started to create culture themselves?

In 1950 English mathematician and computer scientist Alan Turing published a paper, “Computing Machinery and Intelligence,” which starts off by proposing a thought experiment that he called the “Imitation Game.” In one room is a human “interrogator” and in another room a man and a woman. The goal of the game is for the interrogator to figure out which of the unknown hidden interlocutors is the man and which is the woman. This is to be accomplished by asking a sequence of questions with responses communicated either by a third party or typed out and sent back. “Winning” the Imitation Game means getting the identification right on the first shot.

Turing then modifies the game by replacing one interlocutor with a computer, and asks whether a computer will be able to converse sufficiently well that the interrogator cannot tell the difference between it and the human. This version of the Imitation Game has come to be known as the “Turing Test.”

On May 18 [2016] at Dartmouth, we will explore a different area of intelligence, taking up the question of distinguishing machine-generated art. Specifically, in our “Turing Tests in the Creative Arts,” we ask if machines are capable of generating sonnets, short stories, or dance music that is indistinguishable from human-generated works, though perhaps not yet so advanced as Shakespeare, O. Henry or Daft Punk.

The piece on phys.org is a crossposting of a May 2, 2016 article by Michael Casey and Daniel N. Rockmore for The Conversation. The article goes on to describe the competitions,

The dance music competition (“Algorhythms”) requires participants to construct an enjoyable (fun, cool, rad, choose your favorite modifier for having an excellent time on the dance floor) dance set from a predefined library of dance music. In this case the initial random “seed” is a single track from the database. The software package should be able to use this as inspiration to create a 15-minute set, mixing and modifying choices from the library, which includes standard annotations of more than 20 features, such as genre, tempo (bpm), beat locations, chroma (pitch) and brightness (timbre).
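To make the setup concrete, here is a minimal sketch of what a (very naive) entry might do. Everything in it is an assumption: the track titles, tempos, and durations are invented, the library keeps only two of the 20+ annotated features mentioned above, and the greedy tempo-matching rule is just one simple strategy a contestant might try, not anything the competition prescribes.

```python
# Hypothetical track library -- the real one carried 20+ annotations
# (genre, bpm, beat locations, chroma, brightness); this sketch keeps
# only a made-up title, tempo (bpm), and duration in seconds.
library = [
    {"title": "seed", "bpm": 124, "duration": 210},
    {"title": "a", "bpm": 126, "duration": 230},
    {"title": "b", "bpm": 120, "duration": 200},
    {"title": "c", "bpm": 128, "duration": 240},
    {"title": "d", "bpm": 132, "duration": 220},
    {"title": "e", "bpm": 118, "duration": 250},
]

def build_set(library, seed_title, target_seconds=15 * 60):
    """Greedily grow a dance set from the seed track: at each step,
    append the unused track closest in tempo to the current one,
    stopping once the set reaches the target length."""
    remaining = {t["title"]: t for t in library}
    current = remaining.pop(seed_title)
    playlist, total = [current], current["duration"]
    while remaining and total < target_seconds:
        nxt = min(remaining.values(),
                  key=lambda t: abs(t["bpm"] - current["bpm"]))
        del remaining[nxt["title"]]
        playlist.append(nxt)
        total += nxt["duration"]
        current = nxt
    return [t["title"] for t in playlist], total

print(build_set(library, "seed"))
```

A competitive entry would of course also mix and modify tracks using beat locations, chroma, and brightness rather than tempo alone; this only shows the shape of the problem.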

In what might seem a stiffer challenge, the sonnet and short story competitions (“PoeTix” and “DigiLit,” respectively) require participants to submit self-contained software packages that upon the “seed” or input of a (common) noun phrase (such as “dog” or “cheese grater”) are able to generate the desired literary output. Moreover, the code should ideally be able to generate an infinite number of different works from a single given prompt.

To perform the test, we will screen the computer-made entries to eliminate obvious machine-made creations. We’ll mix human-generated work with the rest, and ask a panel of judges to say whether they think each entry is human- or machine-generated. For the dance music competition, scoring will be left to a group of students, dancing to both human- and machine-generated music sets. A “winning” entry will be one that is statistically indistinguishable from the human-generated work.
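The authors don't say which statistic they will use for "statistically indistinguishable," but one standard way to frame it is a binomial test against chance: if judges guessing at random would label an entry correctly half the time, how surprising is the number of correct calls actually observed? The sketch below is an assumed illustration, not the competition's scoring procedure; the counts in the example are invented.

```python
from math import comb

def p_value_indistinguishable(correct, n):
    """One-sided binomial test: the probability of at least `correct`
    right calls out of n judgements if judges were guessing (p = 0.5).
    A large p-value means the entry passed -- the judges did no better
    than chance at spotting the machine-made work."""
    return sum(comb(n, k) for k in range(correct, n + 1)) / 2 ** n

# e.g. suppose 14 of 20 judges correctly flagged an entry as machine-made:
print(round(p_value_indistinguishable(14, 20), 3))  # -> 0.058
```

On those invented numbers the entry sits right at the edge of the conventional 0.05 cutoff; an entry spotted by only 10 of 20 judges would be comfortably indistinguishable from human work.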

The competitions are open to any and all comers [competition is now closed; the deadline was April 15, 2016]. To date, entrants include academics as well as nonacademics. As best we can tell, no companies have officially thrown their hats into the ring. This is somewhat of a surprise to us, as in the literary realm companies are already springing up around machine generation of more formulaic kinds of “literature,” such as earnings reports and sports summaries, and there is of course a good deal of AI automation around streaming music playlists, most famously Pandora.

The authors discuss issues with judging the entries,

Evaluation of the entries will not be entirely straightforward. Even in the initial Imitation Game, the question was whether conversing with men and women over time would reveal their gender differences. (It’s striking that this question was posed by a closeted gay man [Alan Turing].) The Turing Test, similarly, asks whether the machine’s conversation reveals its lack of humanity not in any single interaction but in many over time.

It’s also worth considering the context of the test/game. Is the probability of winning the Imitation Game independent of time, culture and social class? Arguably, as we in the West approach a time of more fluid definitions of gender, that original Imitation Game would be more difficult to win. Similarly, what of the Turing Test? In the 21st century, our communications are increasingly with machines (whether we like it or not). Texting and messaging have dramatically changed the form and expectations of our communications. For example, abbreviations, misspellings and dropped words are now almost the norm. The same considerations apply to art forms as well.

The authors also pose the question: Who is the artist?

Thinking about art forms leads naturally to another question: who is the artist? Is the person who writes the computer code that creates sonnets a poet? Is the programmer of an algorithm to generate short stories a writer? Is the coder of a music-mixing machine a DJ?

Where is the divide between the artist and the computational assistant and how does the drawing of this line affect the classification of the output? The sonnet form was constructed as a high-level algorithm for creative work – though one that’s executed by humans. Today, when the Microsoft Office Assistant “corrects” your grammar or “questions” your word choice and you adapt to it (either happily or out of sheer laziness), is the creative work still “yours” or is it now a human-machine collaborative work?

That’s an interesting question and one I asked in the context of two ‘mashup’ art exhibitions in Vancouver (Canada) in my March 8, 2016 posting.

Getting back to Dartmouth College and its Neukom Institute Prizes in Computational Arts, here’s a list of the competition judges from the competition homepage,

David Cope (Composer, Algorithmic Music Pioneer, UCSC Music Professor)
David Krakauer (President, the Santa Fe Institute)
Louis Menand (Pulitzer Prize winning author and Professor at Harvard University)
Ray Monk (Author, Biographer, Professor of Philosophy)
Lynn Neary (NPR: Correspondent, Arts Desk and Guest Host)
Joe Palca (NPR: Correspondent, Science Desk)
Robert Siegel (NPR: Senior Host, All Things Considered)

The announcements will be made Wednesday, May 18, 2016. I can hardly wait!

Addendum

Martin Robbins has written a rather amusing May 6, 2016 post for the Guardian science blogs on AI and art critics where he also notes that the question: What is art? is unanswerable (Note: Links have been removed),

Jonathan Jones is unhappy about artificial intelligence. It might be hard to tell from a casual glance at the art critic’s recent column, “The digital Rembrandt: a new way to mock art, made by fools,” but if you look carefully the subtle clues are there. His use of the adjectives “horrible, tasteless, insensitive and soulless” in a single sentence, for example.

The source of Jones’s ire is a new piece of software that puts… I’m so sorry… the ‘art’ into ‘artificial intelligence’. By analyzing a subset of Rembrandt paintings that featured ‘bearded white men in their 40s looking to the right’, its algorithms were able to extract the key features that defined the Dutchman’s style. …

Of course an artificial intelligence is the worst possible enemy of a critic, because it has no ego and literally does not give a crap what you think. An arts critic trying to deal with an AI is like an old school mechanic trying to replace the battery in an iPhone – lost, possessing all the wrong tools and ultimately irrelevant. I’m not surprised Jones is angry. If I were in his shoes, a computer painting a Rembrandt would bring me out in hives.

Can a computer really produce art? We can’t answer that without dealing with another question: what exactly is art? …

I wonder what either Robbins or Jones will make of the Dartmouth competition?

NASA calling for submissions (poetry, video, art, music, etc.) for space travel

The US National Aeronautics and Space Administration (NASA) has made an open call for art works that could be part of the Origins, Spectral Interpretation, Resource Identification, Security-Regolith Explorer (OSIRIS-REx) spacecraft mission bound for Bennu (an asteroid). From a Feb. 23, 2016 NASA news release on EurekAlert,

OSIRIS-REx is scheduled to launch in September and travel to the asteroid Bennu. The #WeTheExplorers campaign invites the public to take part in this mission by expressing, through art, how the mission’s spirit of exploration is reflected in their own lives. Submitted works of art will be saved on a chip on the spacecraft. The spacecraft already carries a chip with more than 442,000 names submitted through the 2014 “Messages to Bennu” campaign.

“The development of the spacecraft and instruments has been a hugely creative process, where ultimately the canvas is the machined metal and composites preparing for launch in September,” said Jason Dworkin, OSIRIS-REx project scientist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. “It is fitting that this endeavor can inspire the public to express their creativity to be carried by OSIRIS-REx into space.”

A submission may take the form of a sketch, photograph, graphic, poem, song, short video or other creative or artistic expression that reflects what it means to be an explorer. Submissions will be accepted via Twitter and Instagram until March 20, 2016. For details on how to include your submission on the mission to Bennu, go to:

http://www.asteroidmission.org/WeTheExplorers

“Space exploration is an inherently creative activity,” said Dante Lauretta, principal investigator for OSIRIS-REx at the University of Arizona, Tucson. “We are inviting the world to join us on this great adventure by placing their art work on the OSIRIS-REx spacecraft, where it will stay in space for millennia.”

The spacecraft will voyage to the near-Earth asteroid Bennu to collect a sample of at least 60 grams (2.1 ounces) and return it to Earth for study. Scientists expect Bennu may hold clues to the origin of the solar system and the source of the water and organic molecules that may have made their way to Earth.

Goddard provides overall mission management, systems engineering and safety and mission assurance for OSIRIS-REx. The University of Arizona, Tucson leads the science team and observation planning and processing. Lockheed Martin Space Systems in Denver is building the spacecraft. OSIRIS-REx is the third mission in NASA’s New Frontiers Program. NASA’s Marshall Space Flight Center in Huntsville, Alabama, manages New Frontiers for the agency’s Science Mission Directorate in Washington.

I wonder why the Egyptian mythology, as in Osiris and Bennu, was chosen. For those who need a refresher on the topic, here’s more from the Osiris entry on Wikipedia (Note: Links have been removed),

Osiris (/oʊˈsaɪərᵻs/, alternatively Ausir, Asiri or Ausar, among other spellings), was an Egyptian god, usually identified as the god of the afterlife, the underworld, and the dead, but more appropriately as the god of transition, resurrection, and regeneration.

Then there’s this from the Bennu entry on Wikipedia (Note: Links have been removed),

The Bennu is an ancient Egyptian deity linked with the sun, creation, and rebirth. It may have been the inspiration for the phoenix in Greek mythology.

You can find out more about Bennu, the asteroid, on its webpage, The Long Strange Trip of Bennu, on the NASA website (which also features a video animation), Note: A link has been removed,

… Born from the rubble of a violent collision, hurled through space for millions of years and dismembered by the gravity of planets, asteroid Bennu had a tough life in a rough neighborhood: the early solar system. …

“We are going to Bennu because we want to know what it has witnessed over the course of its evolution,” said Edward Beshore of the University of Arizona, Deputy Principal Investigator for NASA’s asteroid-sample-return mission OSIRIS-REx (Origins, Spectral Interpretation, Resource Identification, Security – Regolith Explorer). The mission will be launched toward Bennu in late 2016, arrive at the asteroid in 2018, and return a sample of Bennu’s surface to Earth in 2023.

“Bennu’s experiences will tell us more about where our solar system came from and how it evolved. Like the detectives in a crime show episode, we’ll examine bits of evidence from Bennu to understand more completely the story of the solar system, which is ultimately the story of our origin.”

As for the spacecraft, you can find out more about OSIRIS-REx here.

Getting back to the artwork, Sarah Cascone has written a Feb. 22, 2016 posting for artnet news, which features the call for submissions and some work which has already been submitted (Note: Links have been removed),

The near-Earth asteroid Bennu will become the first extra-terrestrial art gallery, with the space agency inviting the public to contribute works of art that are inspired by the spirit of exploration.

The project will follow other important moments in space art history, which include work by Invader traveling aboard the International Space Station, conceptual artwork on the UKube-1 satellite, and even a bonsai tree launched into space.

Here’s a selection of the artworks being embedded in Cascone’s posting,

Daughter’s is spacebound! Fitting tribute to a pioneering, star-loving musician @OSIRISREx

For more inspiration, check out Cascone’s Feb. 22, 2016 posting.

Good luck!