Category Archives: Music

A selection of science songs for summer

Canada’s Perimeter Institute for Theoretical Physics (PI) has compiled a list of science songs and it includes a few Canadian surprises. Here’s more from the July 21, 2016 PI notice received via email.

Ah, summer.

School’s out, the outdoors beckon, and with every passing second a 4.5-billion-year-old nuclear fireball fuses 620 million tons of hydrogen so brightly you’ve gotta wear shades.

Who says you have to stop learning science over the summer?

All you need is the right soundtrack to your next road trip, backyard barbeque, or day at the beach.

Did we miss your favourite science song? Tweet us @Perimeter with the hashtag #SciencePlaylist.

You can find the list and accompanying videos on The Ultimate Science Playlist webpage on the PI website. Here are a few samples,

“History of Everything” – Barenaked Ladies (The Big Bang Theory theme)

You probably know this one as the theme song of The Big Bang Theory. But here’s something you might not know. The tune began as an improvised ditty Barenaked Ladies’ singer Ed Robertson performed one night in Los Angeles after reading Simon Singh’s book Big Bang: The Most Important Scientific Discovery of All Time and Why You Need to Know About It. Lo and behold, in the audience that night were Chuck Lorre and Bill Prady, creators of The Big Bang Theory. The rest is history (of everything).

“Bohemian Gravity” – A Capella Science (Tim Blais)

Tim Blais, the one-man choir behind A Capella Science, is a master at conveying complex science in fun musical parodies. “Bohemian Gravity” is his most famous, but be sure to also check out our collaboration with him about gravitational waves, “LIGO: Feel That Space.”

“NaCl” – Kate and Anna McGarrigle

“NaCl” is a romantic tale of the courtship of a chlorine atom and a sodium atom, who marry and become sodium chloride. “Think of the love you eat,” sings Kate McGarrigle, “when you salt your meat.”

This is just a sampling. At this point, there are 15 science songs on the webpage. Surprisingly, rap is not represented. One other note: you'll notice all of my samples are Canadian. (Sadly, I had other videos as well, but every time I saved a draft I lost at least half of them. It seems the maximum allowed to me is three.)

Here are the others I wanted to include:

“Mandelbrot Set” – Jonathan Coulton

Singer-songwriter Jonathan Coulton (JoCo, to fans) is arguably the patron saint of geek-pop, having penned the uber-catchy credits songs of the Portal games, as well as this loving tribute to a particular set of complex numbers that has a highly convoluted fractal boundary when plotted.

“Higgs Boson Sonification” – Traq 

CERN physicist Piotr Traczyk (a.k.a. Traq) “sonified” data from the experiment that uncovered the Higgs boson, turning the discovery into a high-energy metal riff.

“Why Does the Sun Shine?” – They Might Be Giants

Choosing just one song for this playlist by They Might Be Giants is a tricky task, since They Definitely Are Nerdy. But this one celebrates physics, chemistry, and astronomy while also being absurdly catchy, so it made the list. Honourable mention goes to their entire album for kids, Here Comes Science.

In any event, the PI list is a great introduction to science songs and The Ultimate Science Playlist includes embedded videos for all 15 of the songs selected so far. Happy Summer!

Cornwall (UK) connects with University of Southern California for performance by a quantum computer (D-Wave) and mezzo soprano Juliette Pochin

The upcoming performance featuring a quantum computer built by D-Wave Systems (a Canadian company) and Welsh mezzo soprano Juliette Pochin will be the première of “Superposition” by Alexis Kirke. A July 13, 2016 news item on phys.org provides more detail,

What happens when you combine the pure tones of an internationally renowned mezzo soprano and the complex technology of a $15-million quantum supercomputer?

The answer will be exclusively revealed to audiences at the Port Eliot Festival [Cornwall, UK] when Superposition, created by Plymouth University composer Alexis Kirke, receives its world premiere later this summer.

A D-Wave 1000 Qubit Quantum Processor. Credit: D-Wave Systems Inc

A July 13, 2016 Plymouth University press release, which originated the news item, expands on the theme,

Combining the arts and sciences, as Dr Kirke has done with many of his previous works, the 15-minute piece will begin dark and mysterious with celebrated performer Juliette Pochin singing a low-pitched slow theme.

But gradually the quiet sounds of electronic ambience will emerge over or beneath her voice, as the sounds of her singing are picked up by a microphone and sent over the internet to the D-Wave quantum computer at the University of Southern California.

It then reacts with behaviours in the quantum realm that are turned into sounds back in the performance venue, the Round Room at Port Eliot, creating a unique and ground-breaking duet.

And when the singer ends, the quantum processes are left to slowly fade away naturally, making their final sounds as the lights go to black.

Dr Kirke, a member of the Interdisciplinary Centre for Computer Music Research at Plymouth University, said:

“There are only a handful of these computers accessible in the world, and this is the first time one has been used as part of a creative performance. So while it is a great privilege to be able to put this together, it is an incredibly complex area of computing and science and it has taken almost two years to get to this stage. For most people, this will be the first time they have seen a quantum computer in action and I hope it will give them a better understanding of how it works in a creative and innovative way.”

Plymouth University is the official Creative and Cultural Partner of the Port Eliot Festival, taking place in South East Cornwall from July 28 to 31, 2016 [emphasis mine].

And Superposition will be one of a number of showcases of University talent and expertise as part of the first Port Eliot Science Lab. Being staged in the Round Room at Port Eliot, it will give festival goers the chance to explore science, see performances and take part in a range of experiments.

The three-part performance will tell the story of Niobe, one of the more tragic figures in Greek mythology, but in this case a nod to the fact the heart of the quantum computer contains the metal named after her, niobium. It will also feature a monologue from Hamlet, interspersed with terms from quantum computing.

This is the latest of Dr Kirke’s pioneering performance works, with previous productions including an opera based on the financial crisis and a piece using a cutting edge wave-testing facility as an instrument of percussion.

Geordie Rose, CTO and Founder, D-Wave Systems, said:

“D-Wave’s quantum computing technology has been investigated in many areas such as image recognition, machine learning and finance. We are excited to see Dr Kirke, a pioneer in the field of quantum physics and the arts, utilising a D-Wave 2X in his next performance. Quantum computing is positioned to have a tremendous social impact, and Dr Kirke’s work serves not only as a piece of innovative computer arts research, but also as a way of educating the public about these new types of exotic computing machines.”

Professor Daniel Lidar, Director of the USC Center for Quantum Information Science and Technology, said:

“This is an exciting time to be in the field of quantum computing. This is a field that was purely theoretical until the 1990s and now is making huge leaps forward every year. We have been researching the D-Wave machines for four years now, and have recently upgraded to the D-Wave 2X – the world’s most advanced commercially available quantum optimisation processor. We were very happy to welcome Dr Kirke on a short training residence here at the University of Southern California recently; and are excited to be collaborating with him on this performance, which we see as a great opportunity for education and public awareness.”

Since I can’t be there, I’m hoping they will be able to successfully livestream the performance. According to Kirke who very kindly responded to my query, the festival’s remote location can make livecasting a challenge. He did note that a post-performance documentary is planned and there will be footage from the performance.

He has also provided more information about the singer and the technical/computer aspects of the performance (from a July 18, 2016 email),

Juliette Pochin: I’ve worked with her before a couple of years ago. She has an amazing voice and style, is musically adventurous (she is a music producer herself), and brings great grace and charisma to a performance. She can be heard in the Harry Potter and Lord of the Rings soundtracks and has performed at venues such as the Royal Albert Hall, Proms in the Park, and Meatloaf!

Score: The score is in 3 parts of about 5 minutes each. There is a traditional score for parts 1 and 3 that Juliette will sing from. I wrote these manually in traditional music notation. However she can sing in free time and wait for the computer to respond. It is a very dramatic score, almost operatic. The computer’s responses are based on two algorithms: a superposition chord system, and a pitch-loudness entanglement system. The superposition chord system sends a harmony problem to the D-Wave in response to Juliette’s approximate pitch amongst other elements. The D-Wave uses an 8-qubit optimizer to return potential chords. Each potential chord has an energy associated with it. In theory the lowest energy chord is that preferred by the algorithm. However in the performance I will combine the chord solutions to create superposition chords. These are chords which represent, in a very loose way, the superposed solutions which existed in the D-Wave before collapse of the qubits. Technically they are the results of multiple collapses, but metaphorically I can’t think of a more beautiful representation of superposition: chords. These will accompany Juliette, sometimes clashing with her. Sometimes giving way to her.

The second subsystem generates non-pitched noises of different lengths, roughnesses and loudness. These are responses to Juliette, but also a result of a simple D-Wave entanglement. We know the D-Wave can entangle in 8-qubit groups. I send a binary representation of Juliette’s loudness to 4 qubits and one of approximate pitch to another 4, then entangle the two. The chosen entanglement weights are selected for their variety of solutions amongst the qubits, rather than by a particular musical logic. So the non-pitched subsystem is more of a sonification of entanglement than a musical algorithm.
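Dr Kirke's description of the superposition chord system lends itself to a small code sketch. To be clear, this is purely an illustrative reconstruction, not his actual software: the chord candidates, energies, and the 4-bit loudness/pitch quantization below are invented to mirror the behaviour he describes (an optimizer returns candidate chords with energies, and several low-energy solutions are merged into one "superposition chord").

```python
# Illustrative sketch only: invented stand-in for the D-Wave workflow
# Dr Kirke describes. No real quantum hardware is involved here.

def to_bits(value, n_bits=4):
    """Quantize a normalized value (0.0-1.0) to an n-bit binary list,
    loosely mirroring the 4-qubit loudness/pitch encoding described."""
    level = min(int(value * (2 ** n_bits)), 2 ** n_bits - 1)
    return [(level >> i) & 1 for i in reversed(range(n_bits))]

def superposition_chord(solutions, k=3):
    """Merge the k lowest-energy chord solutions into one 'superposition
    chord': the union of their pitches (MIDI note numbers)."""
    best = sorted(solutions, key=lambda s: s["energy"])[:k]
    return sorted({p for s in best for p in s["pitches"]})

# Invented example: candidate chords "returned" for one sung note.
candidates = [
    {"pitches": [60, 64, 67], "energy": -1.8},  # C major
    {"pitches": [60, 63, 67], "energy": -1.7},  # C minor
    {"pitches": [62, 65, 69], "energy": -0.9},  # D minor
]

print(to_bits(0.5))                       # 4-bit loudness code
print(superposition_chord(candidates, 2)) # merged low-energy chords
```

Merging the two lowest-energy chords above yields both the major and minor third over the same root, a crude classical echo of the "superposed solutions" metaphor.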

Thank you Dr. Kirke for a fascinating technical description and for a description of Juliette Pochin that makes one long to hear her in performance.

For anyone thinking of attending the performance, or just curious, you can find out more about the Port Eliot festival here, Juliette Pochin here, and Alexis Kirke here.

For anyone wondering about data sonification, I also have a Feb. 7, 2014 post featuring a data sonification project by Dr. Domenico Vicinanza which includes a sound clip of his Voyager 1 & 2 spacecraft duet.

Music videos for teaching science and a Baba Brinkman update

I have two news bits concerning science and music.

Music videos and science education

Researchers in the US and New Zealand have published a study on how effective music videos are for teaching science. Hint: there are advantages but don’t expect perfection. From a May 25, 2016 news item on ScienceDaily,

Does “edutainment” such as content-rich music videos have any place in the rapidly changing landscape of science education? A new study indicates that students can indeed learn serious science content from such videos.

The study, titled ‘Leveraging the power of music to improve science education’ and published in the International Journal of Science Education, examined over 1,000 students in a three-part experiment, comparing learners’ understanding and engagement in response to 24 musical and non-musical science videos.

A May 25, 2016 Taylor & Francis (publishers) press release, which originated the news item, quickly gets to the point,

The central findings were that (1) across ages and genders, K-16 students who viewed music videos improved their scores on quizzes about content covered in the videos, and (2) students preferred music videos to non-musical videos covering equivalent content.  Additionally, the results hinted that videos with music might lead to superior long-term retention of the content.

“We tested most of these students outside of their normal classrooms,” commented lead author Greg Crowther, Ph.D., a lecturer at the University of Washington.  “The students were not forced by their teachers to watch these videos, and they didn’t have the spectre of a low course grade hanging over their heads.  Yet they clearly absorbed important information, which highlights the great potential of music to deliver key content in an appealing package.”

The study was inspired by the classroom experiences of Crowther and co-author Tom McFadden [emphasis mine], who teaches science at the Nueva School in California.  “Tom and I, along with many others, write songs for and with our students, and we’ve had a lot of fun doing that,” said Crowther.  “But rather than just assuming that this works, we wanted to see whether we could document learning gains in an objective way.”

The findings of this study have implications for teacher practitioners, policy-makers and researchers who are looking for innovative ways to improve science education.  “Music will always be a supplement to, rather than a substitute for, more traditional forms of teaching,” said Crowther.  “But teachers who want to connect with their students through music now have some additional data on their side.”

The paper is quite interesting (two of the studies were run in the US and one in New Zealand) and I notice that Tom McFadden of the Science Rap Academy is one of the authors (more about him later); here’s a link to and a citation for the paper,

Leveraging the power of music to improve science education by Gregory J. Crowther, Tom McFadden, Jean S. Fleming, & Katie Davis. International Journal of Science Education, Volume 38, Issue 1, 2016, pages 73–95. DOI: 10.1080/09500693.2015.1126001. Published online: 18 Jan 2016.

This paper is open access. As I noted earlier, the research is promising but science music videos are not the answer to all science education woes.

One of my more recent pieces featuring Tom McFadden and his Science Rap Academy is this April 21, 2015 posting. The 2016 edition of the academy started in January 2016 according to David Bruggeman’s Jan. 27, 2016 posting on his Pasco Phronesis blog. You can find the Science Rap Academy’s YouTube channel here and the playlist here.

Canadian science rappers and musicians

I promised the latest about Baba Brinkman and here it is (from a May 14, 2016 notice received via email),

Not many people know this, but Dylan Thomas [legendary Welsh poet] was one of my early influences as a poet and one of the reasons I was inspired to pursue versification as a career. Well now Literature Wales has commissioned me to write and record a new rap/poem in celebration of Dylan Day 2016 (today [May 14, 2016]), which I dutifully did. You can watch the video here to check out what a hip-hop flow and a Thomas poem have in common.

In other news, I’ll be performing a couple of one-off shows over the next few weeks. Rap Guide to Religion is on at NECSS in New York on May 15 (tomorrow) [Note: Link removed as the event date has now passed] and Rap Guide to Evolution is at the Reason Rally in DC June 2nd [2016]. I’m also continuing with the off-Broadway run of Rap Guide to Climate Chaos, recording the climate chaos album and looking to my next round of writing and touring, so if you have ideas about venues I could play please send me a note.

You can find out more about Baba Brinkman (a Canadian rapper who has written and performed many science raps and lives in New York) here.

There’s another Canadian who produces musical science videos, Tim Blais (physicist and Montréaler) who was most recently featured here in a Feb. 12, 2016 posting. You can find a selection of Blais’ videos on his A Capella Science channel on YouTube.

Interconnected performance analysis music hub shared by McGill University and Université de Montréal announced* June 2, 2016

The press releases promise the Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT) will shape the future of music. The CIRMMT June 2, 2016 (Future of Music) press release (received via email) describes the funding support,

A significant investment of public and private support that will redefine the future of music research in Canada by transforming the way musicians compose, listen to, and perform music.

The Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT), the Schulich School of Music of McGill University and the Faculty of Music of l’Université de Montréal are creating a unique interconnected research hub that will quite literally link two exceptional spaces at two of Canada’s most renowned music schools.

Imagine a new space and community where musicians, scientists and engineers join forces to gain a better understanding of the influence that music has on individuals as well as their physical, psychological and even neurological conditions; experience the acoustics of an 18th century Viennese concert hall created with the touch of a fingertip; or attend an orchestral performance in one concert hall while hearing and seeing musicians performing from a completely different venue across town… All this and more will soon become possible here in Montreal!

The combination of public and private gifts will broaden our musical horizons exponentially thanks to a significant investment for music research in Canada: over $14.5 million in grants from the Canada Foundation for Innovation (CFI), the Government of Quebec and the Fonds de Recherche du Québec (FRQ), plus a substantial additional $2.5-million gift from private philanthropy.

“We are grateful for this exceptional investment in music research from both the federal and provincial governments and from our generous donors,” says McGill Principal Suzanne Fortier. “This will further the collaboration between these two outstanding music schools and support the training of the next generation of music researchers and artists. For anyone who loves music, this is very exciting news.”

There’s not much technical detail in this one but here it is,

Digital channels coupling McGill University’s Music Multimedia Room (MMR – a large, sound-isolated performance lab) and l’Université de Montréal’s Salle Claude Champagne ([SCC -] a superb concert hall) will transform these two exceptional spaces into the world’s leading research facility for the scientific study of live performance, movement of recorded sound in space, and distributed performance (where musicians in different locations perform together).

“The interaction between scientific/technological research and artistic practice is one of the most fruitful avenues for future developments in both fields. This remarkable investment in music research is a wonderful recognition of the important contributions of the arts to Canadian society”, says Sean Ferguson, Dean of Schulich School of Music

The other CIRMMT June 2, 2016 (Collaborative hub) press release (received via email) elaborates somewhat on the technology,

The MMR (McGill University’s Music Multimedia Room) will undergo complete renovations which include the addition of high quality variable acoustical treatment and a state-of-the-art rigging system. An active enhancement and sound spatialization system, together with stereoscopic projectors and displays, will provide virtual acoustic and immersive environments. At the SCC (l’Université de Montréal’s Salle Claude Champagne), the creation of a laboratory, a control room and a customizable rigging system will enable the installation and utilization of new research equipment in this acoustically-rich environment. These improvements will drastically augment the research possibilities in the hall, making it a unique hub in Canada for researchers to validate their experiments in a real concert hall.

This infrastructure will provide exceptional spaces for performance analysis of multiple performers and audience members simultaneously, with equipment such as markerless motion-capture equipment and eye trackers. It will also connect both spaces for experimentations on distributed performances and will make possible new kinds of multimedia artworks.

The research and benefits

The research program includes looking at audio recording technologies, audio and video in immersive environments, and ultra-videoconferencing, leading to the development of new technologies for audio recording, film, television, distance education, and multi-media artworks; as well as a focus on cognition and perception in musical performance by large ensembles and on the rhythmical synchronization and sound blending of performers.

Social benefits include distance learning, videoconferencing, and improvements to the quality of both recorded music and live performance. Health benefits include improved hearing aids, noise reduction in airplanes and public spaces, and science-based music pedagogies and therapy. Economic benefits include innovations in sound recording, film and video games, and the training of highly qualified personnel across disciplines.

Amongst other activities they will be exploring data sonification as it relates to performance.

Hopefully, I’ll have more after the livestreamed press conference being held this afternoon, June 2, 2016 (2:30 pm EST) at the CIRMMT.

*’opens’ changed to ‘announced’ on June 2, 2016 at 1335 hours PST.

ETA June 8, 2016: I did attend the press conference via livestream. There was some lovely violin played and the piece proved to be a demonstration of the work they’re hoping to expand on now that there will be a CIRMMT (pronounced kermit). There was a lot of excitement and I think that’s largely due to the number of years it’s taken to get to this point. One of the speakers reminisced about being a music student at McGill in the 1970s when they first started talking about getting a new music building.

They did get their building but were unable to complete it until these 2016 funds were awarded. Honestly, all the speakers seemed a bit giddy with delight. I wish them all congratulations!

The song is you: a McGill University, University of Cambridge, and Stanford University research collaboration

These days I’m thinking about sound, music, spoken word, and more as I prepare for a new art/science piece. It’s very early stages so I don’t have much more to say about it but along those lines of thought, there’s a recent piece of research on music and personality that caught my eye. From a May 11, 2016 news item on phys.org,

A team of scientists from McGill University, the University of Cambridge, and Stanford Graduate School of Business developed a new method of coding and categorizing music. They found that people’s preference for these musical categories is driven by personality. The researchers say the findings have important implications for industry and health professionals.

A May 10, 2016 McGill University news release, which originated the news item, provides some fascinating suggestions for new categories for music,

There are a multitude of adjectives that people use to describe music, but in a recent study to be published this week in the journal Social Psychological and Personality Science, researchers show that musical attributes can be grouped into three categories. Rather than relying on the genre or style of a song, the team of scientists led by music psychologist David Greenberg with the help of Daniel J. Levitin from McGill University mapped the musical attributes of song excerpts from 26 different genres and subgenres, and then applied a statistical procedure to group them into clusters. The study revealed three clusters, which they labeled Arousal, Valence, and Depth. Arousal describes intensity and energy in music; Valence describes the spectrum of emotions in music (from sad to happy); and Depth describes intellect and sophistication in music. They also found that characteristics describing music from a single genre (both rock and jazz separately) could be grouped in these same three categories.
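The three-cluster scheme is easy to picture in code: once attributes are grouped, an excerpt can be scored on each cluster by averaging its attribute ratings. The sketch below is purely illustrative; the attribute groupings and the ratings are invented for this post, and the study derived its clusters statistically from listener data rather than by assigning them up front.

```python
# Illustrative only: invented attribute groupings and ratings for
# scoring one musical excerpt on Arousal, Valence, and Depth.

CLUSTERS = {
    "arousal": ["intense", "loud", "energetic"],      # intensity and energy
    "valence": ["happy", "fun", "lively"],            # emotional spectrum
    "depth":   ["sophisticated", "complex", "poetic"] # intellect, sophistication
}

def cluster_scores(ratings):
    """Average the 0-10 attribute ratings within each cluster."""
    return {
        name: sum(ratings[attr] for attr in attrs) / len(attrs)
        for name, attrs in CLUSTERS.items()
    }

excerpt = {  # invented ratings for a single song excerpt
    "intense": 8, "loud": 7, "energetic": 9,
    "happy": 3, "fun": 4, "lively": 5,
    "sophisticated": 6, "complex": 7, "poetic": 5,
}
print(cluster_scores(excerpt))
```

A high-arousal, middling-depth profile like this one would describe, say, an aggressive rock track; the study's point is that such a profile says more about listener appeal than the genre label does.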

The findings suggest that this may be a useful alternative to grouping music into genres, which is often based on social connotations rather than the attributes of the actual music. It also suggests that those in academia and industry (e.g. Spotify and Pandora) that are already coding music on a multitude of attributes might save time and money by coding music around these three composite categories instead.

The researchers also conducted a second study of nearly 10,000 Facebook users who indicated their preferences for 50 musical excerpts from different genres. The researchers were then able to map preferences for these three attribute categories onto five personality traits and 30 detailed personality facets. For example, they found people who scored high on Openness to Experience preferred Depth in music, while Extraverted excitement-seekers preferred high Arousal in music. And those who scored high on Neuroticism preferred negative emotions in music, while those who were self-assured preferred positive emotions in music. As the title from the old Kern and Hammerstein song suggests, “The Song is You”. That is, the musical attributes that you like most reflect your personality. It also provides scientific support for what Joni Mitchell said in a 2013 interview with the CBC: “The trick is if you listen to that music and you see me, you’re not getting anything out of it. If you listen to that music and you see yourself, it will probably make you cry and you’ll learn something about yourself and now you’re getting something out of it.”

The researchers hope that this information will not only be helpful to music therapists but also for health care professions and even hospitals. For example, recent evidence has showed that music listening can increase recovery after surgery. The researchers argue that information about music preferences and personality could inform a music listening protocol after surgery to boost recovery rates.

The article is another in a series of studies that Greenberg and his team have published on music and personality. This past July [2015], they published an article in PLOS ONE showing that people’s musical preferences are linked to thinking styles. And in October [2015], they published an article in the Journal of Research in Personality, identifying the personality trait Openness to Experience as a key predictor of musical ability, even in non-musicians. These series of studies tell us that there are close links between our personality and musical behavior that may be beyond our control and awareness.

Readers can find out how they score on the music and personality quizzes at www.musicaluniverse.org.

David M. Greenberg, lead author from Cambridge University and City University of New York said: “Genre labels are informative but we’re trying to transcend them and move in a direction that points to the detailed characteristics in music that are driving people’s preferences and emotional reactions.”

Greenberg added: “As a musician, I see how vast the powers of music really are, and unfortunately, many of us do not use music to its full potential. Our ultimate goal is to create science that will help enhance the experience of listening to music. We want to use this information about personality and preferences to increase the day-to-day enjoyment and peak experiences people have with music.”

William Hoffman in a May 11, 2016 article for Inverse describes the work in connection with recently released new music from Radiohead and an upcoming release from Chance the Rapper (along with a brief mention of Drake), Note: Links have been removed,

Music critics regularly scour Thesaurus.com for the best adjectives to throw into their perfectly descriptive melodious disquisitions on the latest works from Drake, Radiohead, or whomever. And listeners of all walks have, since the beginning of music itself, been guilty of lazily pigeonholing artists into numerous socially constructed genres. But all of that can be (and should be) thrown out the window now, because new research suggests that, to perfectly match music to a listener’s personality, all you need are these three scientific measurables [arousal, valence, depth].

This suggests that a slow, introspective gospel song from Chance The Rapper’s upcoming album could have the same depth as a track from Radiohead’s A Moon Shaped Pool. So a system of categorization based on Greenberg’s research would, surprisingly but rightfully, place the rap and rock works in the same bin.

Here’s a link to and a citation for the latest paper,

The Song Is You: Preferences for Musical Attribute Dimensions Reflect Personality by David M. Greenberg, Michal Kosinski, David J. Stillwell, Brian L. Monteiro, Daniel J. Levitin, and Peter J. Rentfrow. Social Psychological and Personality Science, 1948550616641473, first published on May 9, 2016

This paper is behind a paywall.

Here’s a link to and a citation for the October 2015 paper

Personality predicts musical sophistication by David M. Greenberg, Daniel Müllensiefen, Michael E. Lamb, Peter J. Rentfrow. Journal of Research in Personality, Volume 58, October 2015, pages 154–158. DOI: 10.1016/j.jrp.2015.06.002. Note: A Feb. 2016 erratum is also listed.

The paper is behind a paywall and it looks as if you will have to pay for it and for the erratum separately.

Here’s a link to and a citation for the July 2015 paper,

Musical Preferences are Linked to Cognitive Styles by David M. Greenberg, Simon Baron-Cohen, David J. Stillwell, Michal Kosinski, Peter J. Rentfrow. PLOS ONE [Public Library of Science ONE], published July 22, 2015. http://dx.doi.org/10.1371/journal.pone.0131151

This paper is open access.

I tried out the research project’s website, The Musical Universe, by filling out the Musical Taste questionnaire. Unfortunately, I did not receive my results. Since the team’s latest research has just been reported, I imagine there are many people trying to do the same thing. It might be worth your while to wait a bit if you want to try this out, or you can fill out one of their other questionnaires. Oh, and you might want to allow at least 20 minutes.

Will AI ‘artists’ be able to fool a panel judging entries to the Neukom Institute Prizes in Computational Arts?

There’s an intriguing competition taking place at Dartmouth College (US) according to a May 2, 2016 piece on phys.org (Note: Links have been removed),

Algorithms help us to choose which films to watch, which music to stream and which literature to read. But what if algorithms went beyond their jobs as mediators of human culture and started to create culture themselves?

In 1950 English mathematician and computer scientist Alan Turing published a paper, “Computing Machinery and Intelligence,” which starts off by proposing a thought experiment that he called the “Imitation Game.” In one room is a human “interrogator” and in another room a man and a woman. The goal of the game is for the interrogator to figure out which of the unknown hidden interlocutors is the man and which is the woman. This is to be accomplished by asking a sequence of questions with responses communicated either by a third party or typed out and sent back. “Winning” the Imitation Game means getting the identification right on the first shot.

Turing then modifies the game by replacing one interlocutor with a computer, and asks whether a computer will be able to converse sufficiently well that the interrogator cannot tell the difference between it and the human. This version of the Imitation Game has come to be known as the “Turing Test.”

On May 18 [2016] at Dartmouth, we will explore a different area of intelligence, taking up the question of distinguishing machine-generated art. Specifically, in our “Turing Tests in the Creative Arts,” we ask if machines are capable of generating sonnets, short stories, or dance music that is indistinguishable from human-generated works, though perhaps not yet so advanced as Shakespeare, O. Henry or Daft Punk.

The piece on phys.org is a crossposting of a May 2, 2016 article by Michael Casey and Daniel N. Rockmore for The Conversation. The article goes on to describe the competitions,

The dance music competition (“Algorhythms”) requires participants to construct an enjoyable (fun, cool, rad, choose your favorite modifier for having an excellent time on the dance floor) dance set from a predefined library of dance music. In this case the initial random “seed” is a single track from the database. The software package should be able to use this as inspiration to create a 15-minute set, mixing and modifying choices from the library, which includes standard annotations of more than 20 features, such as genre, tempo (bpm), beat locations, chroma (pitch) and brightness (timbre).
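As an illustration only (the actual feature set and entrants’ code aren’t public here), one simple approach to building a set from such a library is greedy chaining: start at the seed track and repeatedly pick the unused track that mixes most smoothly with the current one. The track names, feature values, and distance weighting below are all invented:

```python
# Hypothetical annotated library: each track carries a couple of the
# features the competition mentions (tempo in bpm, brightness/timbre).
LIBRARY = [
    {"id": "t1", "bpm": 124, "brightness": 0.62, "minutes": 3.5},
    {"id": "t2", "bpm": 126, "brightness": 0.58, "minutes": 4.0},
    {"id": "t3", "bpm": 140, "brightness": 0.80, "minutes": 3.0},
    {"id": "t4", "bpm": 122, "brightness": 0.55, "minutes": 4.5},
]

def distance(a, b):
    """Crude mixability score: closer tempo and timbre blend more smoothly."""
    return abs(a["bpm"] - b["bpm"]) + 50 * abs(a["brightness"] - b["brightness"])

def build_set(seed_id, minutes=15.0):
    """Greedily chain the nearest unused track until the set is long enough."""
    pool = {t["id"]: t for t in LIBRARY}
    current = pool.pop(seed_id)
    playlist, total = [current["id"]], current["minutes"]
    while pool and total < minutes:
        nxt = min(pool.values(), key=lambda t: distance(current, t))
        pool.pop(nxt["id"])
        playlist.append(nxt["id"])
        total += nxt["minutes"]
        current = nxt
    return playlist

print(build_set("t1"))  # → ['t1', 't2', 't4', 't3']
```

A real entry would presumably do far more (beat-matched transitions, key compatibility via the chroma features, arc of energy across the set), but the greedy skeleton shows how the seed track “inspires” everything that follows.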

In what might seem a stiffer challenge, the sonnet and short story competitions (“PoeTix” and “DigiLit,” respectively) require participants to submit self-contained software packages that upon the “seed” or input of a (common) noun phrase (such as “dog” or “cheese grater”) are able to generate the desired literary output. Moreover, the code should ideally be able to generate an infinite number of different works from a single given prompt.

To perform the test, we will screen the computer-made entries to eliminate obvious machine-made creations. We’ll mix human-generated work with the rest, and ask a panel of judges to say whether they think each entry is human- or machine-generated. For the dance music competition, scoring will be left to a group of students, dancing to both human- and machine-generated music sets. A “winning” entry will be one that is statistically indistinguishable from the human-generated work.
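“Statistically indistinguishable” can be made concrete with a simple binomial check: if the judges’ rate of correct human-vs-machine identifications is no better than coin-flipping, the entry passes. A toy sketch, with invented numbers that are not from the competition:

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """Probability of k or more correct guesses out of n under pure chance."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Suppose 20 judgments are made on an entry and 13 correctly call it
# machine-made. Is that edge over chance meaningful?
p_value = p_at_least(13, 20)
print(round(p_value, 3))  # → 0.132
```

Since a p-value of roughly 0.13 is well above the conventional 0.05 threshold, judges doing this well have not demonstrably beaten chance, and the entry would count as indistinguishable under this reading of the criterion.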

The competitions are open to any and all comers [competition is now closed; the deadline was April 15, 2016]. To date, entrants include academics as well as nonacademics. As best we can tell, no companies have officially thrown their hats into the ring. This is somewhat of a surprise to us, as in the literary realm companies are already springing up around machine generation of more formulaic kinds of “literature,” such as earnings reports and sports summaries, and there is of course a good deal of AI automation around streaming music playlists, most famously Pandora.

The authors discuss issues with judging the entries,

Evaluation of the entries will not be entirely straightforward. Even in the initial Imitation Game, the question was whether conversing with men and women over time would reveal their gender differences. (It’s striking that this question was posed by a closeted gay man [Alan Turing].) The Turing Test, similarly, asks whether the machine’s conversation reveals its lack of humanity not in any single interaction but in many over time.

It’s also worth considering the context of the test/game. Is the probability of winning the Imitation Game independent of time, culture and social class? Arguably, as we in the West approach a time of more fluid definitions of gender, that original Imitation Game would be more difficult to win. Similarly, what of the Turing Test? In the 21st century, our communications are increasingly with machines (whether we like it or not). Texting and messaging have dramatically changed the form and expectations of our communications. For example, abbreviations, misspellings and dropped words are now almost the norm. The same considerations apply to art forms as well.

The authors also pose the question: Who is the artist?

Thinking about art forms leads naturally to another question: who is the artist? Is the person who writes the computer code that creates sonnets a poet? Is the programmer of an algorithm to generate short stories a writer? Is the coder of a music-mixing machine a DJ?

Where is the divide between the artist and the computational assistant and how does the drawing of this line affect the classification of the output? The sonnet form was constructed as a high-level algorithm for creative work – though one that’s executed by humans. Today, when the Microsoft Office Assistant “corrects” your grammar or “questions” your word choice and you adapt to it (either happily or out of sheer laziness), is the creative work still “yours” or is it now a human-machine collaborative work?

That’s an interesting question and one I asked in the context of two ‘mashup’ art exhibitions in Vancouver (Canada) in my March 8, 2016 posting.

Getting back to Dartmouth College and its Neukom Institute Prizes in Computational Arts, here’s a list of the competition judges from the competition homepage,

David Cope (Composer, Algorithmic Music Pioneer, UCSC Music Professor)
David Krakauer (President, the Santa Fe Institute)
Louis Menand (Pulitzer Prize winning author and Professor at Harvard University)
Ray Monk (Author, Biographer, Professor of Philosophy)
Lynn Neary (NPR: Correspondent, Arts Desk and Guest Host)
Joe Palca (NPR: Correspondent, Science Desk)
Robert Siegel (NPR: Senior Host, All Things Considered)

The announcements will be made Wednesday, May 18, 2016. I can hardly wait!

Addendum

Martin Robbins has written a rather amusing May 6, 2016 post for the Guardian science blogs on AI and art critics where he also notes that the question: What is art? is unanswerable (Note: Links have been removed),

Jonathan Jones is unhappy about artificial intelligence. It might be hard to tell from a casual glance at the art critic’s recent column, “The digital Rembrandt: a new way to mock art, made by fools,” but if you look carefully the subtle clues are there. His use of the adjectives “horrible, tasteless, insensitive and soulless” in a single sentence, for example.

The source of Jones’s ire is a new piece of software that puts… I’m so sorry… the ‘art’ into ‘artificial intelligence’. By analyzing a subset of Rembrandt paintings that featured ‘bearded white men in their 40s looking to the right’, its algorithms were able to extract the key features that defined the Dutchman’s style. …

Of course an artificial intelligence is the worst possible enemy of a critic, because it has no ego and literally does not give a crap what you think. An arts critic trying to deal with an AI is like an old school mechanic trying to replace the battery in an iPhone – lost, possessing all the wrong tools and ultimately irrelevant. I’m not surprised Jones is angry. If I were in his shoes, a computer painting a Rembrandt would bring me out in hives.

Can a computer really produce art? We can’t answer that without dealing with another question: what exactly is art? …

I wonder what either Robbins or Jones will make of the Dartmouth competition?

NASA calling for submissions (poetry, video, art, music, etc.) for space travel

The US National Aeronautics and Space Administration (NASA) has made an open call for art works that could be part of the Origins, Spectral Interpretation, Resource Identification, Security-Regolith Explorer (OSIRIS-REx) spacecraft mission bound for Bennu (an asteroid). From a Feb. 23, 2016 NASA news release on EurekAlert,

OSIRIS-REx is scheduled to launch in September and travel to the asteroid Bennu. The #WeTheExplorers campaign invites the public to take part in this mission by expressing, through art, how the mission’s spirit of exploration is reflected in their own lives. Submitted works of art will be saved on a chip on the spacecraft. The spacecraft already carries a chip with more than 442,000 names submitted through the 2014 “Messages to Bennu” campaign.

“The development of the spacecraft and instruments has been a hugely creative process, where ultimately the canvas is the machined metal and composites preparing for launch in September,” said Jason Dworkin, OSIRIS-REx project scientist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. “It is fitting that this endeavor can inspire the public to express their creativity to be carried by OSIRIS-REx into space.”

A submission may take the form of a sketch, photograph, graphic, poem, song, short video or other creative or artistic expression that reflects what it means to be an explorer. Submissions will be accepted via Twitter and Instagram until March 20, 2016. For details on how to include your submission on the mission to Bennu, go to:

http://www.asteroidmission.org/WeTheExplorers

“Space exploration is an inherently creative activity,” said Dante Lauretta, principal investigator for OSIRIS-REx at the University of Arizona, Tucson. “We are inviting the world to join us on this great adventure by placing their art work on the OSIRIS-REx spacecraft, where it will stay in space for millennia.”

The spacecraft will voyage to the near-Earth asteroid Bennu to collect a sample of at least 60 grams (2.1 ounces) and return it to Earth for study. Scientists expect Bennu may hold clues to the origin of the solar system and the source of the water and organic molecules that may have made their way to Earth.

Goddard provides overall mission management, systems engineering and safety and mission assurance for OSIRIS-REx. The University of Arizona, Tucson leads the science team and observation planning and processing. Lockheed Martin Space Systems in Denver is building the spacecraft. OSIRIS-REx is the third mission in NASA’s New Frontiers Program. NASA’s Marshall Space Flight Center in Huntsville, Alabama, manages New Frontiers for the agency’s Science Mission Directorate in Washington.

I wonder why the names were drawn from Egyptian mythology, as in Osiris and Bennu. For those who need a refresher on the topic, here’s more from the Osiris entry on Wikipedia (Note: Links have been removed),

Osiris (/oʊˈsaɪərᵻs/, alternatively Ausir, Asiri or Ausar, among other spellings), was an Egyptian god, usually identified as the god of the afterlife, the underworld, and the dead, but more appropriately as the god of transition, resurrection, and regeneration.

Then there’s this from the Bennu entry on Wikipedia (Note: Links have been removed),

The Bennu is an ancient Egyptian deity linked with the sun, creation, and rebirth. It may have been the inspiration for the phoenix in Greek mythology.

You can find out more about Bennu, the asteroid, on its webpage, The Long Strange Trip of Bennu, on the NASA website (which also features a video animation). Note: A link has been removed,

… Born from the rubble of a violent collision, hurled through space for millions of years and dismembered by the gravity of planets, asteroid Bennu had a tough life in a rough neighborhood: the early solar system. …

“We are going to Bennu because we want to know what it has witnessed over the course of its evolution,” said Edward Beshore of the University of Arizona, Deputy Principal Investigator for NASA’s asteroid-sample-return mission OSIRIS-REx (Origins, Spectral Interpretation, Resource Identification, Security – Regolith Explorer). The mission will be launched toward Bennu in late 2016, arrive at the asteroid in 2018, and return a sample of Bennu’s surface to Earth in 2023.

“Bennu’s experiences will tell us more about where our solar system came from and how it evolved. Like the detectives in a crime show episode, we’ll examine bits of evidence from Bennu to understand more completely the story of the solar system, which is ultimately the story of our origin.”

As for the spacecraft, you can find out more about OSIRIS-REx here.

Getting back to the artwork, Sarah Cascone has written a Feb. 22, 2016 posting for artnet news, which features the call for submissions and some work that has already been submitted (Note: Links have been removed),

The near-Earth asteroid Bennu will become the first extra-terrestrial art gallery, with the space agency inviting the public to contribute works of art that are inspired by the spirit of exploration.

The project will follow other important moments in space art history, which include work by Invader traveling aboard the International Space Station, conceptual artwork on the UKube-1 satellite, and even a bonsai tree launched into space.

Here’s a selection of the artworks being embedded in Cascone’s posting,

Daughter’s is spacebound! Fitting tribute to a pioneering, star-loving musician @OSIRISREx

For more inspiration, check out Cascone’s Feb. 22, 2016 posting.

Good luck!

Afrofuturism in the UK’s Guardian newspaper and as a Future Tense Dec. 2015 event

My introduction to the term Afrofuturism was in a March 11, 2015 posting by Jessica Bland for the Guardian in the Technology/Political Science section. It was written on the occasion of a then upcoming FutureFest event,

This is unapologetically connected to FutureFest, the festival Nesta (where I work) is holding this weekend in London Bridge. These thoughts represent the ideas that piqued my interest while curating talks and exhibits based on the thought experiment of a future African city-superpower. George Clinton, Spoek Mathambo, Tegan Bristow and Fabian-Carlos Guhl (from Ampion Venture Bus) will be speaking during the weekend. Thomas Aquilina is displaying photographs from his trip and the architects of the Lagos 2060 project will take part in a debate on whether their fiction can lead to a different kind of future.

In anticipation of the March 2015 FutureFest event, Bland had written a roundup piece about “New sounds from South Africa and Nigeria’s urban science fiction [that] could change the future of technology and the city.” Here are some excerpts from her piece (Note: Links have been removed),

Strong stories or visions of the future stick around. The 1920s sci-fi fantasy of a jetpack commute still pops up in discussions about the future of technology, not to mention as an option on the Citymapper travel app. By co-opting or creating new visions of the future, it seems possible to influence the development of new products and services – from consumer tech to urban infrastructure. A new generation of African artists is taking over the mantle of Afrofuturist arts from a US-centred crowd. They could bring a welcome change to how technology is developed in the region, as well as a challenge to the dominance of imported plans for urban development.

Last Thursday’s London gig from Fantasma was sweaty and boisterous. It was also very different from the remix of Joy Division’s She’s Lost Control that brought front man Spoek Mathambo to the attention of a global audience a couple of years ago. Fantasma is a group of South African musicians with different backgrounds. Guitarist Bhekisenzo Cele started the gig with three of his own songs, introducing the traditional Zulu maskandi music that they went on to mix with shangaan electro, hiphop, punk, electronica and everything in between.

The gig had a buzz about it. But the performance was from a new collective trying things out; it wasn’t as genre-smashing as expected. And expectations ride high for Spoek. In 2011, he titled a collection from his back catalogue ‘Beyond Afrofuturism’. He took on, at least in name, a whole Afro-American cultural movement: embodied by musicians like Sun Ra, George Clinton and Drexciya. A previous post on this blog by Chardine Taylor-Stone describes the roots of Afrofuturism in science fiction that centres on space travel and human enhancement. But she goes on to say: “Afrofuturism also goes beyond spaceships, androids and aliens, and encompasses African mythology and cosmology with an aim to connect those from across the Black Diaspora to their forgotten African ancestry.” Spoek shares what he calls a cultural lineage with this movement. But he is not Afro-American. He also shares a cultural lineage with the sounds of South African musicians he grew up listening to.

Other forms of art are taking an increasingly activist role in the future of technology. Lydia Nicholas’s description of the relationship between Douglas Adams’s fictional Hitchhiker’s Guide and the real life development of the iPad shows how science fiction can effortlessly influence the development of new technology.

The science fiction collection Lagos 2060 is a more purposeful intervention. Published in 2013, it speculates about what it will be like to live in Lagos 100 years after Nigeria gained independence from the UK. It was born out of a creative writing workshop initiated by DADA books in Lagos. Foundation director of DADA, Ayodele Arigbabu, described the collection and other similar video and visual art work (in an email): “Far more than aesthetic indulgence, these renditions are a calibration of the changes deemed necessary in today’s political, technical and cultural infrastructure.”

Bland also explores a history of this movement,

Gaston Berger was the Senegalese founder of the academic journal Prospective in 1957. To many, he was the first futurist, or at least one of the first people to describe themselves as one. The approach he founded promotes the practice of playing out the human consequences of today’s actions. This is about avoiding a fatalistic approach to the future: about being proactive and provoking change, as much as anticipating it.

Berger’s early work spawned a generation, and then another and another, of professional futurists. They work in different ways and different places. Some are in government, enticing and frightening politicians with the prospect of a different transport system, healthcare sector or national security regime. Some are consultants to large companies, offering advice on the way that trends like 3D printing or flying robots will change their sector. An article from 1996 does a good job of summarising the principles of this movement: don’t act like an ostrich and ignore the future by putting your head in the sand; don’t act like a fireman and just respond to threats to your future; and don’t focus just on insurance against the future.

Bland has written an interesting and sprawling piece, which in some way reflects the subject. Africa is a huge and sprawling continent.

Slate, a US online magazine, along with New America and Arizona State University, is hosting a Future Tense event on Afrofuturism, but this seems to be quite US-centric. From the Future Tense Afrofuturism event webpage on the Slate website (Note: Links have been removed),

Future Tense is hosting a conversation about Afrofuturism in New York City on December 3rd, 2015 from 6:30-8:30 p.m.

Afrofuturism emphasizes the intersection of black cultures with questions of imagination, liberation, and technology. Rooted in works like those of science fiction author Octavia Butler, avant-garde jazz legend Sun Ra, and George Clinton, Afrofuturism explores concepts of race, space and time in order to ask the existential question posed by critic Mark Dery: “Can a community whose past has been deliberately erased imagine possible futures?”

Will the alternative futures and realities Afrofuturism describes transform and reshape the concept of black identity? Join Future Tense for a discussion on Afrofuturism and its unique vantage on the challenges faced by black Americans and others throughout the African diaspora.

During the event, enjoy an Afrofuturist inspired drink from 67 Orange Street. Follow the discussion online using #Afrofuturism and by following @NewAmericaNYC and @FutureTenseNow.

Click here to RSVP. Space is limited so register now!

PARTICIPANTS

Michael Bennett
Principal Investigator, School for the Future of Innovation in Society, Arizona State University
@MGBennett

Ytasha Womack
Author, Afrofuturism: The World of Black Sci-Fi and Fantasy Culture and Post Black: How A New Generation is Redefining African American Identity
@ytashawomack

Juliana Huxtable
DJ and Artist
@HUXTABLEJULIANA

Walé Oyéjidé
Designer and Creative Director, Ikire Jones
@IkireJones

Aisha Harris
Staff writer, Slate
@craftingmystyle

It seems we have one word, Afrofuturism, and two definitions: one where Africa is referenced and one where African-American experience is referenced.

For anyone curious about Nesta, where Jessica Bland works and which hosts FutureFest, here’s more from its Wikipedia entry,

Nesta (formerly NESTA, National Endowment for Science, Technology and the Arts) is an independent charity that works to increase the innovation capacity of the UK.

The organisation acts through a combination of practical programmes, investment, policy and research, and the formation of partnerships to promote innovation across a broad range of sectors.

That’s it for now.

“Off The Top” is a science/comedy hour Sept. 9, 2015 at Vancouver’s (Canada) China Cloud

Baba Brinkman, a Canadian-born rapper who’s made a bit of a career in science circles and has been featured here many times for the ‘Rap Guide to Evolution’ and other pieces, will be performing in Vancouver on Sept. 9, 2015 at The China Cloud (524 Main Street). Doors 7:30 p.m., showtime 8 p.m., $15 cover.

It’s actually a two-part performance according to the Sept. 9, 2015 event page on Baba Brinkman’s website,

First: “Off The Top” is a science/comedy hour co-hosted by Baba and Heather [Berlin], exploring the neuroscience of improvisation and humour, and the odd-couple mash-up of science and rap in their marriage. …

Second: After an intermission, Baba will perform his new rap/science/comedy show ”Rap Guide to Climate Chaos”, which explores the science and politics of global warming.

Here’s more from the Off The Top page on Baba Brinkman’s website,

Science rapper Baba Brinkman (Rap Guide to Evolution) teams up with neuroscientist Dr. Heather Berlin to explore the brain basis of improvisation. What’s going on “under the hood” when a comedian or musician improvises? Why are the spontaneous moments of life always the most memorable? Does anything actually rhyme with Dorsolateral Prefrontal Cortex?

As for the Rap Guide to Climate Chaos, from its webpage on Baba Brinkman’s website,

Fringe First Award Winner Baba Brinkman (Rap Guide to Evolution) is the world’s first and only “peer reviewed rapper,” bringing science to the masses with his unique brand of hip-hop comedy theatrics. In “Rap Guide to Climate Chaos,” Baba breaks down the politics, economics, and science of global warming, following its surprising twists from the carbon cycle to the energy economy. If civilization is a party in full swing, are the climate cops about to pull the plug? And what happens if we just let it rage? With scientists, activists, contrarians, and the Pope adding their voices to the soundtrack, get ready for a funny and refreshing take on the world’s hottest topic.

I didn’t find much about The China Cloud but there was this January 20, 2010 article by Bob Kronbauer for vancouverisawesome.com,

Floating above Vancouver’s Chinatown rests the new studio/gallery space, The China Cloud. It is currently the home base to a handful of local bands – Analog Bell Service, No Gold, Macchu Picchu; four visual artists and comedy troupes Man Hussy and Bronx Cheer. This past Friday The China Cloud had its grand opening with an art show, some booze, and musical performances by Sun Wizard, My!Gay!Husband!, Analog Bell Service and Blue Violets. It was wall to wall people, with line-ups all night and a bit more hectic than what the artists behind the event expect it to be for future events – but what a way to step on the scene!

For anyone unfamiliar with Vancouver, The China Cloud is in an area that’s gentrifying but still retains its edgy character.

The article was well illustrated by Marcus Jolly’s photographs.

Finally, Dr. Heather Berlin was mentioned here in a March 6, 2015 post (scroll down about 75% of the way) highlighting International Women’s Day and various science communication projects including hers and Faith Salie’s, Science Goes to the Movies.

ETA Sept. 7, 2015: David Bruggeman gives a brief update on Baba Brinkman’s upcoming album release in his Sept. 5, 2015 posting on Pasco Phronesis.

Mathematics, music, art, architecture, culture: Bridges 2015

Thanks to Alex Bellos and Tash Reith-Banks for their July 30, 2015 posting on the Guardian science blog network for pointing towards the Bridges 2015 conference,

The Bridges Conference is an annual event that explores the connections between art and mathematics. Here is a selection of the work being exhibited this year, from a Pi pie which vibrates the number pi onto your hand to delicate paper structures demonstrating number sequences. This year’s conference runs until Sunday in Baltimore (Maryland, US).

To whet your appetite, here’s the Pi pie (from the Bellos/Reith-Banks posting),


Pi Pie by Evan Daniel Smith
Arduino, vibration motors, tinted silicone, pie tin
“This pie buzzes the number pi onto your hand. I typed pi from memory into a computer while using a program I wrote to record it and send it to motors in the pie. The placement of the vibrations on the five fingers uses the structure of the Japanese soroban abacus, and bears a resemblance to Asian hand mnemonics.”
Photograph: The Bridges Organisation
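Smith’s firmware isn’t published in the posting, but the soroban-style encoding he describes can be sketched: on a soroban abacus each digit splits into one “heaven” bead worth five and up to four unit beads, which maps naturally onto a thumb pulse plus zero to four finger pulses. This is a speculative reconstruction, not Smith’s code; the digit string and names are mine:

```python
# First digits of pi, hard-coded for the sketch (Smith typed his from memory).
PI_DIGITS = "314159265358979"

def soroban(digit):
    """Split one digit abacus-style into (thumb_active, finger_count)."""
    d = int(digit)
    return d >= 5, d % 5  # thumb carries the five; fingers carry the remainder

# Each tuple would drive the thumb motor plus that many finger motors.
encoded = [soroban(d) for d in PI_DIGITS]
print(encoded[:4])  # → [(False, 3), (False, 1), (False, 4), (False, 1)]
```

The appeal of the scheme is that every digit needs at most five simultaneous vibration sites, one per finger, which matches the five-motor layout the caption implies.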

You can find out more about Bridges 2015 here and, should you be in the vicinity of Baltimore, Maryland, as a member of the public, you are invited to view the artworks on July 31, 2015,

July 29 – August 1, 2015 (Wednesday – Saturday)
Excursion Day: Sunday, August 2
A Collaborative Effort by
The University of Baltimore and Bridges Organization

A Five-Day Conference and Excursion
Wednesday, July 29 – Saturday, August 1
(Excursion Day on Sunday, August 2)

The Bridges Baltimore Family Day on Friday afternoon July 31 will be open to the Public to visit the BB Art Exhibition and participate in a series of events such as BB Movie Festival, and a series of workshops.

I believe the conference is being held at the University of Baltimore. Presumably, that’s where you’ll find the art show, etc.