Category Archives: Music

Science and the arts: a science rap promotes civil discussion about science and religion; a science movie and a play; and a chemistry article about authenticating a Lawren Harris painting

Canadian-born rapper of science (and many other topics) Baba Brinkman sent me an update about his current doings (first mentioned in an Aug. 1, 2014 posting featuring his appearances at the 2014 Edinburgh Fringe Festival, the debut of his Rap Guide to Religion at the Fringe, and his Kickstarter campaign to raise money for the creation of an animated rap album of his new Rap Guide to Religion). Note: Links have been removed,

Greetings from Edinburgh! In the past two and half weeks I’ve done fifteen performances of The Rap Guide to Religion for a steadily building audience here at the Fringe, and we recently had a whole pile of awesome reviews published, which I will excerpt below, but first a funny story.

Yesterday [August 14, 2014] BBC [British Broadcasting Corporation] Sunday Morning TV was in to film my performance. They had a scheme to send a right wing conservative Christian to the show and then film us having an argument afterwards. The man they sent certainly has the credentials. Reverend George Hargreaves is a Pentecostal Minister and former leader of the UK Christian Party, as well as a young earth creationist and strong opponent of abortion and homosexuality. He led the protests that got “Jerry Springer the Opera” shut down in London a few years back, and is on record as saying that religion is not an appropriate subject for comedy. Before he converted to Christianity, the man was also a DJ and producer of pop music for the London gay scene, interesting background.

So after an hour of cracking jokes at religion’s expense, declaring myself an unapologetic atheist, and explaining why evolutionary science gives a perfectly satisfying naturalistic account of where religion comes from, I sat down with Reverend George and was gobsmacked when he started the interview with: “I don’t know if we’re going to have anything to debate about… I LOVED your show!” We talked for half an hour with the cameras rolling and at one point George said “I don’t know what we disagree about,” so I asked him: “Do you think one of your ancestors was a fish?” He declared that statement a fishy story and denied it, and then we found much to disagree about.

I honestly thought I had written a hard-hitting, provocative and controversial show, but it turns out the religious are loving it as much as the nonbelievers – and I’m not sure how I feel about that. I asked Reverend George why he wasn’t offended, even though he’s officially against comedy that targets religion, and he told me it’s because I take the religious worldview seriously, instead of lazily dismissing it as delusional. The key word here is “lazily” rather than “delusional” because I don’t pull punches about religion being a series of delusions, but I don’t think those delusions are pointless. I think they have evolved (culturally and genetically) to solve adaptive problems in the past, and for religious people accustomed to atheists being derisive and dismissive that’s a (semi) validating perspective.

To listen to songs from The Rap Guide to Religion, you need to back my Kickstarter campaign so I can raise the money to produce a proper record. To check out what the critics here in Edinburgh have to say about my take on religion, read on. And if you want to help organize a gig somewhere, just let me know. The show is open for bookings.

On Sunday Morning [August 17, 2014 GMT] my segment with Reverend George will air on BBC One, so we’ll see what a million British people think of the debate.

All the best from the religious fringe,

Baba

Here’s a link to the BBC One Sunday Morning Live show, where hopefully you’ll be able to catch the segment featuring Baba and Reverend George Hargreaves either livestreamed or shortly thereafter.

A science movie and a science play

Onto the science movie and the science play: in an Aug. 8, 2014 posting on his Pasco Phronesis blog, David Bruggeman writes about two upcoming biopics featuring Alan Turing and Stephen Hawking, respectively. Having covered the Turing movie here (at length) in a July 22, 2014 posting, here’s the new information about the Hawking movie from David’s Aug. 8, 2014 posting,

Alan Turing and Stephen Hawking are noted British scientists, well recognized for their work and for having faced significant challenges in their lives.  While they were in different fields and productive in different parts of the 20th century (Hawking is still with us), their stories will compete in movieplexes (at least in the U.S.) this November.

The Theory of Everything is scheduled for release on November 7 and focuses on the early career and life of Hawking.  He’s portrayed by Eddie Redmayne, and the film is directed by James Marsh.  Marsh has several documentaries to his credit, including the Oscar-winning Man on Wire.  Theory is the third film project on Hawking since 2004, but the first to get much attention outside of the United Kingdom (this might explain why it won’t debut in the U.K. until New Year’s Day).  It premieres at the Toronto International Film Festival next month [Sept. 2014].

David features some trailers for both movies and additional information.

Interestingly, the science play focuses on the friendship between a female UK scientist and her former student, Margaret Thatcher (a UK Prime Minister). From an Aug. 13, 2014 Alice Bell posting on the Guardian science blog network (Note: Links have been removed),

Adam Ganz’s new play – The Chemistry Between Them, to be broadcast on Radio 4 this month – explores one of the most intriguing friendships in the history of science and politics: Margaret Thatcher and Dorothy Hodgkin.

As well as winning the Nobel Prize in Chemistry for her pioneering scientific work on the structures of proteins, Hodgkin was a left-wing peace campaigner who was awarded the Soviet equivalent of the Nobel Peace Prize, the Order of Lenin. Hardly Thatcher’s type, you might think. But Hodgkin was Thatcher’s tutor at university, and the relationships between science, politics and women in high office are anything but straightforward.

I spoke to Ganz about his interest in the subject, and started by asking him to tell us more about the play.

… they stayed friends throughout Dorothy’s life. Margaret Thatcher apparently had a photo of Dorothy Hodgkin in Downing Street, and they maintained a kind of warm relationship. The play happens in two timescales – one is a meeting in 1983 in Chequers where Dorothy came to plead with Margaret to take nuclear disarmament more seriously at a time when Cruise missiles and SS20s were being stationed in Europe. In fact I’ve set it – I’m not sure of the exact date – shortly after the Korean airliner was shot down, when the Russians feared Nato were possibly planning a first strike. And that is intercut with the time when Margaret is studying chemistry and looking at her journey; what she learned at Somerville, but especially what she learned from Dorothy.

Here’s a link to the BBC Radio 4 webpage for The Chemistry Between Them. I gather the broadcast will be Wednesday, Aug. 20, 2014 at 1415 hours GMT.

Chemistry and authentication of a Lawren Harris painting

The final item for this posting concerns Canadian art, chemistry, and the quest to prove the authenticity of a painting. Roberta Staley, editor of Canadian Chemical News (ACCN), has written a concise technical story about David Robertson’s quest to authenticate a painting he purchased some years ago,

Fourteen years ago, David Robertson of Delta, British Columbia was holidaying in Ontario when he stopped at a small antique shop in the community of Bala, two hours north of Toronto in cottage country. An unsigned 1912 oil painting caught his attention. Thinking it evocative of a Group of Seven painting, Robertson paid the asking price of $280 and took it home to hang above his fireplace.

Roberta has very kindly made it available as a PDF: ChemistryNews_Art.Mystery.Group.7. It will also be available online at the Canadian Chemical News website soon. (It’s not in the July/August 2014 issue.)

For anyone who might recognize the topic, I wrote a sprawling five-part series (over 5000 words) on the story starting with part one. Roberta’s piece is 800 words and offers her account of the tests for both Autumn Harbour and the authentic Harris painting, Hurdy Gurdy. I was able to attend only one of them (Autumn Harbour).

David William Robertson, Autumn Harbour’s owner, has recently (I received a notice on Aug. 13, 2014) updated his website with all of the scientific material and points of authentication that he feels prove his case.

Have a very nice weekend!

British Columbia Day (in Canada) kickoff with Baba Brinkman’s Kickstarter campaign and a science rap

This year’s BC (British Columbia) Day is today, Aug. 4, 2014*. In celebration I am posting a number of fun items, all to do with science and none with nanotechnology, although one item does feature ‘nano’ in the title.

First off, BC-born Baba Brinkman reports back from the 2014 Edinburgh Fringe Festival, where he is previewing his new ‘science’ rap,

Greetings from the Edinburgh Fringe Festival! Today I performed my second Rap Guide to Religion preview at the Gilded Balloon, and this afternoon I launched my Kickstarter campaign to fund the creation of an animated rap album by the same name. I already have eight songs written and recorded, and I want to create another 6-8 for the full album, and then commission animators to produce a series of animated shorts to bring the story to life. The campaign will run for precisely 40 days and 40 nights, and I’m excited to see that we’re over $1K already, just 12 hours in!

The Rap Guide to Religion is my latest “peer reviewed rap” album and show, detailing the story of how religion and human evolution coincide. I’m summarizing work from the field of “evolutionary religious studies” in rap form both because I find it fascinating and also because I think an appreciation of how and why religion evolves can help to rebuild some burnt bridges between religious groups and between believers and nonbelievers.

You can stream three of the first eight songs from my site at music.bababrinkman.com, and all eight comprise a short “album preview” EP I put together for the fringe, which will be exclusively available to Kickstarter backers. The opening track “Religion Evolves” offers a pretty good overview of my personal perspective as well as the questions I want to explore with the record. …

Before moving on to the Kickstarter information, here’s what David Bruggeman had to say about the new work and about supporting Baba’s projects in a July 31, 2014 posting on his Pasco Phronesis blog,

… You can also listen to two tracks from the album (if you contribute, you will receive downloads of all eight tracks).  My favorite of the two is “Religion Evolves”.

The usual assortment of rewards (copies of the album, t-shirts, custom raps) is available for whatever you’d be willing to contribute.  My past experience with supporting his projects allows me to say that he will deliver.  If you want proof, look for me at 2:53 in his video for “Artificial Selection”

Baba’s Kickstarter campaign, titled The Rap Guide to Religion (Animated Rap Album), has a goal of $20,000,

An animated rap album about the evolutionary origins of religion. It’s time to eff with the ineffable!

Have you ever helped to crowdfund a rap album? How about a rap album that communicates SCIENCE? Or an ANIMATED rap album about the scientific study of RELIGION? Well, that’s what I’m working on right now, with the help of some friends.

Theologians and philosophers have sought the meaning and purpose of life for thousands of years, often finding it in religion. Then Darwin’s theory of evolution turned the world upside down. The supernatural was discarded as the source of answers to the natural world and replaced by the blind force of evolution. And now, with decades of scientific research on hand, we can finally make sense of religion using the tools of evolutionary thinking.

The field is called “Evolutionary Religious Studies” and I’m using my talent and love of rap and science to share this research with a wide audience by recording a rap album on the subject. I’m also teaming up with an amazing group of animators and illustrators led by Dave Anderson from http://bloodsausage.co.uk to create a series of animated shorts (approximately 20 minutes long in total) based on the album, so we can make the songs maximally entertaining and accessible.

There is a nine-second sample of an animated music rap from the Rap Guide to Religion album on the campaign page. Surprisingly, Baba and his colleague have not made the sample available for embedding elsewhere, so you’ll have to go there to see it.

* I failed to properly schedule publication (I forgot to change the date) of this post and so it bears an Aug. 1, 2014 publication date. Today is Aug. 15, 2014.

Tim Blais and A Capella Science

Thanks to David Bruggeman’s July 16, 2014 ‘musical science’ posting on his Pasco Phronesis blog for information about another Canadian ‘science musician’. Tim Blais has been producing science music videos for almost two years now. His first video, posted on YouTube in August 2012, featured an Adele tune, ‘Rolling in the Deep’, sung to lyrics about the Higgs boson (‘Rolling in the Higgs’),

He shares the text of the lyrics (from http://www.youtube.com/watch?v=VtItBX1l1VY&list=UUTev4RNBiu6lqtx8z1e87fQ),

There’s a collider under Geneva
Reaching new energies that we’ve never achieved before
Finally we can see with this machine
A brand new data peak at 125 GeV
See how gluons and vector bosons fuse
Muons and gamma rays emerge from something new
There’s a collider under Geneva
Making one particle that we’ve never seen before

The complex scalar
Elusive boson
Escaped detection by the LEP and Tevatron
The complex scalar
What is its purpose?
It’s got me thinking

Chorus:
We could have had a model (Particle breakthrough, at the LHC)
Without a scalar field (5-sigma result, could it be the Higgs)
But symmetry requires no mass (Particle breakthrough, at the LHC)
So we break it, with the Higgs (5-sigma result, could it be the Higgs)

Baby I have a theory to be told
The standard model used to discover our quantum world
SU(3), U(1), SU(2)’s our gauge
Make a transform and the equations shouldn’t change

The particles then must all be massless
Cause mass terms vary under gauge transformation
The one solution is spontaneous
Symmetry breaking

Roll your vacuum to minimum potential
Break your SU(2) down to massless modes
Into mass terms of gauge bosons they go
Fermions sink in like skiers into snow

Lyrics and arrangement by Tim Blais and A Capella Science
Original music by Adele

In a Sept. 17, 2012 article by Ethan Yang for The McGill Daily (McGill University, Montréal, Québec), Blais describes his background and inspiration,

How does a master’s physics student create a Higgs boson-based parody of Adele’s “Rolling in the Deep” that goes viral and gets featured in popular science magazines and blogs? We sat down with Tim Blais to learn more about the personal experiences leading to his musical and scientific project, “A Capella Science”.

McGill Daily: Could you tell us a little bit about yourself: where you’re from, your childhood, and other experiences that in hindsight you think might have led you to where you are now?
Tim Blais: I grew up in a family of five in the little town of Hudson, Quebec, twenty minutes west of the island of Montreal. My childhood was pretty full of music; I started experimenting with the piano, figuring out songs my older siblings were playing, when I was about four, and soon got actual piano lessons. My mom also ran, and continues to run, our local church choir, so from the time I was three I was singing in front of people as well. Also at about three or four a kid in my preschool introduced me to Bill Nye the Science Guy, which became the only TV I watched for about six years. After kindergarten I didn’t go to school until Grade 10, but was homeschooled by my parents. We had a very multifaceted way of learning [...] that I think allowed me to see the big picture of things without getting bogged down in the horrible little details that are often the stumbling block when you start learning something. That gave me a fascination with science that’s essentially carried me through a science DEC and one-and-a-half university degrees. But my parents have always been super cool about not pressuring us kids to be anything in particular, and now to show for it they’ve got an emerging rock star – my brother, Tom; a dedicated speech pathologist – my sister, Mary-Jane; and me, researcher in incomprehensible physics and recently popular internet fool. I think they did alright.

Since 2012, Blais has graduated with a master’s in physics and is now devoting himself to life as a musician (from a 2013 [?] posting on redefineschool.com),

Blais has just finished up his master’s degree program at McGill, and he says he’s putting academia aside for a while. “I’ve been in school all my life so I’m switching gears and being a musician this year!” he tweeted. And that career choice is just fine by McGill theoretical physicist Alex Maloney, Blais’ faculty adviser.

To bring us up-to-date with Blais, David has featured the latest A Capella Science music video titled: ‘Eminemium (Choose Yourself)’ in his July 16, 2014 ‘musical science’ posting on the Pasco Phronesis blog.

One last tidbit: Blais will be appearing at the Beakerhead ‘festival’ in Calgary, Alberta (Sept. 10 – 14, 2014). Specifically, he will be at (from the TELUS Sept. 11, 2014 event page):

TELUS Spark Adults Only Night
September 11 [2014] @ 6:00 pm – 10:00 pm
[TELUS Spark Adults Only Night]

Mark your calendar for this special Beakerhead-themed adult night at TELUS Spark Science Centre. Meet the Festo Automation folks from Germany and see their mind-boggling biomechanical creatures up close. Are you also a fan of the internet sensation A Capella Science Bohemian Gravity? Meet the maker, Tim Blais, here in Calgary for Beakerhead.

This event is included with Admission and Membership. TOP TIP: Skip the queue with advance tickets. [go to TELUS event page to buy tickets]

You can find out more about A Capella Science on its Facebook page or via its Twitter feed. For more about Beakerhead events, go here.

Baba Brinkman’s ‘off the top’ neuroscience improv and other raps

Provided you live in New York City or are visiting at the right time, there’s a free performance by Baba Brinkman and others (from the Off The Top: The Neuroscience of Improv Eventbrite registration page),

Off The Top: The Neuroscience of Improv
The Rockefeller University Science Outreach Program
Wednesday, July 23, 2014 from 7:00 PM to 9:00 PM (EDT)
New York, NY [emphasis mine]

Here’s a description of the performance and performers (Note: Berlin and Brinkman are married to each other),

Neuroscientist Dr. Heather Berlin teams up with science rapper and freestyle fanatic Baba Brinkman to explore the brain basis of spontaneous creativity. Brought to you by the prefrontal cortex, and featuring special guest performers, this is a celebration of the science and stagecraft behind life’s unforgettable moments of unscripted gold.

Held in The Rockefeller University’s iconic Caspary Auditorium, this event will expertly mash up pop culture, hip hop, and neuroscience. Guests will experience an accessible conversation while being entertained by some of NYC’s own hip hop performers.

About the Performers:

Heather Berlin, PhD is an American neuroscientist focusing on brain-behavior relationships affecting the prevention and treatment of psychiatric disorders. She is also interested in the neural basis of consciousness and dynamic unconscious processes.

Baba Brinkman is a Canadian rapper, poet and playwright best known for recordings and performances that combine hip hop music with literature, theatre, and science.

More special guests to be named!

For anyone unfamiliar with Rockefeller University (this list includes me) there’s this from their About The Rockefeller University webpage (Note: A link has been removed),

The Rockefeller University is a world-renowned center for research and graduate education in the biomedical sciences, chemistry, bioinformatics and physics. The university’s 75 laboratories conduct both clinical and basic research and study a diverse range of biological and biomedical problems with the mission of improving the understanding of life for the benefit of humanity.

Founded in 1901 by John D. Rockefeller, the Rockefeller Institute for Medical Research was the country’s first institution devoted exclusively to biomedical research. The Rockefeller University Hospital was founded in 1910 as the first hospital devoted exclusively to clinical research. In the 1950s, the institute expanded its mission to include graduate education and began training new generations of scientists to become research leaders around the world. In 1965, it was renamed The Rockefeller University.

The university does have a ‘science’ Outreach webpage which features a number of initiatives for summer 2014,

Getting back to Baba Brinkman, he’s quite busy preparing a new show and getting ready to present it and two others* at the 2014 Edinburgh Fringe Festival as per his July 11, 2014 announcement,

Theatre making is quite the trial-by-fire! I’ve spent the past ten 18-hour days writing and rehearsing and recording and rewriting the script for The Rap Guide to Religion, which is set to premiere at the Edinburgh Fringe Festival starting July 30th, and I need your help to spread the word! Below you will find links to the three different shows I’m performing in at the Fringe, and I encourage (aka beg) you to click on each one and hit the link to “like” them on facebook. Or, if you know anyone coming to the Fringe, please send them a recommendation.

The Rap Guide to Religion explores the evolutionary origins of religiosity.

The Canterbury Tales Remixed adapts Chaucer’s Tales for the modern ear and era.

Off The Top adventures in the neuroscience of creativity and improvisation.

Also, calling all New Yorkers! There will be two preview performances of Rap Guide to Religion next week, July 15/16 [2014], at the East to Edinburgh festival, details here. This will be the first-ever staging of a brand new production, which is still very much a work in progress, so come if you want to catch a glimpse of the process rather than the product.

So to sum this up, there’s one free neuroscience rap show at Rockefeller University and previews (cheaper tickets) of the new ‘religious rap’. Then, Brinkman will be taking three shows (Rap Guide to Religion, The Canterbury Tales Remixed, and Off The Top) to Scotland’s Edinburgh Fringe Festival.

* ‘shows’ removed from sentence to ensure better grammar on July 14, 2014 at 12:25 pm PDT.

Music on the web, a spider’s web, that is

I was expecting to see Markus Buehler and MIT (Massachusetts Institute of Technology) mentioned in this latest work on spiderwebs and music. Surprise! The research is from three universities in the UK, as per a June 3, 2014 news item on ScienceDaily,

Spider silk transmits vibrations across a wide range of frequencies so that, when plucked like a guitar string, its sound carries information about prey, mates, and even the structural integrity of a web.

The discovery was made by researchers from the Universities of Oxford, Strathclyde, and Sheffield who fired bullets and lasers at spider silk to study how it vibrates. They found that, uniquely, when compared to other materials, spider silk can be tuned to a wide range of harmonics. The findings, to be reported in the journal Advanced Materials, not only reveal more about spiders but could also inspire a wide range of new technologies, such as tiny light-weight sensors.

A June 3, 2014 University of Oxford news release (also on EurekAlert), which originated the news item, explains the research and describes how it was conducted (firing bullets?),

‘Most spiders have poor eyesight and rely almost exclusively on the vibration of the silk in their web for sensory information,’ said Beth Mortimer of the Oxford Silk Group at Oxford University, who led the research. ‘The sound of silk can tell them what type of meal is entangled in their net and about the intentions and quality of a prospective mate. By plucking the silk like a guitar string and listening to the ‘echoes’ the spider can also assess the condition of its web.’

This quality is used by the spider in its web by ‘tuning’ the silk: controlling and adjusting both the inherent properties of the silk, and the tensions and interconnectivities of the silk threads that make up the web. To study the sonic properties of the spider’s gossamer threads the researchers used ultra-high-speed cameras to film the threads as they responded to the impact of bullets. [emphasis mine] In addition, lasers were used to make detailed measurements of even the smallest vibration.

‘The fact that spiders can receive these nanometre vibrations with organs on each of their legs, called slit sensillae, really exemplifies the impact of our research about silk properties found in our study,’ said Dr Shira Gordon of the University of Strathclyde, an author involved in this research.

‘These findings further demonstrate the outstanding properties of many spider silks that are able to combine exceptional toughness with the ability to transfer delicate information,’ said Professor Fritz Vollrath of the Oxford Silk Group at Oxford University, an author of the paper. ‘These are traits that would be very useful in light-weight engineering and might lead to novel, built-in ‘intelligent’ sensors and actuators.’

Dr Chris Holland of the University of Sheffield, an author of the paper, said: ‘Spider silks are well known for their impressive mechanical properties, but the vibrational properties have been relatively overlooked and now we find that they are also an awesome communication tool. Yet again spiders continue to impress us in more ways than we can imagine.’

Beth Mortimer said: ‘It may even be that spiders set out to make a web that ‘sounds right’ as its sonic properties are intimately related to factors such as strength and flexibility.’

The research paper has not yet been published in Advanced Materials (I checked this morning, June 4, 2014).

However, there is this video from the researchers,

As for Markus Buehler’s work at MIT, you can find out more in my Nov. 28, 2012 posting, Producing stronger silk musically.

Older, Tom McFadden, and a chance to crowdsource a science rap video

My source for almost all things science and music (and, often, pop culture), David Bruggeman announced this in a May 29, 2014 post on his Pasco Phronesis blog (Note: A link has been removed),

Tom [McFadden] would like your help, because he wants to remake the video with contributions from the ‘crowd.’  Between now and June 30 [2014], you can submit a visual for a minimum of one line of the song.

I’ll describe more about McFadden’s work in a moment, but first, here’s the video of his ‘Older’ science rap,

Here’s a little more information about this latest McFadden project, from a May 27, 2014 post on his Science with Tom [McFadden] blog,

Introducing “Older”, a parody of Drake’s “Over”, about science as a process rather than as a body of facts.

If you are a science student of any age, a teacher, a scientist, or a science lover, I want you to submit your visuals for some part of this video. (And if you’re a science teacher, this is a fun end of the year activity for your students).

Please share the song/competition with anyone who may be interested, and tweet about it using #ScienceFolder.

The contest deadline is June 30, 2014. The Grand Prize is a performance of a full science rap show by Tom McFadden. I’m unclear as to whether or not he will travel outside the US; regardless, it looks like a fun project. From McFadden’s May 27, 2014 post,

VISUALS: You have lots of creative freedom here. Your visuals can be drawings, animations, stop-motion, shots of you rapping with props, or anything you can dream up. If you’re short on time, you can even just submit a photo of you with your science folder or lab notebook.

LENGTH OF SUBMISSION: If you want to be considered for the grand prize, you need to submit at least one line of the song (for example, you could choose “Teacher talking. Tympanic membrane swayin’” and come up with a visual for that line). You are welcome to submit visuals for multiple lines, for a full verse, a chorus, or for the whole song. If you are working as a class, you can have different students in charge of different lines.

There are additional details in the post.

I have more information about McFadden in a March 28, 2013 posting in the context of his Brahe’s Battles Kickstarter project,

I can’t resist the science rap stories David Bruggeman has been highlighting on his Pasco Phronesis blog. In his Mar. 26, 2013 posting, David provides some scoop about Tom McFadden’s Kickstarter project, Battle Rap Histories of Epic Science (Brahe’s Battles),

After Fulbright work in New Zealand and similar efforts in other countries, McFadden is back in the San Francisco area helping middle school students develop raps for science debates.  The project is called “Battle Rap Histories of Epic Science” (BRAHE’S Battles) and if fully funded, it would support video production for battle raps on various scientific debates in five schools.

This was a successful Kickstarter project as noted in my Aug. 19, 2013 post,

Now on to Tom McFadden and his successful crowdfunding campaign Battle Rap Histories of Epic Science (Brahe’s Battles), which was featured in my Mar. 28, 2013 posting. Now, David Bruggeman provides an update in his Aug. 16, 2013 posting on the Pasco Phronesis blog,

Tom McFadden’s Brahe’s B.A.T.T.L.E.S. project has dropped two nuggets of video goodness of late, one of which is racing through the interwebs.  A conceptual cousin of the New York City-based Science Genius project, McFadden’s project centers around scientific matters of debate, if not controversy. First one out of the chute involves the matter of Rosalind Franklin and her under-credited role in developing the model of DNA.

I really meant it when I said David Bruggeman is my source.

Good luck to all the contest entrants!

 

The human body as a musical instrument: performance at the University of British Columbia on April 10, 2014

It’s called The Bang! Festival of interactive music, with performances of one kind or another scheduled throughout the day on April 10, 2014 (12 pm: MUSC 320; 1:30 pm: Grad Work; 2 pm: Research) and a finale featuring the Laptop Orchestra at 8 pm, all at the University of British Columbia’s (UBC) School of Music (Barnett Recital Hall on the Vancouver campus, Canada).

Here’s more about Bob Pritchard, professor of music, and the students who have put this programme together (from an April 7, 2014 UBC news release; Note: Links have been removed),

Pritchard [Bob Pritchard], a professor of music at the University of British Columbia, is using technologies that capture physical movement to transform the human body into a musical instrument.

Pritchard and the music and engineering students who make up the UBC Laptop Orchestra wanted to inject more human performance in digital music after attending one too many uninspiring laptop music sets. “Live electronic music can be a bit of an oxymoron,” says Pritchard, referring to artists gazing at their laptops and a heavy reliance on backing tracks.

“Emerging tools and techniques can help electronic musicians find more creative and engaging ways to present their work. What results is a richer experience, which can create a deeper, more emotional connection with your audience.”

The Laptop Orchestra, which will perform a free public concert on April 10, is an extension of a music technology course at UBC’s School of Music. Comprised of 17 students from Arts, Science and Engineering, its members act as musicians, dancers, composers, programmers and hardware specialists. They create adventurous electroacoustic music using programmed and acoustic instruments, including harp, piano, clarinet and violin.

Despite its name, surprisingly few laptops are actually touched onstage. “That’s one of our rules,” says Pritchard, who is helping to launch UBC’s new minor degree in Applied Music Technology in September with Laptop Orchestra co-director Keith Hamel. “Avoid touching the laptop!”

Instead, students use body movements to trigger programmed synthetic instruments or modify the sound of their live instruments in real-time. They strap motion sensors to their bodies and instruments, play wearable iPhone instruments, swing Nintendo Wiis or PlayStation Moves, while Kinect video cameras from Microsoft Xboxes track their movements.

“Adding movement to our creative process has been awesome,” says Kiran Bhumber, a fourth-year music student and clarinet player. The program helped attract her back to Vancouver after attending a performing arts high school in Toronto. “I really wanted to do something completely different. When I heard of the Laptop Orchestra, I knew it was perfect for me. I begged Bob to let me in.”

The Laptop Orchestra has partnered with UBC’s Dept. of Electrical and Computer Engineering (from the news release),

The engineers come with expertise in programming and wireless systems and the musicians bring their performance and composition chops, and program code as well.

Besides creating their powerful music, the students have invented a series of interfaces and musical gadgets. The first is the app sensorUDP, which transforms musicians’ smartphones into motion sensors. Available in the Android app store and compatible with iPhones, it allows performers to layer up to eight programmable sounds and modify them by moving their phone.

Music student Pieteke MacMahon modified the app to create an iPhone Piano, which she plays on her wrist, thanks to a mount created by engineering classmates. As she moves her hands up, the piano notes go up in pitch. When she drops her hands, the sound gets lower, and a delay effect increases if her palm faces up. “Audiences love how intuitive it is,” says the composition major. “It creates music in a way that really makes sense to people, and it looks pretty cool onstage.”
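
The mapping described in the news release (hand height controlling pitch, palm orientation controlling a delay effect) is easy to sketch in code. The following Python fragment is my own illustration of that general idea, not the actual sensorUDP or iPhone Piano software, and the parameter names and ranges are my assumptions,

```python
# Illustrative sketch only: not the sensorUDP app or the iPhone Piano code,
# just a guess at the kind of mapping described above. The input names and
# ranges are assumptions.

def map_motion_to_sound(hand_height, palm_up_amount, low_note=48, high_note=84):
    """Convert normalized motion readings into a MIDI note and a delay mix.

    hand_height    -- 0.0 (hand lowered) to 1.0 (hand raised)
    palm_up_amount -- 0.0 (palm facing down) to 1.0 (palm facing up)
    """
    # Raising the hand raises the pitch: interpolate across a MIDI note range.
    midi_note = round(low_note + hand_height * (high_note - low_note))
    # Turning the palm upward increases the delay effect.
    delay_mix = max(0.0, min(1.0, palm_up_amount))
    return midi_note, delay_mix

if __name__ == "__main__":
    # Hand held fairly high with the palm partly turned up.
    note, delay = map_motion_to_sound(hand_height=0.8, palm_up_amount=0.4)
    print(f"MIDI note {note}, delay mix {delay:.2f}")
```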

Here’s a video of the iPhone Piano (aka PietekeIPhoneSensor) in action,

The members of the Laptop Orchestra have travelled to collaborate internationally (Note: Links have been removed),

Earlier this year, the ensemble’s unique music took them to Europe. The class spent 10 days this February in Belgium where they collaborated and performed in concert with researchers at the University of Mons, a leading institution for research on gesture-tracking technology.

The Laptop Orchestra’s trip was sponsored by UBC’s Go Global and Arts Research Abroad, which together send hundreds of students on international learning experiences each year.

In Belgium, the ensemble’s dancer Diana Brownie wore a body suit covered head-to-toe in motion sensors as part of a University of Mons research project on body movement. The researchers – one a former student of Pritchard’s – will use the suit’s data to help record and preserve cultural folk dances.

For anyone who needs directions, here’s a link to UBC’s Vancouver Campus Maps, Directions, & Tours webpage.

Call for papers: conference on sound art curation

It’s not exactly data sonification (see my Feb. 7, 2014 posting about sound as a way to represent research data), but there’s a call for papers (deadline March 31, 2014) for a conference focused on curating sound art. Lanfranco Aceti, an academic, artist and curator whom I met some years ago at a conference, sent me a March 20, 2014 announcement,

OCR (Operational and Curatorial Research in Art, Design, Science and Technology) is launching a series of international conferences with international partners.

Sound Art Curating is the first conference to take place in London, May 15-17, 2014 at Goldsmiths and at the Courtauld Institute of Art [both located in London, England].

The call for paper will close March 31, 2014 and it can be accessed at this link:
http://ocradst.org/blog/2014/01/25/histories-theories-and-practices-of-sound-art/

The conference website is available at this link: http://ocradst.org/soundartcurating/

I did get more information about the OCR from their About page,

Operational and Curatorial Research in Contemporary Art, Design, Science and Technology (OCR) is a research center that focuses on research in the fine arts. Its projects are characterized by elements of interdisciplinarity and transdisciplinarity. OCR engages with public and private institutions worldwide in order to foster innovation and best practices through collaborations and synergies.

OCR has two international outlets: the Media Exhibition Platform (MEP), a platform for peer reviewed exhibitions, and Contemporary Art and Culture (CAC), a peer-reviewed publishing platform for academic texts, artists’ books and catalogs.

Lanfranco Aceti is the founder and director of OCR, MEP and CAC, and has worked in the field for over twenty years.

Here’s more about what the organizers are looking for from the Call for Papers webpage,

Traditionally, the curator has been affiliated to the modern museum as the persona who manages an archive, and arranges and communicates knowledge to an audience, according to fields of expertise (art, archaeology, cultural or natural history etc.). However, in the later part of the 20th century the role of the curator changes – first on the art-scene and later in other more traditional institutions – into a more free-floating, organizational and ’constructive’ activity that allows the curator to create and design new wider relations, interpretations of knowledge modalities of communication and systems of dissemination to the wider public.

This shift is parallel to a changing role of the artist, that from producer becomes manager of its own archives, structures for displays, arrangements and recombinatory experiences that design interactive or analog journeys through sound artworks and soundscapes. Museums and galleries, following the impact of sound artworks in public spaces and media based festivals, become more receptive to aesthetic practices that deny the ‘direct visuality’ of the image and bypass, albeit partially, the need for material and tangible objects. Sound art and its related aesthetic practices re-design ways of seeing, imaging and recalling the visual in a context that is not sensory deprived but sensory alternative.

This is a call for studies into the histories, theories and practices of sound art production and sound art curating – where the creation is to be considered not solely that of a single material but of the entire sound art experience and performative elements.

We solicit and encourage submissions from practitioners and theoreticians on sound art and curating that explore and are linked to issues related to the following areas of interest:

  • Curating Interfaces for Sound + Archives
  • Methodologies of Sound Art Curating
  • Histories of Sound Art Curating
  • Theories of Sound Art Curating
  • Practices and Aesthetics of Sound Art
  • Sound in Performance
  • Sound in Relation to Visuals

Chairs: Lanfranco Aceti, Janis Jefferies, Morten Søndergaard and Julian Stallabrass

Conference Organizers: James Bulley, Jonathan Munro, Irene Noy and Ozden Sahin

The event is supported by LARM [Danish interdisciplinary radiophonic project; Note: website is mixed Danish and English language], Kasa Gallery, Goldsmiths, the Courtauld Institute of Art and Sabanci University.

With the participation and support of the Sonics research special interest group at Goldsmiths, chaired by Atau Tanaka and Julian Henriques.

The event is part of the Graduate Festival at Goldsmiths and the Graduate research projects at the Courtauld Institute of Art.

250 words abstract submissions. Please send your submissions to: [email protected]

Deadline: March 31, 2014.

Good luck!

Data sonification: listening to your data instead of visualizing it

Representing data through music is how a Jan. 31, 2014 item in the BBC news magazine describes a Voyager 1 & 2 spacecraft duet, a data sonification project discussed* in a BBC Radio 4 programme,

Musician and physicist Domenico Vicinanza has described to BBC Radio 4’s Today programme the process of representing information through music, known as “sonification”. [includes a sound clip and interview with Vicinanza]

A Jan. 22, 2014 GÉANT news release describes the project in more detail,

GÉANT, the pan-European data network serving 50 million research and education users at speeds of up to 500Gbps, recently demonstrated its power by sonifying 36 years’ worth of NASA Voyager spacecraft data and converting it into a musical duet.

The project is the work of Domenico Vicinanza, Network Services Product Manager at GÉANT. As a trained musician with a PhD in Physics, he also takes the role of Arts and Humanities Manager, exploring new ways for representing data and discovery through the use of high-speed networks.

“I wanted to compose a musical piece celebrating the Voyager 1 and 2 *together*, so used the same measurements (proton counts from the cosmic ray detector over the last 37 years) from both spacecrafts, at the exactly same point of time, but at several billions of Kms of distance one from the other.

I used different groups of instruments and different sound textures to represent the two spacecrafts, synchronising the measurements taken at the same time.”

The result is an up-tempo string and piano orchestral piece.

You can hear the duet, which has been made available by the folks at GÉANT,

The news release goes on to provide technical details about the composition,

To compose the spacecraft duet, 320,000 measurements were first selected from each spacecraft, at one hour intervals. Then that data was converted into two very long melodies, each comprising 320,000 notes using different sampling frequencies, from a few KHz to 44.1 kHz.

The result of the conversion into waveform, using such a big dataset, created a wide collection of audible sounds, lasting just a few seconds (slightly more than 7 seconds at 44.1kHz) to a few hours (more than 5 hours using 1024Hz as a sampling frequency). A certain number of data points, from a few thousand to 44,100, were each “converted” into 1 second of sound.

Using the grid computing facilities at EGI, GÉANT was able to create the duet live at the NASA booth at Super Computing 2013 using its superfast network to transfer data to/from NASA.
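
For anyone curious about how this kind of direct audification works in practice, here’s a minimal Python sketch of the general idea: a long series of measurements is rescaled and written out as audio samples, so the sampling frequency you choose decides how many data points become one second of sound. This is my own illustration, not GÉANT’s code, and it uses synthetic data rather than the Voyager proton counts,

```python
# Minimal audification sketch (not the GEANT/NASA code): a series of
# measurements is rescaled to the 16-bit range and written out as a waveform,
# so the sampling frequency decides how many data points make up one second
# of sound. The data below is synthetic, not Voyager data.
import math
import struct
import wave

def audify(measurements, sample_rate=44100, path="audified.wav"):
    """Write the measurement series to a mono 16-bit WAV file."""
    lo, hi = min(measurements), max(measurements)
    span = (hi - lo) or 1.0
    frames = bytearray()
    for value in measurements:
        # Rescale each measurement into the signed 16-bit sample range.
        sample = int(((value - lo) / span * 2.0 - 1.0) * 32767)
        frames += struct.pack("<h", sample)
    with wave.open(path, "w") as wav:
        wav.setnchannels(1)        # mono
        wav.setsampwidth(2)        # 16-bit samples
        wav.setframerate(sample_rate)
        wav.writeframes(bytes(frames))

if __name__ == "__main__":
    # Stand-in data: a slowly varying synthetic signal of 320,000 points.
    data = [math.sin(i / 50.0) + 0.1 * math.sin(i / 3.0) for i in range(320000)]
    # At 44.1 kHz, 44,100 points become one second of sound, so this series
    # lasts a little over seven seconds; a lower sampling frequency stretches
    # the same data over a much longer playing time.
    audify(data, sample_rate=44100)
```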

I think this detail from the news release gives one a different perspective on the accomplishment,

Launched in 1977, both Voyager 1 and Voyager 2 are now decommissioned but still recording and sending live data to Earth. They continue to traverse different parts of the universe, billions of kilometres apart. Voyager 1 left our solar system last year.

The research is more than an amusing way to pass the time (from the news release),

While this project was created as a fun, accessible way to demonstrate the benefit of research and education networks to society, data sonification – representing data by means of sound signals – is increasingly used to accelerate scientific discovery; from epilepsy research to deep space discovery.

I was curious to learn more about how data represented by sound signals is being used to accelerate scientific discovery, so I sent that question and another to Dr. Vicinanza via Tamsin Henderson of DANTE and received these answers,

(1) How does “representing data by means of sound signals” increasingly accelerate scientific discovery, “from epilepsy research to deep space discovery”? In a practical sense, how does one do this research? For example, do you sit down and listen to a file and intuit different relationships in the data?

Vision and visual representation is intrinsically limited to three dimensions. We all know how amazing is 3D cinema, but in terms of representation of complex information, this is as far as it gets. There is no 4D or 5D. We live in three dimensions.

Sound, on the other hand, does not have any limitation of this kind. We can continue overlapping sound layers virtually without limits and still retain the capability of recognising and understanding them. Think of an orchestra or a pop band, even if the musicians are playing all together we can actually follow the single instrument line (bass, drum, lead guitar, voice, ….) Sound is then particularly precious when dealing with multi-dimensional data since audification techniques.

In technical terms, auditory perception of complex, structured information could have several advantages in temporal, amplitude, and frequency resolution when compared to visual representations and often opens up possibilities as an alternative or complement to visualisation techniques. Those advantages include the capability of the human ear to detect patterns (detecting regularities), recognise timbres and follow different strands at the same time (i.e. the capability of following different instrument lines). This would offer, in a natural way, the opportunity of rendering different, interdependent variables onto sounds in such a way that a listener could gain relevant insight into the represented information or data.

In particular in the medical context, there have been several investigations using data sonification as a support tool for classification and diagnosis, from working on sonification of medical images to converting EEG to tones, including real-time screening and feedback on EEG signals for epilepsy.

The idea is to use sound to aggregate many “information layers”, many more than any graph or picture can represent and support the physician giving a more comprehensive representation of the situation.

(2) I understand that as you age certain sounds disappear from your hearing, e.g., people over 25 years of age are not able to hear above 15kHz. (Note: There seems to be some debate as to when these sounds disappear, after 30, after 20, etc.) Wouldn’t this pose an age restriction on the people who could access the research, or have I misunderstood what you’re doing?

No, there is actually no sensible reduction in the advantages of sonification with ageing. The only precaution is not to use too high frequencies (above 15 KHz) in the sonification and this is something that can be avoided without limiting the benefits of audification.

It is always good practice not to use excessively high frequencies since they are not always very well and uniformly perceived by everyone.

Our hearing works at its best in the region of KHz (1200Hz-3800Hz)
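
Before moving on, here’s a rough Python sketch of the parameter-mapping idea Dr. Vicinanza describes in his first answer: each variable in a multi-dimensional record gets its own ‘instrument line’ in its own pitch register, so a listener can follow several data streams at once the way one follows instruments in an ensemble. The variables, values and note registers below are invented for illustration; this is not code from any of the medical or Voyager projects,

```python
# Parameter-mapping sonification sketch: each variable becomes its own
# "instrument line" in its own pitch register. The variable names, values
# and registers are invented for illustration only.

REGISTERS = {
    "heart_rate":  (48, 60),   # line 1: low register
    "temperature": (60, 72),   # line 2: middle register
    "oxygenation": (72, 84),   # line 3: high register
}

def sonify_records(records, step_seconds=0.5):
    """Turn a list of multi-variable records into timed note events."""
    # Per-variable min/max so each line spreads across its full register.
    bounds = {name: (min(r[name] for r in records), max(r[name] for r in records))
              for name in REGISTERS}
    events = []
    for i, record in enumerate(records):
        t = i * step_seconds
        for name, (low_note, high_note) in REGISTERS.items():
            lo, hi = bounds[name]
            fraction = (record[name] - lo) / ((hi - lo) or 1.0)
            pitch = round(low_note + fraction * (high_note - low_note))
            events.append((t, name, pitch))   # (time, instrument line, MIDI note)
    return events

if __name__ == "__main__":
    fake_data = [
        {"heart_rate": 62, "temperature": 36.6, "oxygenation": 98},
        {"heart_rate": 75, "temperature": 36.9, "oxygenation": 96},
        {"heart_rate": 90, "temperature": 37.4, "oxygenation": 93},
    ]
    for event in sonify_records(fake_data):
        print(event)
```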

Thank you Dr. Vicinanza and Tamsin Henderson for this insight into representing data in multiple dimensions using sound and its application in research. And, thank you, too, for sharing a beautiful piece of music.

For the curious, I found some additional information about Dr. Vicinanza and his ‘sound’ work on his Nature Network profile page,

I am a composer, network engineer and researcher. I received my MSc and PhD degrees in Physics and studied piano, percussion and composition.

I worked as a professor of Sound Synthesis, Acoustics and Computer Music (Algorithmic Composition) at Conservatory of Music of Salerno (Italy).

I currently work as a network engineer in DANTE (www.dante.net) and chair the ASTRA project (www.astraproject.org) for the reconstruction of musical instruments by means of computer models on GÉANT and EUMEDCONNECT.

I am also the co-founder and the technical coordinator of the Lost Sound Orchestra project (www.lostsoundsorchestra.org).

Interests

As a composer and researcher I was always fascinated by the richness of the information coming from the Nature. I worked on the introduction of the sonification of seismic signals (in particular coming from active volcanoes) as a scientific tool, co-working with geophysicists and volcanologists.

I also study applications of grid technologies for music and visual arts and as a composer I took part to several concerts, digital arts performances, festivals and webcast.

My other interests include (aside with music) Argentine Tango and watercolors.

Projects

ASTRA (Ancient instruments Sound/Timbre Reconstruction Application)
www.astraproject.org

The ASTRA project is a multi disciplinary project aiming at reconstructing the sound or timbre of ancient instruments (not existing anymore) using archaeological data as fragments from excavations, written descriptions, pictures.

The technique used is the physical modeling synthesis, a complex digital audio rendering technique which allows modeling the time-domain physics of the instrument.

In other words the basic idea is to recreate a model of the musical instrument and produce the sound by simulating its behavior as a mechanical system. The application would produce one or more sounds corresponding to different configurations of the instrument (i.e. the different notes).

Lost Sounds Orchestra
www.lostsoundsorchestra.org

The Lost Sound Orchestra is the ASTRA project orchestra. It is a unique orchestra made by reconstructed ancient instrument coming from the ASTRA research activities. It is the first ensemble in the world composed of only reconstructed instruments of the past. Listening to it is like jumping into the past, in a sound world completely new to our ears.
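
As an aside, the ‘physical modeling synthesis’ ASTRA mentions treats an instrument as a vibrating mechanical system rather than a bank of recorded samples. ASTRA’s reconstructions are far more sophisticated than anything shown here, but the flavour of the technique can be suggested with the classic Karplus-Strong plucked-string algorithm; the Python sketch below is my own illustration of that textbook method, not ASTRA code,

```python
# A classic, minimal example of time-domain physical modelling synthesis:
# the Karplus-Strong plucked string. This is a textbook illustration, not
# ASTRA code; it only shows the idea of simulating an instrument as a
# mechanical/acoustic system rather than playing back recorded samples.
import random
import struct
import wave

def karplus_strong(frequency=220.0, duration=2.0, sample_rate=44100, decay=0.996):
    """Simulate a plucked string: a noise burst circulating in a delay line
    with averaging (the 'string'), which darkens and decays over time."""
    period = int(sample_rate / frequency)                         # delay-line length
    string = [random.uniform(-1.0, 1.0) for _ in range(period)]   # the 'pluck'
    samples = []
    for _ in range(int(duration * sample_rate)):
        samples.append(string[0])
        # Low-pass feedback: average the two oldest samples and recycle them.
        new_sample = decay * 0.5 * (string[0] + string[1])
        string = string[1:] + [new_sample]
    return samples

def write_wav(samples, path="pluck.wav", sample_rate=44100):
    with wave.open(path, "w") as wav:
        wav.setnchannels(1)        # mono
        wav.setsampwidth(2)        # 16-bit samples
        wav.setframerate(sample_rate)
        frames = b"".join(struct.pack("<h", int(s * 32767)) for s in samples)
        wav.writeframes(frames)

if __name__ == "__main__":
    write_wav(karplus_strong(frequency=220.0))   # a two-second plucked A3
```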

Since I haven’t had occasion to mention either GÉANT or DANTE previously, here’s more about those organizations and some acknowledgements from the news release,

About GÉANT

GÉANT is the pan-European research and education network that interconnects Europe’s National Research and Education Networks (NRENs). Together we connect over 50 million users at 10,000 institutions across Europe, supporting research in areas such as energy, the environment, space and medicine.

Operating at speeds of up to 500Gbps and reaching over 100 national networks worldwide, GÉANT remains the largest and most advanced research and education network in the world.

Co-funded by the European Commission under the EU’s 7th Research and Development Framework Programme, GÉANT is a flagship e-Infrastructure key to achieving the European Research Area – a seamless and open European space for online research – and assuring world-leading connectivity between Europe and the rest of the world in support of global research collaborations.

The network and associated services comprise the GÉANT (GN3plus) project, a collaborative effort comprising 41 project partners: 38 European NRENs, DANTE, TERENA and NORDUnet (representing the 5 Nordic countries). GÉANT is operated by DANTE on behalf of Europe’s NRENs.

About DANTE

DANTE (Delivery of Advanced Network Technology to Europe) is a non-profit organisation established in 1993 that plans, builds and operates large scale, advanced networks for research and education. On behalf of Europe’s National Research and Education Networks (NRENs), DANTE has built and operates GÉANT, a flagship e-Infrastructure key to achieving the European Research Area.

Working in cooperation with the European Commission and in close partnership with Europe’s NRENs and international networking partners, DANTE remains fundamental to the success of global research collaboration.

DANTE manages research and education (R&E) networking projects serving Europe (GÉANT), the Mediterranean (EUMEDCONNECT), Sub-Saharan Africa (AfricaConnect), Central Asia (CAREN) regions and coordinates Europe-China collaboration (ORIENTplus). DANTE also supports R&E networking organisations in Latin America (RedCLARA), Caribbean (CKLN) and Asia-Pacific (TEIN*CC). For more information, visit www.dante.net

Acknowledgements
NASA National Space Science Data Center and the Johns Hopkins University Voyager LEPC experiment.
Sonification credits
Mariapaola Sorrentino and Giuseppe La Rocca.

I hope one of these days I’ll have a chance to ask a data visualization expert whether they think it’s possible to represent multiple dimensions visually and whether or not some types of data are better represented by sound.

* ‘described’ replaced by ‘discussed’ to avoid repetition, Feb. 10, 2014. (Sometimes I’m miffed by my own writing.)

Using music to align your nanofibers

It’s always nice to feature a ‘nano and music’ research story, my Nov. 6, 2013 posting being, until now, the most recent. A Jan. 8, 2014 news item on Nanowerk describes Japanese researchers’ efforts with nanofibers (Note: A link has been removed),

Humans create and perform music for a variety of purposes, such as aesthetic pleasure, healing, religion, and ceremony. Accordingly, a scientific question arises: Can molecules or molecular assemblies interact physically with the sound vibrations of music? In the journal ChemPlusChem (“Acoustic Alignment of a Supramolecular Nanofiber in Harmony with the Sound of Music”), Japanese researchers have now revealed their physical interaction. When classical music was playing, a designed supramolecular nanofiber in a solution dynamically aligned in harmony with the sound of music.

Sound is vibration of matter, having a frequency, in which certain physical interactions occur between the acoustically vibrating media and solute molecules or molecular assemblies. Music is an art form consisting of the sound and silence expressed through time, and characterized by rhythm, harmony, and melody. The question of whether music can cause any kind of molecular or macromolecular event is controversial, and the physical interaction between the molecules and the sound of music has never been reported.

The Jan. 8, 2014 Chemistry Views article, which originated the news item, provides more detail,

Scientists working at Kobe University and Kobe City College of Technology, Japan, have now developed a supramolecular nanofiber, composed of an anthracene derivative, which can dynamically align by sensing acoustic streaming flows generated by the sound of music. Time course linear dichroism (LD) spectroscopy could visualize spectroscopically the dynamic acoustic alignments of the nanofiber in the solution. The nanofiber aligns upon exposure to the audible sound wave, with frequencies up to 1000 Hz, with quick responses to the sound and silence, and amplitude and frequency changes of the sound wave. The sheared flows generated around glass-surface boundary layer and the crossing area of the downward and upward flows allow shear-induced alignments of the nanofiber.
Music is composed of the multi complex sounds and silence, which characteristically change in the course of its playtime. The team, led by A. Tsuda, uses “Symphony No. 5 in C minor, First movement: Allegro con brio” written by Beethoven, and “Symphony No. 40 in G minor, K. 550, First movement”, written by Mozart in the experiments. When the classical music was playing, the sample solution gave the characteristic LD profile of the music, where the nanofiber dynamically aligned in harmony with the sound of music.

Here’s an image illustrating the scientists’ work with music,

[downloaded from http://www.chemistryviews.org/details/ezine/5712621/Musical_Molecules.html]

Here’s a link to and a citation for the paper,

Acoustic Alignment of a Supramolecular Nanofiber in Harmony with the Sound of Music by Ryosuke Miura, Yasunari Ando, Yasuhisa Hotta, Yoshiki Nagatani, Akihiko Tsuda, ChemPlusChem 2014.  DOI: 10.1002/cplu.201300400

This is an open access paper as of Jan. 8, 2014. If the above link does not work, try this.