Category Archives: Music

The human body as a musical instrument: performance at the University of British Columbia on April 10, 2014

It’s called The Bang! Festival of interactive music, with performances of one kind or another scheduled throughout the day on April 10, 2014 (12 pm: MUSC 320; 1:30 pm: Grad Work; 2 pm: Research) and a finale featuring the Laptop Orchestra at 8 pm at the University of British Columbia’s (UBC) School of Music (Barnett Recital Hall on the Vancouver campus, Canada).

Here’s more about Bob Pritchard, professor of music, and the students who have put this programme together (from an April 7, 2014 UBC news release; Note: Links have been removed),

Pritchard [Bob Pritchard], a professor of music at the University of British Columbia, is using technologies that capture physical movement to transform the human body into a musical instrument.

Pritchard and the music and engineering students who make up the UBC Laptop Orchestra wanted to inject more human performance in digital music after attending one too many uninspiring laptop music sets. “Live electronic music can be a bit of an oxymoron,” says Pritchard, referring to artists gazing at their laptops and a heavy reliance on backing tracks.

“Emerging tools and techniques can help electronic musicians find more creative and engaging ways to present their work. What results is a richer experience, which can create a deeper, more emotional connection with your audience.”

The Laptop Orchestra, which will perform a free public concert on April 10, is an extension of a music technology course at UBC’s School of Music. Comprised of 17 students from Arts, Science and Engineering, its members act as musicians, dancers, composers, programmers and hardware specialists. They create adventurous electroacoustic music using programmed and acoustic instruments, including harp, piano, clarinet and violin.

Despite its name, surprisingly few laptops are actually touched onstage. “That’s one of our rules,” says Pritchard, who is helping to launch UBC’s new minor degree in Applied Music Technology in September with Laptop Orchestra co-director Keith Hamel. “Avoid touching the laptop!”

Instead, students use body movements to trigger programmed synthetic instruments or modify the sound of their live instruments in real time. They strap motion sensors to their bodies and instruments, play wearable iPhone instruments, and swing Nintendo Wiis or PlayStation Moves, while Kinect cameras from Microsoft Xboxes track their movements.

“Adding movement to our creative process has been awesome,” says Kiran Bhumber, a fourth-year music student and clarinet player. The program helped attract her back to Vancouver after attending a performing arts high school in Toronto. “I really wanted to do something completely different. When I heard of the Laptop Orchestra, I knew it was perfect for me. I begged Bob to let me in.”

The Laptop Orchestra has partnered with UBC’s Dept. of Electrical and Computer Engineering (from the news release),

The engineers come with expertise in programming and wireless systems, and the musicians bring their performance and composition chops, as well as program code.

Besides creating their powerful music, the students have invented a series of interfaces and musical gadgets. The first is the app sensorUDP, which transforms musicians’ smartphones into motion sensors. Available in the Android app store and compatible with iPhones, it allows performers to layer up to eight programmable sounds and modify them by moving their phone.

Music student Pieteke MacMahon modified the app to create an iPhone Piano, which she plays on her wrist, thanks to a mount created by engineering classmates. As she moves her hands up, the piano notes go up in pitch. When she drops her hands, the sound gets lower, and a delay effect increases if her palm faces up. “Audiences love how intuitive it is,” says the composition major. “It creates music in a way that really makes sense to people, and it looks pretty cool onstage.”
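Out of curiosity, here’s a rough sketch of what a mapping like the one described above might look like in code. It’s only my own illustration (the scale, sensor ranges and function names are assumptions), not the Laptop Orchestra’s actual software: hand height picks the pitch and palm orientation sets the amount of delay.

```
# Hypothetical sketch of an iPhone-Piano-style gesture mapping (not the UBC code).
# Assumes a hand-height reading (0.0 = lowered, 1.0 = raised) and a palm
# orientation reading (0.0 = palm down, 1.0 = palm up).

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI note numbers, one octave

def map_gesture_to_sound(hand_height, palm_up):
    """Map a gesture reading to a MIDI note and a delay-effect amount."""
    # Higher hand -> higher pitch: choose a scale degree from the hand height.
    index = min(int(hand_height * len(C_MAJOR)), len(C_MAJOR) - 1)
    note = C_MAJOR[index]
    # Palm facing up -> stronger delay effect (0-100%).
    delay_mix = round(palm_up * 100)
    return note, delay_mix

if __name__ == "__main__":
    for height, palm in [(0.1, 0.0), (0.5, 0.2), (0.9, 1.0)]:
        note, delay = map_gesture_to_sound(height, palm)
        print(f"height={height:.1f} palm_up={palm:.1f} -> note {note}, delay {delay}%")
```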

Here’s a video of the iPhone Piano (aka PietekeIPhoneSensor) in action,

The members of the Laptop Orchestra have travelled to collaborate internationally (Note: Links have been removed),

Earlier this year, the ensemble’s unique music took them to Europe. The class spent 10 days this February in Belgium where they collaborated and performed in concert with researchers at the University of Mons, a leading institution for research on gesture-tracking technology.

The Laptop Orchestra’s trip was sponsored by UBC’s Go Global and Arts Research Abroad, which together send hundreds of students on international learning experiences each year.

In Belgium, the ensemble’s dancer Diana Brownie wore a body suit covered head-to-toe in motion sensors as part of a University of Mons research project on body movement. The researchers – one a former student of Pritchard’s – will use the suit’s data to help record and preserve cultural folk dances.

For anyone who needs directions, here’s a link to UBC’s Vancouver Campus Maps, Directions, & Tours webpage.

Call for papers: conference on sound art curation

It’s not exactly data sonification (see my Feb. 7, 2014 posting about sound as a way to represent research data) but there’s a call for papers (deadline March 31, 2014) for a conference focused on curating sound art. Lanfranco Aceti, an academic, artist and curator whom I met some years ago at a conference, sent me a March 20, 2014 announcement,

OCR (Operational and Curatorial Research in Art, Design, Science and Technology) is launching a series of international conferences with international partners.

Sound Art Curating is the first conference to take place in London, May 15-17, 2014 at Goldsmiths and at the Courtauld Institute of Art [both located in London, England].

The call for papers will close March 31, 2014 and it can be accessed at this link:
http://ocradst.org/blog/2014/01/25/histories-theories-and-practices-of-sound-art/

The conference website is available at this link: http://ocradst.org/soundartcurating/

I did get more information about the OCR from their About page,

Operational and Curatorial Research in Contemporary Art, Design, Science and Technology (OCR) is a research center that focuses on research in the fine arts. Its projects are characterized by elements of interdisciplinarity and transdisciplinarity. OCR engages with public and private institutions worldwide in order to foster innovation and best practices through collaborations and synergies.

OCR has two international outlets: the Media Exhibition Platform (MEP), a platform for peer reviewed exhibitions, and Contemporary Art and Culture (CAC), a peer-reviewed publishing platform for academic texts, artists’ books and catalogs.

Lanfranco Aceti is the founder and director of OCR, MEP and CAC, and has worked in the field for over twenty years.

Here’s more about what the organizers are looking for from the Call for Papers webpage,

Traditionally, the curator has been affiliated to the modern museum as the persona who manages an archive, and arranges and communicates knowledge to an audience, according to fields of expertise (art, archaeology, cultural or natural history etc.). However, in the later part of the 20th century the role of the curator changes – first on the art scene and later in other more traditional institutions – into a more free-floating, organizational and ‘constructive’ activity that allows the curator to create and design new wider relations, interpretations of knowledge, modalities of communication and systems of dissemination to the wider public.

This shift is parallel to a changing role of the artist, who from producer becomes manager of their own archives, structures for displays, arrangements and recombinatory experiences that design interactive or analog journeys through sound artworks and soundscapes. Museums and galleries, following the impact of sound artworks in public spaces and media-based festivals, become more receptive to aesthetic practices that deny the ‘direct visuality’ of the image and bypass, albeit partially, the need for material and tangible objects. Sound art and its related aesthetic practices re-design ways of seeing, imaging and recalling the visual in a context that is not sensory deprived but sensory alternative.

This is a call for studies into the histories, theories and practices of sound art production and sound art curating – where the creation is to be considered not solely that of a single material but of the entire sound art experience and performative elements.

We solicit and encourage submissions from practitioners and theoreticians on sound art and curating that explore and are linked to issues related to the following areas of interest:

  • Curating Interfaces for Sound + Archives
  • Methodologies of Sound Art Curating
  • Histories of Sound Art Curating
  • Theories of Sound Art Curating
  • Practices and Aesthetics of Sound Art
  • Sound in Performance
  • Sound in Relation to Visuals

Chairs: Lanfranco Aceti, Janis Jefferies, Morten Søndergaard and Julian Stallabrass

Conference Organizers: James Bulley, Jonathan Munro, Irene Noy and Ozden Sahin

The event is supported by LARM [Danish interdisciplinary radiophonic project; Note: website is mixed Danish and English language], Kasa Gallery, Goldsmiths, the Courtauld Institute of Art and Sabanci University.

With the participation and support of the Sonics research special interest group at Goldsmiths, chaired by Atau Tanaka and Julian Henriques.

The event is part of the Graduate Festival at Goldsmiths and the Graduate research projects at the Courtauld Institute of Art.

Abstract submissions of 250 words. Please send your submissions to: [email protected]

Deadline: March 31, 2014.

Good luck!

Data sonification: listening to your data instead of visualizing it

Representing data through music is how a Jan. 31, 2014 item in the BBC News Magazine describes a Voyager 1 & 2 spacecraft duet, a data sonification project discussed* in a BBC Radio 4 programme,

Musician and physicist Domenico Vicinanza has described to BBC Radio 4’s Today programme the process of representing information through music, known as “sonification”. [includes a sound clip and interview with Vicinanza]

A Jan. 22, 2014 GÉANT news release describes the project in more detail,

GÉANT, the pan-European data network serving 50 million research and education users at speeds of up to 500Gbps, recently demonstrated its power by sonifying 36 years’ worth of NASA Voyager spacecraft data and converting it into a musical duet.

The project is the work of Domenico Vicinanza, Network Services Product Manager at GÉANT. As a trained musician with a PhD in Physics, he also takes the role of Arts and Humanities Manager, exploring new ways for representing data and discovery through the use of high-speed networks.

“I wanted to compose a musical piece celebrating Voyager 1 and 2 *together*, so I used the same measurements (proton counts from the cosmic ray detector over the last 37 years) from both spacecraft, taken at exactly the same point in time but at several billions of kilometres of distance from one another.

I used different groups of instruments and different sound textures to represent the two spacecrafts, synchronising the measurements taken at the same time.”

The result is an up-tempo string and piano orchestral piece.

You can hear the duet, which has been made available by the folks at GÉANT,

The news release goes on to provide technical details about the composition,

To compose the spacecraft duet, 320,000 measurements were first selected from each spacecraft, at one-hour intervals. That data was then converted into two very long melodies, each comprising 320,000 notes, using different sampling frequencies from a few kHz to 44.1 kHz.

The result of the conversion into waveform, using such a big dataset, created a wide collection of audible sounds, lasting from just a few seconds (slightly more than 7 seconds at 44.1 kHz) to a few hours (more than 5 hours using 1024 Hz as a sampling frequency). A certain number of data points, from a few thousand to 44,100, were each “converted” into 1 second of sound.

Using the grid computing facilities at EGI, GÉANT was able to create the duet live at the NASA booth at Super Computing 2013 using its superfast network to transfer data to/from NASA.
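For readers who like to tinker, here’s a minimal audification sketch of the conversion described above: each measurement becomes one audio sample, so the chosen sampling frequency sets how long the clip lasts (320,000 samples at 44.1 kHz come to slightly more than 7 seconds, as in the news release). It’s my own simplified illustration, with invented stand-in data, not GÉANT’s actual code.

```
# Minimal audification sketch (an illustration, not the GEANT/Voyager code).
# Each measurement becomes one audio sample; the sampling rate sets the duration.
import math
import struct
import wave

def audify(measurements, sample_rate=44100, filename="sonification.wav"):
    """Normalize a data series to [-1, 1] and write it out as a mono 16-bit WAV file."""
    lo, hi = min(measurements), max(measurements)
    span = (hi - lo) or 1.0
    samples = [2.0 * (m - lo) / span - 1.0 for m in measurements]
    with wave.open(filename, "w") as wav:
        wav.setnchannels(1)      # mono
        wav.setsampwidth(2)      # 16-bit samples
        wav.setframerate(sample_rate)
        wav.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))
    print(f"{len(samples)} points -> {len(samples) / sample_rate:.2f} s at {sample_rate} Hz")

if __name__ == "__main__":
    # Invented stand-in for hourly measurements: a slow drift plus a faster wobble.
    data = [100 + 20 * math.sin(i / 500.0) + 5 * math.sin(i / 7.0) for i in range(320000)]
    audify(data)  # roughly 7.3 seconds of audio at 44.1 kHz
```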

I think this detail from the news release gives one a different perspective on the accomplishment,

Launched in 1977, both Voyager 1 and Voyager 2 are now decommissioned but still recording and sending live data to Earth. They continue to traverse different parts of the universe, billions of kilometres apart. Voyager 1 left our solar system last year.

The research is more than an amusing way to pass the time (from the news release),

While this project was created as a fun, accessible way to demonstrate the benefit of research and education networks to society, data sonification – representing data by means of sound signals – is increasingly used to accelerate scientific discovery; from epilepsy research to deep space discovery.

I was curious to learn more about how data represented by sound signals is being used to accelerate scientific discovery and sent that question and another to Dr. Vicinanza via Tamsin Henderson of DANTE and received these answers,

(1) How does “representing data by means of sound signals” increasingly “accelerate scientific discovery; from epilepsy research to deep space discovery”? In a practical sense, how does one do this research? For example, do you sit down and listen to a file and intuit different relationships for the data?

Vision and visual representation are intrinsically limited to three dimensions. We all know how amazing 3D cinema is, but in terms of representation of complex information, this is as far as it gets. There is no 4D or 5D. We live in three dimensions.

Sound, on the other hand, does not have any limitation of this kind. We can continue overlapping sound layers virtually without limits and still retain the capability of recognising and understanding them. Think of an orchestra or a pop band: even if the musicians are playing all together, we can actually follow the single instrument lines (bass, drum, lead guitar, voice, …). Sound is then particularly precious when dealing with multi-dimensional data, since audification techniques can carry many variables at once.

In technical terms, auditory perception of complex, structured information could have several advantages in temporal, amplitude, and frequency resolution when compared to visual representations and often opens up possibilities as an alternative or complement to visualisation techniques. Those advantages include the capability of the human ear to detect patterns (detecting regularities), recognise timbres and follow different strands at the same time (i.e. the capability of following different instrument lines). This would offer, in a natural way, the opportunity of rendering different, interdependent variables onto sounds in such a way that a listener could gain relevant insight into the represented information or data.

In particular in the medical context, there have been several investigations using data sonification as a support tool for classification and diagnosis, from working on sonification of medical images to converting EEG to tones, including real-time screening and feedback on EEG signals for epilepsy.

The idea is to use sound to aggregate many “information layers”, many more than any graph or picture can represent, and to support the physician by giving a more comprehensive representation of the situation.
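To make the ‘information layers’ idea a little more concrete, here is a small parameter-mapping sketch of my own (not Dr. Vicinanza’s method): three invented variables per record are rendered onto different sound attributes (frequency, loudness and stereo position) so a listener could follow them as separate strands, and frequencies are kept within the comfortable band he mentions below.

```
# Hypothetical parameter-mapping sonification: several variables per record are
# rendered onto different sound attributes so they can be followed at once.
# (An illustration of the general idea, not Dr. Vicinanza's actual method.)

def normalize(value, lo, hi):
    """Scale a value into the 0-1 range, clamped."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def sonify_record(temperature, pressure, wind_speed):
    """Map three invented variables to frequency (Hz), amplitude and stereo pan."""
    freq = 1200 + normalize(temperature, -20, 40) * (3800 - 1200)  # stay in 1200-3800 Hz
    amp = 0.2 + normalize(pressure, 950, 1050) * 0.8               # 0.2-1.0
    pan = normalize(wind_speed, 0, 30) * 2 - 1                     # -1 (left) to +1 (right)
    return {"frequency_hz": round(freq), "amplitude": round(amp, 2), "pan": round(pan, 2)}

if __name__ == "__main__":
    for record in [(-5, 1010, 3), (12, 985, 22), (31, 1042, 9)]:
        print(sonify_record(*record))
```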

(2) I understand that as you age certain sounds disappear from your hearing, e.g., people over 25 years of age are not able to hear above 15 kHz. (Note: There seems to be some debate as to when these sounds disappear, after 30, after 20, etc.) Wouldn’t this pose an age restriction on the people who could access the research or have I misunderstood what you’re doing?

No, there is actually no appreciable reduction in the advantages of sonification with ageing. The only precaution is not to use frequencies that are too high (above 15 kHz) in the sonification, and this is something that can be avoided without limiting the benefits of audification.

It is always good practice not to use excessively high frequencies, since they are not always perceived well or uniformly by everyone.

Our hearing works at its best in the low-kilohertz region (1,200 Hz-3,800 Hz).

Thank you Dr. Vicinanza and Tamsin Henderson for this insight into representing data in multiple dimensions using sound and its application in research. And, thank you, too, for sharing a beautiful piece of music.

For the curious, I found some additional information about Dr. Vicinanza and his ‘sound’ work on his Nature Network profile page,

I am a composer, network engineer and researcher. I received my MSc and PhD degrees in Physics and studied piano, percussion and composition.

I worked as a professor of Sound Synthesis, Acoustics and Computer Music (Algorithmic Composition) at the Conservatory of Music of Salerno (Italy).

I currently work as a network engineer in DANTE (www.dante.net) and chair the ASTRA project (www.astraproject.org) for the reconstruction of musical instruments by means of computer models on GÉANT and EUMEDCONNECT.

I am also the co-founder and the technical coordinator of the Lost Sound Orchestra project (www.lostsoundsorchestra.org).

Interests

As a composer and researcher I have always been fascinated by the richness of the information coming from Nature. I worked on introducing the sonification of seismic signals (in particular those coming from active volcanoes) as a scientific tool, working with geophysicists and volcanologists.

I also study applications of grid technologies for music and visual arts, and as a composer I have taken part in several concerts, digital arts performances, festivals and webcasts.

My other interests include (aside from music) Argentine Tango and watercolors.

Projects

ASTRA (Ancient instruments Sound/Timbre Reconstruction Application)
www.astraproject.org

The ASTRA project is a multidisciplinary project aiming at reconstructing the sound or timbre of ancient instruments (which no longer exist) using archaeological data such as fragments from excavations, written descriptions and pictures.

The technique used is physical modeling synthesis, a complex digital audio rendering technique which models the time-domain physics of the instrument.

In other words, the basic idea is to recreate a model of the musical instrument and produce the sound by simulating its behavior as a mechanical system. The application would produce one or more sounds corresponding to different configurations of the instrument (i.e. the different notes).
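For a flavour of what physical modeling synthesis involves, here is a sketch of the Karplus-Strong plucked-string algorithm, one of the simplest physical models: a burst of noise circulates through a delay line and is averaged on each pass, mimicking a vibrating string gradually losing energy. It’s offered purely as an illustration of the general technique; ASTRA’s instrument models are far more elaborate.

```
# Karplus-Strong plucked string: one of the simplest physical models.
# A noise burst circulates through a delay line and is low-pass averaged,
# which mimics a vibrating string gradually losing high-frequency energy.
# (An illustration of the general technique only, not an ASTRA model.)
import random

def pluck(frequency=440.0, sample_rate=44100, duration=1.0, decay=0.996):
    period = int(sample_rate / frequency)  # the delay-line length sets the pitch
    ring = [random.uniform(-1.0, 1.0) for _ in range(period)]  # the "pluck" excitation
    out = []
    for _ in range(int(sample_rate * duration)):
        first = ring.pop(0)
        # Average adjacent samples and damp slightly: the string "rings down".
        ring.append(decay * 0.5 * (first + ring[0]))
        out.append(first)
    return out

if __name__ == "__main__":
    samples = pluck(frequency=220.0, duration=0.5)
    print(f"Generated {len(samples)} samples; first few: {[round(s, 3) for s in samples[:5]]}")
```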

Lost Sounds Orchestra
www.lostsoundsorchestra.org

The Lost Sounds Orchestra is the ASTRA project orchestra. It is a unique orchestra made up of reconstructed ancient instruments coming from the ASTRA research activities. It is the first ensemble in the world composed only of reconstructed instruments of the past. Listening to it is like jumping into the past, into a sound world completely new to our ears.

Since I haven’t had occasion to mention either GÉANT or DANTE previously, here’s more about those organizations and some acknowledgements from the news release,

About GÉANT

GÉANT is the pan-European research and education network that interconnects Europe’s National Research and Education Networks (NRENs). Together we connect over 50 million users at 10,000 institutions across Europe, supporting research in areas such as energy, the environment, space and medicine.

Operating at speeds of up to 500Gbps and reaching over 100 national networks worldwide, GÉANT remains the largest and most advanced research and education network in the world.

Co-funded by the European Commission under the EU’s 7th Research and Development Framework Programme, GÉANT is a flagship e-Infrastructure key to achieving the European Research Area – a seamless and open European space for online research – and assuring world-leading connectivity between Europe and the rest of the world in support of global research collaborations.

The network and associated services comprise the GÉANT (GN3plus) project, a collaborative effort comprising 41 project partners: 38 European NRENs, DANTE, TERENA and NORDUnet (representing the 5 Nordic countries). GÉANT is operated by DANTE on behalf of Europe’s NRENs.

About DANTE

DANTE (Delivery of Advanced Network Technology to Europe) is a non-profit organisation established in 1993 that plans, builds and operates large scale, advanced networks for research and education. On behalf of Europe’s National Research and Education Networks (NRENs), DANTE has built and operates GÉANT, a flagship e-Infrastructure key to achieving the European Research Area.

Working in cooperation with the European Commission and in close partnership with Europe’s NRENs and international networking partners, DANTE remains fundamental to the success of global research collaboration.

DANTE manages research and education (R&E) networking projects serving Europe (GÉANT), the Mediterranean (EUMEDCONNECT), Sub-Saharan Africa (AfricaConnect), Central Asia (CAREN) regions and coordinates Europe-China collaboration (ORIENTplus). DANTE also supports R&E networking organisations in Latin America (RedCLARA), Caribbean (CKLN) and Asia-Pacific (TEIN*CC). For more information, visit www.dante.net

Acknowledgements
NASA National Space Science Data Center and the Johns Hopkins University Voyager LECP experiment.
Sonification credits
Mariapaola Sorrentino and Giuseppe La Rocca.

I hope one of these days I’ll have a chance to ask a data visualization expert whether they think it’s possible to represent multiple dimensions visually and whether or not some types of data are better represented by sound.

* ‘described’ replaced by ‘discussed’ to avoid repetition, Feb. 10, 2014. (Sometimes I’m miffed by my own writing.)

Using music to align your nanofibers

It’s always nice to feature a ‘nano and music’ research story, my Nov. 6, 2013 posting being, until now, the most recent. A Jan. 8, 2014 news item on Nanowerk describes Japanese researchers’ efforts with nanofibers (Note: A link has been removed),

Humans create and perform music for a variety of purposes, such as aesthetic pleasure, healing, religion, and ceremony. Accordingly, a scientific question arises: Can molecules or molecular assemblies interact physically with the sound vibrations of music? In the journal ChemPlusChem (“Acoustic Alignment of a Supramolecular Nanofiber in Harmony with the Sound of Music”), Japanese researchers have now revealed their physical interaction. When classical music was playing, a designed supramolecular nanofiber in a solution dynamically aligned in harmony with the sound of music.

Sound is vibration of matter, having a frequency, in which certain physical interactions occur between the acoustically vibrating media and solute molecules or molecular assemblies. Music is an art form consisting of the sound and silence expressed through time, and characterized by rhythm, harmony, and melody. The question of whether music can cause any kind of molecular or macromolecular event is controversial, and the physical interaction between the molecules and the sound of music has never been reported.

The Jan. 8, 2014 Chemistry Views article, which originated the news item, provides more detail,

Scientists working at Kobe University and Kobe City College of Technology, Japan, have now developed a supramolecular nanofiber, composed of an anthracene derivative, which can dynamically align by sensing acoustic streaming flows generated by the sound of music. Time-course linear dichroism (LD) spectroscopy could visualize spectroscopically the dynamic acoustic alignments of the nanofiber in the solution. The nanofiber aligns upon exposure to the audible sound wave, with frequencies up to 1000 Hz, with quick responses to the sound and silence, and to amplitude and frequency changes of the sound wave. The sheared flows generated around the glass-surface boundary layer and the crossing area of the downward and upward flows allow shear-induced alignments of the nanofiber.

Music is composed of multiple complex sounds and silences, which characteristically change over the course of its playtime. The team, led by A. Tsuda, used “Symphony No. 5 in C minor, First movement: Allegro con brio”, written by Beethoven, and “Symphony No. 40 in G minor, K. 550, First movement”, written by Mozart, in the experiments. When the classical music was playing, the sample solution gave the characteristic LD profile of the music, where the nanofiber dynamically aligned in harmony with the sound of music.

Here’s an image illustrating the scientists’ work with music,

[downloaded from http://www.chemistryviews.org/details/ezine/5712621/Musical_Molecules.html]

Here’s a link to and a citation for the paper,

Acoustic Alignment of a Supramolecular Nanofiber in Harmony with the Sound of Music by Ryosuke Miura, Yasunari Ando, Yasuhisa Hotta, Yoshiki Nagatani, Akihiko Tsuda, ChemPlusChem 2014. DOI: 10.1002/cplu.201300400

This is an open access paper as of Jan. 8, 2014. If the above link does not work, try this.

Pop and rock music lead to better solar cells

A Nov. 6, 2013 news item on Nanowerk reveals research by scientists at Imperial College London (UK) and Queen Mary University of London (UK),

Playing pop and rock music improves the performance of solar cells, according to new research from scientists at Queen Mary University of London and Imperial College London.

The high frequencies and pitch found in pop and rock music cause vibrations that enhance energy generation in solar cells containing a cluster of ‘nanorods’, leading to a 40 per cent increase in the efficiency of the solar cells.

The study has implications for improving energy generation from sunlight, particularly for the development of new, lower cost, printed solar cells.

The Nov. 6, 2013 Imperial College London (ICL) news release, which originated the news item, gives more details about the research,

The researchers grew billions of tiny rods (nanorods) made from zinc oxide, then covered them with an active polymer to form a device that converts sunlight into electricity.

Using the special properties of the zinc oxide material, the team was able to show that sound levels as low as 75 decibels (equivalent to a typical roadside noise or a printer in an office) could significantly improve the solar cell performance.

“After investigating systems for converting vibrations into electricity this is a really exciting development that shows a similar set of physical properties can also enhance the performance of a photovoltaic,” said Dr Steve Dunn, Reader in Nanoscale Materials from Queen Mary’s School of Engineering and Materials Science.

Scientists had previously shown that applying pressure or strain to zinc oxide materials could result in voltage outputs, known as the piezoelectric effect. However, the effect of these piezoelectric voltages on solar cell efficiency had not received significant attention before.

“We thought the soundwaves, which produce random fluctuations, would cancel each other out and so didn’t expect to see any significant overall effect on the power output,” said James Durrant, Professor of Photochemistry at Imperial College London, who co-led the study.

“The key for us was not only that the random fluctuations from the sound didn’t cancel each other out, but also that some frequencies of sound seemed really to amplify the solar cell output – so that the increase in power was a remarkably big effect considering how little sound energy we put in.”

“We tried playing music instead of dull flat sounds, as this helped us explore the effect of different pitches. The biggest difference we found was when we played pop music rather than classical, which we now realise is because our acoustic solar cells respond best to the higher pitched sounds present in pop music,” he concluded.

The discovery could be used to power devices that are exposed to acoustic vibrations, such as air conditioning units or within cars and other vehicles.

This is not the first time that music has been shown to affect properties at the nanoscale. A March 12, 2008 article by Anna Salleh for the Australian Broadcasting Corporation featured a researcher who tested nanowire growth by playing music (Note: Links have been removed),

Silicon nanowires grow more densely when blasted with Deep Purple than any other music tested, says an Australian researcher.

But the exact potential of music in growing nanowires remains a little hazy.

David Parlevliet, a PhD student at Murdoch University in Perth, presented his findings at a recent Australian Research Council Nanotechnology Network symposium in Melbourne.

Parlevliet is testing nanowires for their ability to absorb sunlight in the hope of developing solar cells from them.

I’ve taken a look at the references cited by the researchers in their paper and there is nothing from Parlevliet listed, so this seems to be one of those cases where more than one scientist is thinking along similar lines, i.e., that sound might affect nanoscale structures in such a way as to improve solar cell efficiency.

Here’s a link to and a citation for the ICL/Queen Mary University of London research paper,

Acoustic Enhancement of Polymer/ZnO Nanorod Photovoltaic Device Performance by Safa Shoaee, Joe Briscoe, James R. Durrant, Steve Dunn. Article first published online: 6 NOV 2013 DOI: 10.1002/adma.201303304
© 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Hip hop infused science rap theatrical experience travels from off Broadway (New York) to Vancouver (Canada)

Baba Brinkman is now back in town to perform the latest version of his Rap Guide to Evolution at the Vancouver East Cultural Centre (The Cultch) from Oct. 29 to November 10, 2013. There’s a special deal running now (Oct. 14, 2013 to midnight Oct. 18, 2013) where The Cultch is offering a 50% discount off tickets for the first five days of performances,

OFFER BEGINS: October 11 at 10 am
OFFER ENDS: October 18 at midnight

Canadian actor and rapper Baba Brinkman returns to his home town of Vancouver, BC to perform his unabridged production of The Rap Guide to Evolution from October 29 to November 10! A smash hit at the Edinburgh Fringe, in New York, and around the world, The Rap Guide is at once provocative, hilarious, intelligent, and scientifically accurate. Get a sneak-peek of the production and find out more here.

The Rap Guide to Evolution has earned Baba accolades from The New York Times and landed him spots on Rachel Maddow’s MSNBC show and TEDxEast. Don’t miss out on the production everyone is talking about!

SAVE 50% with promo code EVO50 online or by phone through The Cultch Box Office at 604-251-1363. Valid for performances Oct 29-31 and Nov 1, 2. Valid for A+B seating only. Act now – offer expires Oct 18!

PLUS! Join us for Halloween fun! Come dressed up on Oct 29, 30, 31 and you’ll receive candy and the chance to win prizes for best costume!!

When purchasing online simply enter the code into the “Discount Coupon” field at the checkout page. Cannot be combined with any other offer. No cash value. Offer may not be applied on past purchases.

Tickets are priced at $17.14, $29.52 and $38.10. As best I can tell, the prices don’t include tax and are the same for both evening and matinee shows; as for the cheap seats, I can’t tell if they are available for all performances (I found the online ticketing function a little confusing).

The show does ‘evolve’ so I’m not positive (although I’m pretty sure) this particular piece will be performed. Here it is anyway, just because I find it provocative and because it gives you some idea of his approach and music,

I’ve mentioned Baba and his Rap Guide to Evolution several times, including in this Feb. 17, 2011 posting, which featured an interview with him prior to a Vancouver performance of his Rap Guide; I then offered a commentary on that performance in a Feb. 21, 2011 posting. I hope to see what he’s done with the Rap Guide since adding a DJ and redeveloping the piece for theatrical purposes, although I have to admit to a certain fondness for the ambience of that 2011 performance at Vancouver’s Railway Club.

ETA Oct. 14, 2013 4 pm PDT: You can find Baba Brinkman’s website here.

Cyborgian dance at McGill University (Canada)

As noted in the Council of Canadian Academies report (State of Science and Technology in Canada, 2012), which was mentioned in my Dec. 28, 2012 posting, the field of visual and performing arts is an area of strength and that is due to one province, Québec. Mark Wilson’s Aug. 13, 2013 article for Fast Company and Paul Ridden’s Aug. 7, 2013 article for gizmag.com about McGill University’s Instrumented Bodies: Digital Prostheses for Music and Dance Performance seem to confirm Québec’s leadership.

From Wilson’s Aug. 13, 2013 article (Note: A link has been removed),

One is a glowing exoskeleton spine, while another looks like a pair of cyborg butterfly wings. But these aren’t just costumes; they’re wearable, functional art.

In fact, the team of researchers from the IDMIL (Input Devices and Music Interaction Laboratory [at McGill University]) who are responsible for the designs go so far as to call their creations “prosthetic instruments.”

Ridden’s Aug. 7, 2013 article offers more about the project’s history and technology,

For the last three years, a small research team at McGill University has been working with a choreographer, a composer, dancers and musicians on a project named Instrumented Bodies. Three groups of sensor-packed, internally-lit digital music controllers that attach to a dancer’s costume have been developed, each capable of wirelessly triggering synthesized music as the performer moves around the stage. Sounds are produced by tapping or stroking transparent Ribs or Visors, or by twisting, turning or moving Spines. Though work on the project continues, the instruments have already been used in a performance piece called Les Gestes which toured Canada and Europe during March and April.

Both articles are interesting but Wilson’s is the fast read and Ridden’s gives you information you can’t find elsewhere. Here’s more from the Instrumented Bodies: Digital Prostheses for Music and Dance Performance project webpage,

These instruments are the culmination of a three-year long project in which the designers worked closely with dancers, musicians, composers and a choreographer. The goal of the project was to develop instruments that are visually striking, utilize advanced sensing technologies, and are rugged enough for extensive use in performance.

The complex, transparent shapes are lit from within, and include articulated spines, curved visors and ribcages. Unlike most computer music control interfaces, they function both as hand-held, manipulable controllers and as wearable, movement-tracking extensions to the body. Further, since the performers can smoothly attach and detach the objects, these new instruments deliberately blur the line between the performers’ bodies and the instrument being played.

The prosthetic instruments were designed and developed by Ph.D. researchers Joseph Malloch and Ian Hattwick [and Marlon Schumacher] under the supervision of IDMIL director Marcelo Wanderley. Starting with sketches and rough foam prototypes for exploring shape and movement, they progressed through many iterations of the design before arriving at the current versions. The researchers made heavy use of digital fabrication technologies such as laser-cutters and 3D printers, which they accessed through the McGill University School of Architecture and the Centre for Interdisciplinary Research in Music Media and Technology, also hosted by McGill.

Each of the nearly thirty working instruments produced for the project has embedded sensors, power supplies and wireless data transceivers, allowing a performer to control the parameters of music synthesis and processing in real time through touch, movement, and orientation. The signals produced by the instruments are routed through an open-source peer-to-peer software system the IDMIL team has developed for designing the connections between sensor signals and sound synthesis parameters.
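The webpage doesn’t spell out what those connections look like, but the general pattern, scaling each incoming sensor signal into the range a synthesis parameter expects, can be sketched as follows. The class, sensor and parameter names here are my own invention for illustration; they are not part of the IDMIL software.

```
# Hypothetical sketch of sensor-signal-to-synthesis-parameter mapping,
# the general pattern described above (names invented; not the IDMIL system).

class Mapping:
    """Scale one sensor signal's range onto one synthesis parameter's range."""
    def __init__(self, sensor, src_min, src_max, parameter, dst_min, dst_max):
        self.sensor, self.parameter = sensor, parameter
        self.src_min, self.src_max = src_min, src_max
        self.dst_min, self.dst_max = dst_min, dst_max

    def apply(self, value):
        t = (value - self.src_min) / (self.src_max - self.src_min)
        t = max(0.0, min(1.0, t))  # clamp out-of-range readings
        return self.parameter, self.dst_min + t * (self.dst_max - self.dst_min)

if __name__ == "__main__":
    mappings = [
        Mapping("spine_twist_deg", 0, 180, "filter_cutoff_hz", 200, 8000),
        Mapping("rib_tap_pressure", 0, 1023, "note_velocity", 0, 127),
    ]
    sensor_frame = {"spine_twist_deg": 90, "rib_tap_pressure": 700}
    for m in mappings:
        name, value = m.apply(sensor_frame[m.sensor])
        print(f"{m.sensor} -> {name} = {value:.1f}")
```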

For those who prefer to listen and watch, the researchers have created a video documentary,

I usually don’t include videos that run past 5 minutes, but I’ve made an exception for this almost 15-minute documentary.

I was trying to find mention of a dancer and/or choreographer associated with this project and found two of its early-stage participants, choreographer Isabelle Van Grimde and composer Sean Ferguson, named in Ridden’s article.

WHALE of a concert on the edge of Hudson Bay (northern Canada) and sounds of icebergs from Oregon State University

Both charming and confusing (to me), the WHALE project features two artists (or is it musicians?) singing to and with beluga whales using a homemade underwater sound system while they all float on or in Hudson Bay. There’s a July 10, 2013 news item about the project on the CBC (Canadian Broadcasting Corporation) news website,

What began as an interest in aquatic culture for Laura Magnusson and Kaoru Ryan Klatt has turned into a multi-year experimental project that brings art to the marine mammals.

Since 2011, Magnusson and Klatt have been taking a boat onto the Churchill River, which flows into Hudson Bay, with a home-made underwater sound system.

….

Last week, the pair began a 75-day expedition that involves travelling aboard a special “sculptural sea vessel” to “build a sustained but non-invasive presence to foster bonds between humans and whales,” according to the project’s website.

Ten other musicians and interdisciplinary artists are joining Klatt and Magnusson to perform new works they’ve created specifically for the whales.

The latest expedition will be the focus of Becoming Beluga, a feature film that Klatt is directing.

Magnusson and Klatt are also testing a high-tech “bionic whale suit” that would enable the wearer to swim and communicate like a beluga whale.

Klatt has produced a number of WHALE videos, including this one (Note: This is not a slick production, nor were any of the others I viewed on YouTube),

In addition to not being slick, there’s a quirky quality to this project video that I find charming and interesting.

My curiosity aroused, I also visited Magnusson’s and Klatt’s WHALE website and found this project description,

WHALE is an interdisciplinary art group comprised of Winnipeg-based artists Kaoru Ryan Klatt and Laura Magnusson. Their vision is to expand art and culture beyond human boundaries to non-human beings. Since 2011, they have been traveling to the northern edge of Manitoba, Canada to forge connections with thousands of beluga whales. From a canoe on the Churchill River, they have collaborated with these whales through sound, movement, and performative action. Now, aboard the SSV Cetus – a specially crafted sculptural sea vessel – they will embark on a 75-day art expedition throughout the Churchill River estuary, working to build a sustained but non-invasive presence to foster bonds between humans and whales. This undertaking – Becoming Beluga – is the culmination of a three-year integrated arts project with the belugas of this region, taking place between July 2 and September 14, 2013.

While the word ‘artist’ suggests visual arts rather than musical arts, what I find a little more confounding is that this is not being described as an art/science or art/technology project, as these artists are clearly developing technology with their underwater sound system, sculptural sea vessel, and bionic whale suit. In any event, I wish them good luck with WHALE and their Becoming Beluga film.

In a somewhat related matter and for those interested in soundscapes and the ocean (in Antarctica), there is some research from Oregon State University which claims that melting icebergs make a huge din. From a July 11, 2013 news item on phys.org,

There is growing concern about how much noise humans generate in marine environments through shipping, oil exploration and other developments, but a new study has found that naturally occurring phenomena could potentially affect some ocean dwellers.

Nowhere is this concern greater than in the polar regions, where the effects of global warming often first manifest themselves. The breakup of ice sheets and the calving and grounding of icebergs can create enormous sound energy, scientists say. Now a new study has found that the mere drifting of an iceberg from near Antarctica to warmer ocean waters produces startling levels of noise.

The Oregon State University July 10, 2013 news release, which originated the news item, provides more detail (Note: A link has been removed),

A team led by Oregon State University (OSU) researchers used an array of hydrophones to track the sound produced by an iceberg through its life cycle, from its origin in the Weddell Sea to its eventual demise in the open ocean. The goal of the project was to measure baseline levels of this kind of naturally occurring sound in the ocean, so it can be compared to anthropogenic noises.

“During one hour-long period, we documented that the sound energy released by the iceberg disintegrating was equivalent to the sound that would be created by a few hundred supertankers over the same period,” said Robert Dziak, a marine geologist at OSU’s Hatfield Marine Science Center in Newport, Ore., and lead author on the study. [emphasis mine]

“This wasn’t from the iceberg scraping the bottom,” he added. “It was from its rapid disintegration as the berg melted and broke apart. We call the sounds ‘icequakes’ because the process and ensuing sounds are much like those produced by earthquakes.”

I encourage anyone who’s interested to read the entire news release (apparently the researchers were getting images of their iceberg from the International Space Station) and/or the team’s published research paper,

Robert P. Dziak, Matthew J. Fowler, Haruyoshi Matsumoto, DelWayne R. Bohnenstiehl, Minkyu Park, Kyle Warren, and Won Sang Lee. 2013. Life and death sounds of Iceberg A53a. Oceanography 26(2), http://dx.doi.org/10.5670/oceanog.2013.20.

Baba Brinkman’s hip-hop theatre cycle features Chaucer & a live onstage science peer-review in New York City

I’m happy to see that Baba Brinkman’s IndieGogo crowdfunding campaign was successful (my Jan. 25, 2013 posting), so he can introduce the first hip-hop theatre cycle (Evolutionary Tales) to the world. He will be performing three of his shows, Ingenious Nature, Rap Guide to Evolution, and Canterbury Tales Remixed, in repertory off Broadway in New York City on weekends (Fri. – Sun.) starting Friday, May 31, 2013 and then throughout most of the month of June. Here’s more from Baba’s May 29, 2013 announcement,

Greetings from the Player’s Theatre! We’ve spent the past two days setting up not one but three shows here, preparing for the world’s first-ever hip-hop theatre cycle: Evolutionary Tales.

We launch on Friday with a performance of Ingenious Nature, which was recently nominated for an Off-Broadway Alliance Award in the category of “Best Unique Theatrical Experience” (which we didn’t win, but it was a nice acknowledgement). Use the code “Genious” to get $29 advance tickets.

Then on Saturday we’re taking the “peer-reviewed rap” theme to the next level, with a World Science Festival presentation of the Rap Guide to Evolution featuring Dr. Helen Fisher, Dr. Stuart Firestein, and Dr. Heather Berlin providing a live post-show peer-review talkback. Use the code “Darwin” for discount tickets.

Finally, on Sunday the Canterbury Tales Remixed will have its first off-Broadway performance since early 2012, tracing the evolution of storytelling from Gilgamesh to Slick Rick via Chaucer’s masterpiece. Use the code “Tales” for discount tickets.

We run until June 23rd, Fri/Sat/Sun. …

The Evolutionary Tales website offers a bit more information about each show,

 INGENIOUS NATURE
Evolutionary psychology, sex differences in behavior, and the modern dating scene. Can science serve as a road map to romance? It turns out, ovulation studies make for awkward first-date conversation.
Fridays 8pm, May 31 – June 21
RAP GUIDE TO EVOLUTION
Rap Guide to Evolution interprets Charles Darwin’s theory of evolution for the hip-hop age, exploring the links between bling and peacocks’ tails, gangster rap music and elephant seals, and the species-wide appeal of Afro-centricity.
Saturdays 8pm, June 1 – 22
CANTERBURY TALES REMIXED
Canterbury Tales Remixed brings you a collection of the world’s best-loved stories, supplementing Chaucer’s masterful character-driven Tales with artful re-tellings of the epics of Beowulf and Gilgamesh. The evolution of storytelling!
Sundays 5pm, June 2 – 23
115 Macdougal Street, NYC
Ticket Info (866) 811-4111

This link will take you to the calendar where you select the show(s) you’d like to attend and click through to purchase one or more tickets.

I’m fascinated by the idea of live science peer-review onstage. I imagine that too is a world first, along with the hip-hop theatre cycle. I wish Baba and all his collaborators the best of luck.

Proteins which cause Alzheimer’s disease can be used to grow functionalized nanowires

This is the first time I’ve ever heard of anything good resulting from Alzheimer’s Disease (even if it’s tangential). From the May 24, 2013 news item on ScienceDaily,

Prof. Sakaguchi and his team in the Graduate School of Science, Hokkaido University, jointly with MANA PI Prof. Kohei Uosaki and a research group from the University of California, Santa Barbara, have successfully developed a new technique for efficiently creating functionalized nanowires for the first time ever.

The group focused on the natural propensity of amyloid peptides, molecules which are thought to cause Alzheimer’s disease, to self-assemble into nanowires in an aqueous solution and controlled this molecular property to achieve their feat.

The May 23, 2013 National Institute for Materials Science press release, which originated the news item, offers insight into why functionalized nanowires are devoutly desired,

Functionalized nanowires are extremely important in the construction of nanodevices because they hold promise for use as integrated circuits and for the generation of novel properties, such as conductivity, catalysts and optical properties which are derived from their fine structure. However, some have remarked on the technical and financial limitations of the microfabrication technology required to create these structures. Meanwhile, molecular self-organization and functionalization have attracted attention in the field of next-generation nanotechnology development. Amyloid peptides, which are thought to cause Alzheimer’s disease, possess the ability to self-assemble into highly stable nanowires in an aqueous solution. Focusing on this, the research team became the first to successfully develop a new method for efficiently creating a multifunctional nanowire by controlling this molecular property.

The team designed a new peptide called SCAP, or structure-controllable amyloid peptide, terminated with a three-amino-acid-residue cap. By combining multiple SCAPs with different caps, the team found that self-organization is highly controlled at the molecular level. Using this new control method, the team formed a molecular nanowire with the largest aspect ratio ever achieved. In addition, they made modifications using various functional molecules including metals, semiconductors and biomolecules that successfully produced an extremely high quality functionalized nanowire. Going forward, this method is expected to contribute significantly to the development of new nanodevices through its application to a wide range of functional nanomaterials with self-organizing properties.

You can find the published paper here,

Formation of Functionalized Nanowires by Control of Self-Assembly Using Multiple Modified Amyloid Peptides by Hiroki Sakai, Ken Watanabe, Yuya Asanomi, Yumiko Kobayashi, Yoshiro Chuman, Lihong Shi, Takuya Masuda, Thomas Wyttenbach, Michael T. Bowers, Kohei Uosaki, & Kazuyasu Sakaguchi. Advanced Functional Materials. doi: 10.1002/adfm.201300577 Article first published online: 23 APR 2013

The study is behind a paywall.

I have written about nanowires before and, in keeping with today’s theme of peculiar relationships (Alzheimer’s disease), prior to this, the most unusual nanowire item I’ve come across had to do with growing them to the sounds of music. From the Nanotech Mysteries (wiki), Scientists get musical page (Note: Footnotes have been removed),

After testing Deep Purple’s ‘Smoke on the Water‘, Chopin’s ‘Nocturne Opus 9 no. 1‘, Josh Abraham’s ‘Addicted to Bass‘, Rammstein’s ‘Das Model‘, and Abba’s ‘Dancing Queen‘, David Parlevliet found that music can be used to grow nanowires but they will be kinky.

Scientists want to grow straight nanowires and one of the popular methods is to “[blast] a voltage through silane gas to produce a plasma that pulses on and off at 1000 times a second. Over time the process enables molecules from the gas to deposit on a glass slide in the form of a mesh of crystalline silicon nanowires.”

Parlevliet, a PhD student at Murdoch University in Perth, Australia, plugged in a music player instead of a pulse generator usually used for this purpose and observed the results. While there are no current applications for kinky nanowires, the Deep Purple music created the densest mesh. Rammstein’s music grew nanowires the least successfully. In his presentation to the Australian Research Council Nanotechnology Network Symposium in March 2008, Parlevliet concluded that music could become more important for growing nanowires if applications can be found for the kinky ones.