Tag Archives: audio

Fish DJ makes discoveries about fish hearing

A March 2, 2021 University of Queensland press release (also on EurekAlert) announces research into how fish brains develop and how baby fish hear,

A DJ-turned-researcher at The University of Queensland has used her knowledge of cool beats to understand brain networks and hearing in baby fish.

The ‘Fish DJ’ used her acoustic experience to design a speaker system for zebrafish larvae and discovered that their hearing is considerably better than originally thought.

This video clip features zebrafish larvae listening to music, MC Hammer’s ‘U Can’t Touch This’ (1990),

Here’s the rest of the March 2, 2021 University of Queensland press release,

PhD candidate Rebecca Poulsen from the Queensland Brain Institute said that combining this new speaker system with whole-brain imaging showed how larvae can hear a range of different sounds they would encounter in the wild.

“For many years my music career has been in music production and DJ-ing — I’ve found underwater acoustics to be a lot more complicated than air frequencies,” Ms Poulsen said.

“It is very rewarding to be using the acoustic skills I learnt in my undergraduate degree, and in my music career, to overcome the challenge of delivering sounds to our zebrafish in the lab.

“I designed the speaker to adhere to the chamber the larvae are in, so all the sound I play is accurately received by the larvae, with no loss through the air.”

Ms Poulsen said people did not often think about underwater hearing, but it was crucial for fish survival – to escape predators, find food and communicate with each other.

Ms Poulsen worked with Associate Professor Ethan Scott, who specialises in the neural circuits and behaviour of sensory processing, to study the zebrafish and find out how their neurons work together to process sounds.

The tiny size of the zebrafish larvae allows researchers to study their entire brain under a microscope and see the activity of each brain cell individually.

“Using this new speaker system combined with whole brain imaging, we can see which brain cells and regions are active when the fish hear different types of sounds,” Dr Scott said.

The researchers are testing different sounds to see if the fish can discriminate between single frequencies, white noise, short sharp sounds and sound with a gradual crescendo of volume.

These sounds include components of what a fish would hear in the wild, like running water, other fish swimming past, objects hitting the surface of the water and predators approaching.

“Conventional thinking is that fish larvae have rudimentary hearing, and only hear low-frequency sounds, but we have shown they can hear relatively high-frequency sounds and that they respond to several specific properties of diverse sounds,” Dr Scott said.

“This raises a host of questions about how their brains interpret these sounds and how hearing contributes to their behaviour.”

Ms Poulsen has played many types of sounds to the larvae to see which parts of their brains light up, but also some music – including MC Hammer’s “U Can’t Touch This”– that even MC Hammer himself enjoyed.

The March 3, 2021 story by Graham Readfearn, originally published by The Guardian (also found on MSN News), has more details about the work and the researcher,

As Australia’s first female dance music producer and DJ, Rebecca Poulsen – aka BeXta – is a pioneer, with scores of tracks, mixes and hundreds of gigs around the globe under her belt.

But between DJ gigs, the 46-year-old is now back at university studying neuroscience at Queensland Brain Institute at the University of Queensland in Brisbane.

And part of this involves gently securing baby zebrafish inside a chamber and then playing them sounds while scanning their brains with a laser and looking at what happens through a microscope.

The analysis for the study doesn’t look at how the fish larvae react during Hammer [MC Hammer] time, but how their brain cells react to simple single-frequency sounds.

“It told us their hearing range was broader than we thought it was before,” she says.

Poulsen also tried more complex sounds, like white noise and “frequency sweeps”, which she describes as “like the sound when Wile E Coyote falls off a cliff” in the Road Runner cartoons.
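
Poulsen’s ‘frequency sweep’ is what audio engineers call a chirp. For the curious, here’s a minimal Python sketch of how one is generated; the parameters (a one-second downward glide from 2 kHz to 200 Hz) are illustrative and not the actual stimuli used in the study,

```python
import math

def frequency_sweep(f_start, f_end, duration, sample_rate=44100):
    """Generate a linear frequency sweep (chirp) -- the 'Wile E. Coyote
    falls off a cliff' sound -- as a list of amplitude samples in [-1, 1].

    The phase is integrated so the instantaneous frequency glides
    smoothly from f_start to f_end over `duration` seconds.
    """
    n = int(duration * sample_rate)
    sweep_rate = (f_end - f_start) / duration  # Hz per second
    samples = []
    for i in range(n):
        t = i / sample_rate
        # instantaneous phase of a linear chirp: 2*pi*(f0*t + k*t^2/2)
        phase = 2 * math.pi * (f_start * t + 0.5 * sweep_rate * t * t)
        samples.append(math.sin(phase))
    return samples

# A one-second downward sweep from 2 kHz to 200 Hz
sweep = frequency_sweep(2000, 200, 1.0)
```

A descending sweep like this one mimics a falling pitch; swapping `f_start` and `f_end` gives the rising version.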

“When you look at the neurons that light up at each sound, they’re unique. The fish can tell the difference between complex and different sounds.”

This is, happily, where MC Hammer comes in.

Out of professional and scientific curiosity – and also presumably just because she could – Poulsen played music to the fish.

She composed her own piece of dance music and that did seem to light things up.

But what about U Can’t Touch This?

“You can see when the vocal goes ‘ohhh-oh’, specific neurons light up and you can see it pulses to the beat. To me it looks like neurons responding to different parts of the music.

“I do like the track. I was pretty little when it came out and I loved it. I didn’t have the harem pants, though, but I did used to do the dance.”

How do you stop the fish from swimming away while you play them sounds? And how do you get a speaker small enough to deliver different volumes and frequencies without startling the fish?

For the first problem, the baby zebrafish – just 3mm long – are contained in a jelly-like substance that lets them breathe “but stops them from swimming away and keeps them nice and still so we can image them”.

For the second problem, Poulsen and colleagues used a speaker just 1cm wide and stuck it to the glass of the 2cm-cubed chamber the fish was contained in.

Using fish larvae has its advantages. “They’re so tiny we can see their whole brain … we can see the whole brain live in real time.”

If you have the time, I recommend reading Readfearn’s March 3, 2021 story in its entirety.

Poulsen as Bexta has a Wikipedia entry and I gather from Readfearn’s story that she is still active professionally.

Here’s a link to and a citation for the published paper,

Broad frequency sensitivity and complex neural coding in the larval zebrafish auditory system by Rebecca E. Poulsen, Leandro A. Scholz, Lena Constantin, Itia Favre-Bulle, Gilles C. Vanwalleghem, Ethan K. Scott. Current Biology DOI: https://doi.org/10.1016/j.cub.2021.01.103 Published: March 02, 2021

This paper appears to be open access.

There is an earlier version of the paper on bioRxiv made available for open peer review. Oddly, I don’t see any comments but perhaps I need to log in.

Related research but not the same

I was surprised when a friend of mine in early January 2021 needed to be persuaded that noise in aquatic environments is a problem. If you should have any questions or doubts, perhaps this March 4, 2021 article by Amy Noise (that is her name) on the Research2Reality website can answer them,

Ever had builders working next door? Or a neighbour leaf blowing while you’re trying to make a phone call? Unwanted background noise isn’t just stressful, it also has tangible health impacts – for both humans and our marine cousins.

Sound travels faster and farther in water than in air. For marine creatures who rely heavily on sound, crowded ocean soundscapes could be more harmful than previously thought.

Marine animals use sound to navigate, communicate, find food and mates, spot predators, and socialize. But since the Industrial Revolution, humans have made the planet, and the oceans in particular, exponentially noisier.

From shipping and fishing, to mining and sonar, underwater anthropogenic noise is becoming louder and more prevalent. While parts of the ocean’s chorus are being drowned out, others are being permanently muted through hunting and habitat loss.

[An] international team, including University of Victoria biologist Francis Juanes, reviewed over 10,000 papers from the past 40 years. They found overwhelming evidence that anthropogenic noise is negatively impacting marine animals.

Getting back to Poulsen and Queensland, her focus is on brain development, not noise, although I imagine some of her work may be of use to researchers investigating anthropogenic noise and its impact on aquatic life.

Two online events: Wednesday, May 20, 2020 and Saturday, May 23, 2020

My reference point for date and time is almost always Pacific Time (PT). Depending on which time zone you live in, the day and date I’ve listed here may be incorrect. For anyone who has difficulty figuring out which day and time the event will take place where they live, a search for ‘time zone converter’ on one of the search engines should prove helpful.
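
For anyone who’d rather script the conversion, Python’s standard-library zoneinfo module (Python 3.9+) handles time zones directly; the event time used here is just an example,

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library as of Python 3.9

# When is 7:30 pm UK time on May 20, 2020 in Pacific Time?
uk_time = datetime(2020, 5, 20, 19, 30, tzinfo=ZoneInfo("Europe/London"))
pacific = uk_time.astimezone(ZoneInfo("America/Los_Angeles"))
print(pacific.strftime("%Y-%m-%d %H:%M %Z"))  # 2020-05-20 11:30 PDT
```

Note that zoneinfo accounts for daylight saving automatically: on that date London is on BST (UTC+1) and the US Pacific coast is on PDT (UTC-7).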

May 20, 2020 at 7:30 pm (UK time): Complicité’s The Encounter

I received this May 19, 2020 announcement from The Space via email,

Over 80,000 people have watched Complicité’s award-winning production of The Encounter online and now the recording has been made available again – for one week only – in this revival, supported by The Space. You can watch online via the website or YouTube channel [from 15 May until 22 May 2020].

🎧 Enjoy the binaural sound – Make sure you wear headphones in order to experience the show’s impressive binaural sound design – any headphones will work, but playing the audio out of computer speakers will not give the same effect.

Join in a live Q&A – 20 May [2020] – A live discussion event and public Q&A will take place on Wednesday 20 May at 7:30pm (11:30 am PT) with Simon McBurney and guests including filmmaker Takumã Kuikuro (via a link to the Xingu region of the Amazon). Register to join the discussion.

Here’s a little more about the video performance from The Space’s Complicité invites you to The Encounter webpage,

In The Encounter, Director-performer Simon McBurney brings Petru Popescu’s book Amazon Beaming to life on stage.

The show follows the journey of Loren McIntyre, a photographer who got lost in Brazil’s remote Javari Valley in 1969.

It uses live and recorded 3D sound, video projections and loop pedals to recreate the intense atmosphere of the rainforest.

In the first live-streamed production ever to use 3D sound, viewers got the chance to experience the atmosphere of one of the strangest and most beautiful places on Earth – all through their headphones.

Complicité is a UK-based touring theatre company known for its imaginative original productions and adaptations of classic books and plays, and its groundbreaking use of technology. The Encounter is directed and performed by Simon McBurney, co-director is Kirsty Housley.

The Encounter is a little over two hours long.

Saturday, May 23, 2020 from 12 pm – 1:30 pm ET: Pandemic Encounters ::: being [together] in the deep third space

This May 19, 2020 announcement was received via email from the ArtSci Salon, one of the participants in this ‘encounter’, Note: I have made some changes to the formatting,

LEONARDO/ISAST and The Third Space Network announce the first Global LASER: Pandemic Encounters ::: being [together] in the deep third space on Saturday, May 23, 12-1:30pm EDT. This online performance installation is a creation of pioneering telematic artist Paul Sermon in collaboration with Randall Packer, Gregory Kuhn and the Third Space Network. (Locate your time zone)

Pandemic Encounters explores the implications of the migratory transition to the virtual space we are all experiencing. Even when we return to the so-called normal, we will be changed: when social interaction, human engagement, and being together will have undergone a radical transformation. In this new work, Paul Sermon performs as a live chroma-figure in a deep third space audio-visual networked environment, encountering pandemic spaces & action-performers from around the world – artists, musicians, dancers, media practitioners & scientists  – a collective response to a global pandemic that has triggered an unfolding metamorphosis of the human condition.

action-performers: Annie Abrahams (France), Clarissa Ribeiro (Brazil), Roberta Buiani (Canada), Andrew Denton (New Zealand), Bhavani Esapathi (UK), Tania Fraga (Brazil), Satinder Gill (US), Birgitta Hosea (UK), Charles Lane (US), Ng Wen Lei (Singapore), Marilene Oliver (Canada), Serena Pang (Singapore), Daniel Pinheiro (Portugal), Olga Remneva (Russia), Toni Sant (UK), Rejane Spitz (Brazil), Atau Tanaka (UK)

For more information: https://thirdspacenetwork.com/pandemic-encounters/

REGISTER & SAVE YOUR SPOT

Here’s more about the presentation partners,

The Third Space Network, created by Randall Packer, is an artist-driven Internet platform for staging creative dialogue, live performance and uncategorizable activisms: social empowerment through the act of becoming our own broadcast media.

Leonardo/The International Society for the Arts, Sciences and Technology (Leonardo/ISAST) is a nonprofit organization that serves the global network of distinguished scholars, artists, scientists, researchers and thinkers through our programs, which focus on interdisciplinary work, creative output and innovation.

Global LASER is a new series of networked events that bring together artists, scientists, and technologists in the creation of experimental forms of live Internet performance and creative dialogue.

Because I love the poster image for this event,

[downloaded from https://thirdspacenetwork.com/pandemic-encounters/]

Monitoring forest soundscapes for conservation and more about whale songs

I don’t understand why anyone would publicize science work featuring soundscapes without including an audio file. However, no one from Princeton University (US) phoned and asked for my advice :).

On the plus side, my whale story does have a sample audio file. However, I’m not sure if I can figure out how to embed it here.

Princeton and monitoring forests

In addition to a professor from Princeton University, the collaborators include the founder of an environmental news organization and someone who’s both a professor at the University of Queensland (Australia) and affiliated with the Nature Conservancy, making this one of the more unusual collaborations I’ve seen.

Moving on to the news, a January 4, 2019 Princeton University news release (also on EurekAlert but published on Jan. 3, 2019) by B. Rose Kelly announces research into monitoring forests,

Recordings of the sounds in tropical forests could unlock secrets about biodiversity and aid conservation efforts around the world, according to a perspective paper published in Science.

Compared to on-the-ground fieldwork, bioacoustics – recording entire soundscapes, including animal and human activity – is relatively inexpensive and produces powerful conservation insights. The result is troves of ecological data in a short amount of time.

Because these enormous datasets require robust computational power, the researchers argue that a global organization should be created to host an acoustic platform that produces on-the-fly analysis. Not only could the data be used for academic research, but it could also monitor conservation policies and strategies employed by companies around the world.
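
To get a feel for why the datasets are enormous, here’s a back-of-the-envelope storage estimate for a single continuously running recorder; the default parameters are my illustrative assumptions, not figures from the paper,

```python
def recording_size_gb(days, sample_rate=48000, bit_depth=16, channels=1):
    """Rough storage budget for uncompressed audio from one recorder.

    Defaults (48 kHz, 16-bit, mono) are typical for bioacoustic
    monitoring but are assumptions here, not the paper's numbers.
    """
    bytes_per_second = sample_rate * (bit_depth // 8) * channels
    return days * 86400 * bytes_per_second / 1e9  # seconds/day * B/s -> GB

# one recorder, recording continuously for a year
print(f"{recording_size_gb(365):.0f} GB")  # roughly 3,000 GB
```

At about 8.3 GB per recorder per day, a network of hundreds of recorders quickly reaches the scale where centralized, on-the-fly analysis makes sense.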

“Nongovernmental organizations and the conservation community need to be able to truly evaluate the effectiveness of conservation interventions. It’s in the interest of certification bodies to harness the developments in bioacoustics for better enforcement and effective measurements,” said Zuzana Burivalova, a postdoctoral research fellow in Professor David Wilcove’s lab at Princeton University’s Woodrow Wilson School of Public and International Affairs.

“Beyond measuring the effectiveness of conservation projects and monitoring compliance with forest protection commitments, networked bioacoustic monitoring systems could also generate a wealth of data for the scientific community,” said co-author Rhett Butler of the environmental news outlet Mongabay.

Burivalova and Butler co-authored the paper with Edward Game, who is based at the Nature Conservancy and the University of Queensland.

The researchers explain that while satellite imagery can be used to measure deforestation, it often fails to detect other subtle ecological degradations like overhunting, fires, or invasion by exotic species. Another common measure of biodiversity is field surveys, but those are often expensive, time consuming and cover limited ground.

Depending on the vegetation of the area and the animals living there, bioacoustics can record animal sounds and songs from several hundred meters away. Devices can be programmed to record at specific times or continuously if there is solar power or a cellular network signal. They can also record a range of taxonomic groups including birds, mammals, insects, and amphibians. To date, several multiyear recordings have already been completed.

Bioacoustics can help effectively enforce policy efforts as well. Many companies are engaged in zero-deforestation efforts, which means they are legally obligated to produce goods without clearing large forests. Bioacoustics can quickly and cheaply determine how much forest has been left standing.

“Companies are adopting zero deforestation commitments, but these policies do not always translate to protecting biodiversity due to hunting, habitat degradation, and sub-canopy fires. Bioacoustic monitoring could be used to augment satellites and other systems to monitor compliance with these commitments, support real-time action against prohibited activities like illegal logging and poaching, and potentially document habitat and species recovery,” Butler said.

Further, these recordings can be used to measure climate change effects. While the sounds might not be able to assess slow, gradual changes, they could help determine the influence of abrupt, quick differences to land caused by manufacturing or hunting, for example.

Burivalova and Game have worked together previously as you can see in a July 24, 2017 article by Justine E. Hausheer for a nature.org blog ‘Cool Green Science’ (Note: Links have been removed),

Morning in Musiamunat village. Across the river and up a steep mountainside, birds-of-paradise call raucously through the rainforest canopy, adding their calls to the nearly deafening insect chorus. Less than a kilometer away, small birds flit through a grove of banana trees, taro and pumpkin vines winding across the rough clearing. Here too, the cicadas howl.

To the ear, both garden and forest are awash with noise. But hidden within this dawn chorus are clues to the forest’s health.

New acoustic research from Nature Conservancy scientists indicates that forest fragmentation drives distinct changes in the dawn and dusk choruses of forests in Papua New Guinea. And this innovative method can help evaluate the conservation benefits of land-use planning efforts with local communities, reducing the cost of biodiversity monitoring in the rugged tropics.

“It’s one thing for a community to say that they cut fewer trees, or restricted hunting, or set aside a protected area, but it’s very difficult for small groups to demonstrate the effectiveness of those efforts,” says Eddie Game, The Nature Conservancy’s lead scientist for the Asia-Pacific region.

Aside from the ever-present logging and oil palm, another threat to PNG’s forests is subsistence agriculture, which feeds a majority of the population. In the late 1990s, The Nature Conservancy worked with 11 communities in the Adelbert Mountains to create land-use plans, dividing each community’s lands into different zones for hunting, gardening, extracting forest products, village development, and conservation. The goal was to limit degradation to specific areas of the forest, while keeping the rest intact.

But both communities and conservationists needed a way to evaluate their efforts, before the national government considered expanding the program beyond Madang province. So in July 2015, Game and two other scientists, Zuzana Burivalova and Timothy Boucher, spent two weeks gathering data in the Adelbert Mountains, a rugged lowland mountain range in Papua New Guinea’s Madang province.

Working with conservation rangers from Musiamunat, Yavera, and Iwarame communities, the research team tested an innovative method — acoustic sampling — to measure biodiversity across the community forests. Game and his team used small acoustic recorders placed throughout the forest to record 24-hours of sound from locations in each of the different land zones.

Soundscapes from healthy, biodiverse forests are more complex, so the scientists hoped that these recordings would show if parts of the community forests, like the conservation zones, were more biodiverse than others. “Acoustic recordings won’t pick up every species, but we don’t need that level of detail to know if a forest is healthy,” explains Boucher, a conservation geographer with the Conservancy.

Here’s a link to and a citation for the latest work from Burivalova and Game,

The sound of a tropical forest by Zuzana Burivalova, Edward T. Game, Rhett A. Butler. Science 04 Jan 2019: Vol. 363, Issue 6422, pp. 28-29 DOI: 10.1126/science.aav1902

This paper is behind a paywall. You can find out more about Mongabay and Rhett Butler in its Wikipedia entry.

***ETA July 18, 2019: Cara Cannon Byington, Associate Director, Science Communications for the Nature Conservancy emailed to say that a January 3, 2019 posting on the conservancy’s Cool Green Science Blog features audio files from the research published in ‘The sound of a tropical forest’. Scroll down about 75% of the way for the audio.***

Whale songs

Whales share songs when they meet and a January 8, 2019 Wildlife Conservation Society news release (also on EurekAlert) describes how that sharing takes place,

Singing humpback whales from different ocean basins seem to be picking up musical ideas from afar, and incorporating these new phrases and themes into the latest song, according to a newly published study in Royal Society Open Science that’s helping scientists better understand how whales learn and change their musical compositions.

The new research shows that two humpback whale populations in different ocean basins (the South Atlantic and Indian Oceans) in the Southern Hemisphere sing similar song types, but the amount of similarity differs across years. This suggests that males from these two populations come into contact at some point in the year to hear and learn songs from each other.

The study titled “Culturally transmitted song exchange between humpback whales (Megaptera novaeangliae) in the southeast Atlantic and southwest Indian Ocean basins” appears in the latest edition of the Royal Society Open Science journal. The authors are: Melinda L. Rekdahl, Carissa D. King, Tim Collins, and Howard Rosenbaum of WCS (Wildlife Conservation Society); Ellen C. Garland of the University of St. Andrews; Gabriella A. Carvajal of WCS and Stony Brook University; and Yvette Razafindrakoto of COSAP [Committee for the Management of the Protected Area of Bezà Mahafaly] and Madagascar National Parks.

“Song sharing between populations tends to happen more in the Northern Hemisphere where there are fewer physical barriers to movement of individuals between populations on the breeding grounds, where they do the majority of their singing. In some populations in the Southern Hemisphere song sharing appears to be more complex, with little song similarity within years but entire songs can spread to neighboring populations leading to song similarity across years,” said Dr. Melinda Rekdahl, marine conservation scientist for WCS’s Ocean Giants Program and lead author of the study. “Our study shows that this is not always the case in Southern Hemisphere populations, with similarities between both ocean basin songs occurring within years to different degrees over a 5-year period.”

The study authors examined humpback whale song recordings from both sides of the African continent–from animals off the coasts of Gabon and Madagascar respectively–and transcribed more than 1,500 individual sounds that were recorded between 2001-2005. Song similarity was quantified using statistical methods.
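
The news release doesn’t specify which statistical methods were used. One common approach in the whale song-culture literature is an edit distance over transcribed sequences of song units; the sketch below illustrates that general idea and is not the authors’ actual pipeline (the unit names are invented for the example),

```python
def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance between two
    # sequences of song units (works on any sequences of tokens).
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n]

def song_similarity(a, b):
    # Normalized similarity in [0, 1]: 1.0 means identical sequences.
    if not a and not b:
        return 1.0
    return 1 - edit_distance(a, b) / max(len(a), len(b))

# Hypothetical transcriptions of one theme from each population
gabon      = ["cry-woop", "moan", "trumpet"]
madagascar = ["cry", "woop", "moan", "trumpet"]
print(song_similarity(gabon, madagascar))  # 0.5
```

Comparing similarity scores across years is then a way to quantify how much song material two populations share at any given time.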

Male humpback whales are one of the animal kingdom’s most noteworthy singers, and individual animals sing complex compositions consisting of moans, cries, and other vocalizations called “song units.” Song units are composed into larger phrases, which are repeated to form “themes.” Different themes are produced in a sequence to form a song cycle that are then repeated for hours, or even days. For the most part, all males within the same population sing the same song type, and this population-wide song similarity is maintained despite continual evolution or change to the song leading to seasonal “hit songs.” Some song learning can occur between populations that are in close proximity and may be able to hear the other population’s song.

Over time, the researchers detected shared phrases and themes in both populations, with some years exhibiting more similarities than others. In the beginning of the study, whale populations in both locations shared five “themes.” One of the shared themes, however, had differences. Gabon’s version of Theme 1, the researchers found, consisted of a descending “cry-woop”, whereas the Madagascar singers split Theme 1 into two parts: a descending cry followed by a separate woop or “trumpet.”

Other differences soon emerged over time. By 2003, the song sung by whales in Gabon became more elaborate than their counterparts in Madagascar. In 2004, both population song types shared the same themes, with the whales in Gabon’s waters singing three additional themes. Interestingly, both whale groups had dropped the same two themes from the previous year’s song types. By 2005, songs being sung on both sides of Africa were largely similar, with individuals in both locations singing songs with the same themes and order. However, there were exceptions, including one whale that revived two discontinued themes from the previous year.

The study’s results stand in contrast to other research in which a song in one part of an ocean basin replaces or “revolutionizes” another population’s song preference. In this instance, the changes and degrees of similarity shared by humpbacks on both sides of Africa were more gradual and subtle.

“Studies such as this one are an important means of understanding connectivity between different whale populations and how they move between different seascapes,” said Dr. Howard Rosenbaum, Director of WCS’s Ocean Giants Program and one of the co-authors of the new paper. “Insights on how different populations interact with one another and the factors that drive the movements of these animals can lead to more effective plans for conservation.”

The humpback whale is one of the world’s best-studied marine mammal species, well known for its boisterous surface behavior and migrations stretching thousands of miles. The animal grows up to 50 feet in length and has been globally protected from commercial whaling since the 1960s. WCS has studied humpback whales since that time and–as the New York Zoological Society–played a key role in the discovery that humpback whales sing songs. The organization continues to study humpback whale populations around the world and right here in the waters of New York; research efforts on humpback and other whales in New York Bight are currently coordinated through the New York Aquarium’s New York Seascape program.

I’m not able to embed the audio file here but, for the curious, there is a portion of a humpback whale song from Gabon here at EurekAlert.

Here’s a link to and a citation for the research paper,

Culturally transmitted song exchange between humpback whales (Megaptera novaeangliae) in the southeast Atlantic and southwest Indian Ocean basins by Melinda L. Rekdahl, Ellen C. Garland, Gabriella A. Carvajal, Carissa D. King, Tim Collins, Yvette Razafindrakoto and Howard Rosenbaum. Royal Society Open Science, 21 November 2018, Volume 5, Issue 11. https://doi.org/10.1098/rsos.172305 Published: 28 November 2018

This is an open access paper.

Twinkle, Twinkle Little Star (song) could lead to better data storage

A March 16, 2015 news item on Nanowerk features research from the University of Illinois and the song ‘Twinkle, Twinkle Little Star’,

Researchers from the University of Illinois at Urbana-Champaign have demonstrated the first-ever recording of optically encoded audio onto a non-magnetic plasmonic nanostructure, opening the door to multiple uses in informational processing and archival storage.

“The chip’s dimensions are roughly equivalent to the thickness of human hair,” explained Kimani Toussaint, an associate professor of mechanical science and engineering, who led the research.

Specifically, the photographic film property exhibited by an array of novel gold, pillar-supported bowtie nanoantennas (pBNAs) – previously discovered by Toussaint’s group – was exploited to store sound and audio files. Compared with the conventional magnetic film for analog data storage, the storage capacity of pBNAs is around 5,600 times larger, indicating a vast array of potential storage uses.

The researchers have provided a visual image illustrating their work,

Nano piano concept: Arrays of gold, pillar-supported bowtie nanoantennas (bottom left) can be used to record distinct musical notes, as shown in the experimentally obtained dark-field microscopy images (bottom right). These particular notes were used to compose ‘Twinkle, Twinkle, Little Star.’ Courtesy of University of Illinois at Urbana-Champaign

A March 16, 2015 University of Illinois at Urbana-Champaign news release (also on EurekAlert), which originated the news item, describes the research in more detail (Note: Links have been removed),

To demonstrate its abilities to store sound and audio files, the researchers created a musical keyboard or “nano piano,” using the available notes to play the short song, “Twinkle, Twinkle, Little Star.”

“Data storage is one interesting area to think about,” Toussaint said. “For example, one can consider applying this type of nanotechnology to enhancing the niche, but still important, analog technology used in the area of archival storage such as using microfiche. In addition, our work holds potential for on-chip, plasmonic-based information processing.”

The researchers demonstrated that the pBNAs could be used to store sound information either as a temporally varying intensity waveform or a frequency varying intensity waveform. Eight basic musical notes, including middle C, D, and E, were stored on a pBNA chip and then retrieved and played back in a desired order to make a tune.
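
As a side note on the musical content, the equal-temperament frequencies for notes like middle C are straightforward to compute. The release names middle C, D, and E among the eight stored notes; the full octave below, and the tune’s opening phrase, are my illustrative reconstruction rather than detail from the paper,

```python
# Equal-temperament tuning: each semitone multiplies frequency by 2^(1/12),
# anchored at A4 = 440 Hz.
def note_freq(semitones_from_a4):
    return 440.0 * 2 ** (semitones_from_a4 / 12)

# The C major scale from middle C (C4) up to C5
NOTES = {name: note_freq(n) for name, n in
         [("C4", -9), ("D4", -7), ("E4", -5), ("F4", -4),
          ("G4", -2), ("A4", 0), ("B4", 2), ("C5", 3)]}

# Opening phrase of 'Twinkle, Twinkle, Little Star'
tune = ["C4", "C4", "G4", "G4", "A4", "A4", "G4"]
frequencies = [NOTES[n] for n in tune]
```

Middle C works out to about 261.63 Hz, which is why eight notes of this scale suffice to play the whole song.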

“A characteristic property of plasmonics is the spectrum,” said Hao Chen, a former postdoctoral researcher in Toussaint’s PROBE laboratory and the first author of the paper, “Plasmon-Assisted Audio Recording,” appearing in the Nature Publishing Group’s Scientific Reports. “Originating from a plasmon-induced thermal effect, well-controlled nanoscale morphological changes allow as much as a 100-nm spectral shift from the nanoantennas. By employing this spectral degree-of-freedom as an amplitude coordinate, the storage capacity can be improved. Moreover, although our audio recording focused on analog data storage, in principle it is still possible to transform to digital data storage by having each bowtie serve as a unit bit 1 or 0. By modifying the size of the bowtie, it’s feasible to further improve the storage capacity.”

The team previously demonstrated that pBNAs experience reduced thermal conduction in comparison to standard bowtie nanoantennas and can easily get hot when irradiated by low-powered laser light. Each bowtie antenna is approximately 250 nm across in dimensions, with each supported on 500-nm tall silicon dioxide posts. A consequence of this is that optical illumination results in subtle melting of the gold, and thus a change in the overall optical response. This shows up as a difference in contrast under white-light illumination.

“Our approach is analogous to the method of ‘optical sound,’ which was developed circa the 1920s as part of the effort to make ‘talking’ motion pictures,” the team said in its paper. “Although there were variations of this process, they all shared the same basic principle. An audio pickup, e.g., a microphone, electrically modulates a lamp source. Variations in the intensity of the light source are encoded on semi-transparent photographic film (e.g., as variation in area) as the film is spatially translated. Decoding this information is achieved by illuminating the film with the same light source and picking up the changes in the light transmission on an optical detector, which in turn may be connected to speakers. In the work that we present here, the pBNAs serve the role of the photographic film, which we can encode with audio information via direct laser writing in an optical microscope.”

In their approach, the researchers record audio signals by using a microscope to scan a sound-modulated laser beam directly over their nanostructures. Retrieval and subsequent playback are achieved by using the same microscope to image the recorded waveform onto a digital camera, where simple signal processing can be performed.
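In spirit, recording and retrieval here amount to quantizing an audio waveform into a finite set of intensity levels and then reading those levels back out. The sketch below illustrates only that principle; the level count and test tone are hypothetical and this is not the group's acquisition pipeline:

```python
import math

def record(signal, levels=256):
    """'Write' a waveform by quantizing each sample into one of a finite
    number of intensity levels (standing in for nanoantenna states)."""
    lo, hi = min(signal), max(signal)
    return [round((s - lo) / (hi - lo) * (levels - 1)) for s in signal]

def retrieve(stored, lo, hi, levels=256):
    """'Image' the stored intensity levels back into an audio waveform."""
    return [v / (levels - 1) * (hi - lo) + lo for v in stored]

t = [i / 8000.0 for i in range(2000)]
audio = [math.sin(2 * math.pi * 261.63 * x) for x in t]   # middle-C test tone
stored = record(audio)
recovered = retrieve(stored, min(audio), max(audio))
max_err = max(abs(r - a) for r, a in zip(recovered, audio))  # quantization only
```

With 256 levels the round-trip error is bounded by half a quantization step, which is why the played-back tune remains recognizable.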

Here’s a link to and a citation for the paper,

Plasmon-Assisted Audio Recording by Hao Chen, Abdul M. Bhuiya, Qing Ding, & Kimani C. Toussaint, Jr. Scientific Reports 5, Article number: 9125 doi:10.1038/srep09125 Published 16 March 2015

This is an open access paper and here is a sample recording courtesy of the researchers and the University of Illinois at Urbana-Champaign,

The human body as a musical instrument: performance at the University of British Columbia on April 10, 2014

It’s called The Bang! Festival of interactive music, with performances of one kind or another scheduled throughout the day on April 10, 2014 (12 pm: MUSC 320; 1:30 pm: grad work; 2 pm: research) and a finale featuring the Laptop Orchestra at 8 pm in Barnett Recital Hall at the University of British Columbia’s (UBC) School of Music on the Vancouver campus in Canada.

Here’s more about Bob Pritchard, professor of music, and the students who have put this programme together (from an April 7, 2014 UBC news release; Note: Links have been removed),

Pritchard [Bob Pritchard], a professor of music at the University of British Columbia, is using technologies that capture physical movement to transform the human body into a musical instrument.

Pritchard and the music and engineering students who make up the UBC Laptop Orchestra wanted to inject more human performance in digital music after attending one too many uninspiring laptop music sets. “Live electronic music can be a bit of an oxymoron,” says Pritchard, referring to artists gazing at their laptops and a heavy reliance on backing tracks.

“Emerging tools and techniques can help electronic musicians find more creative and engaging ways to present their work. What results is a richer experience, which can create a deeper, more emotional connection with your audience.”

The Laptop Orchestra, which will perform a free public concert on April 10, is an extension of a music technology course at UBC’s School of Music. Composed of 17 students from Arts, Science and Engineering, its members act as musicians, dancers, composers, programmers and hardware specialists. They create adventurous electroacoustic music using programmed and acoustic instruments, including harp, piano, clarinet and violin.

Despite its name, surprisingly few laptops are actually touched onstage. “That’s one of our rules,” says Pritchard, who is helping to launch UBC’s new minor degree in Applied Music Technology in September with Laptop Orchestra co-director Keith Hamel. “Avoid touching the laptop!”

Instead, students use body movements to trigger programmed synthetic instruments or modify the sound of their live instruments in real time. They strap motion sensors to their bodies and instruments, play wearable iPhone instruments, and swing Nintendo Wii Remotes or PlayStation Move controllers, while Kinect cameras from Microsoft’s Xbox track their movements.

“Adding movement to our creative process has been awesome,” says Kiran Bhumber, a fourth-year music student and clarinet player. The program helped attract her back to Vancouver after attending a performing arts high school in Toronto. “I really wanted to do something completely different. When I heard of the Laptop Orchestra, I knew it was perfect for me. I begged Bob to let me in.”

The Laptop Orchestra has partnered with UBC’s Dept. of Electrical and Computer Engineering (from the news release),

The engineers bring expertise in programming and wireless systems; the musicians bring their performance and composition chops, and write program code as well.

Besides creating their powerful music, the students have invented a series of interfaces and musical gadgets. The first is the app sensorUDP, which transforms musicians’ smartphones into motion sensors. Available in the Android app store and compatible with iPhones, it allows performers to layer up to eight programmable sounds and modify them by moving their phone.

Music student Pieteke MacMahon modified the app to create an iPhone Piano, which she plays on her wrist, thanks to a mount created by engineering classmates. As she moves her hands up, the piano notes go up in pitch. When she drops her hands, the sound gets lower, and a delay effect increases if her palm faces up. “Audiences love how intuitive it is,” says the composition major. “It creates music in a way that really makes sense to people, and it looks pretty cool onstage.”
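A phone-as-instrument pipeline of this sort boils down to two small steps: parse the incoming sensor packet, then map hand pose to synthesis parameters. The sketch below is a guess at how such a mapping could look; the packet layout, the note range, and the linear pitch mapping are assumptions for illustration, not sensorUDP's documented format or Ms MacMahon's actual patch:

```python
def parse_packet(data):
    """Parse an assumed comma-separated accelerometer packet,
    e.g. b'0.12,-0.98,9.81' -> [0.12, -0.98, 9.81]."""
    return [float(v) for v in data.decode("ascii").split(",")]

def hand_to_params(height, palm_up):
    """Map hand height (0.0 low .. 1.0 high) to a MIDI-style note number,
    and palm orientation (0.0 down .. 1.0 up) to a delay-effect mix."""
    low_note, high_note = 48, 84              # C3..C6, an assumed range
    note = low_note + round(height * (high_note - low_note))
    return note, max(0.0, min(1.0, palm_up))

def midi_to_hz(note):
    """Standard equal-temperament conversion (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)
```

Raising the hand raises the note number (and so the pitch); turning the palm up pushes the delay mix toward 1.0, matching the behaviour described above.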

Here’s a video of the iPhone Piano (aka PietekeIPhoneSensor) in action,

The members of the Laptop Orchestra have travelled to collaborate internationally (Note: Links have been removed),

Earlier this year, the ensemble’s unique music took them to Europe. The class spent 10 days this February in Belgium where they collaborated and performed in concert with researchers at the University of Mons, a leading institution for research on gesture-tracking technology.

The Laptop Orchestra’s trip was sponsored by UBC’s Go Global and Arts Research Abroad, which together send hundreds of students on international learning experiences each year.

In Belgium, the ensemble’s dancer Diana Brownie wore a body suit covered head-to-toe in motion sensors as part of a University of Mons research project on body movement. The researchers – one a former student of Pritchard’s – will use the suit’s data to help record and preserve cultural folk dances.

For anyone who needs directions, here’s a link to UBC’s Vancouver Campus Maps, Directions, & Tours webpage.

WHALE of a concert on the edge of Hudson Bay (northern Canada) and sounds of icebergs from Oregon State University

Both charming and confusing (to me), the WHALE project features two artists (or is it musicians?) singing to and with beluga whales using a homemade underwater sound system while they all float on or in Hudson Bay. There’s a July 10, 2013 news item about the project on the CBC (Canadian Broadcasting Corporation) news website,

What began as an interest in aquatic culture for Laura Magnusson and Kaoru Ryan Klatt has turned into a multi-year experimental project that brings art to the marine mammals.

Since 2011, Magnusson and Klatt have been taking a boat onto the Churchill River, which flows into Hudson Bay, with a home-made underwater sound system.

….

Last week, the pair began a 75-day expedition that involves travelling aboard a special “sculptural sea vessel” to “build a sustained but non-invasive presence to foster bonds between humans and whales,” according to the project’s website.

Ten other musicians and interdisciplinary artists are joining Klatt and Magnusson to perform new works they’ve created specifically for the whales.

The latest expedition will be the focus of Becoming Beluga, a feature film that Klatt is directing.

Magnusson and Klatt are also testing a high-tech “bionic whale suit” that would enable the wearer to swim and communicate like a beluga whale.

Klatt has produced a number of WHALE videos including this one (Note: This is not a slick production, nor were any of the others I viewed on YouTube),

In addition to not being slick, there’s a quirky quality to this project video that I find charming and interesting.

My curiosity aroused, I also visited Magnusson’s and Klatt’s WHALE website and found this project description,

WHALE is an interdisciplinary art group comprised of Winnipeg-based artists Kaoru Ryan Klatt and Laura Magnusson. Their vision is to expand art and culture beyond human boundaries to non-human beings. Since 2011, they have been traveling to the northern edge of Manitoba, Canada to forge connections with thousands of beluga whales. From a canoe on the Churchill River, they have collaborated with these whales through sound, movement, and performative action. Now, aboard the SSV Cetus – a specially crafted sculptural sea vessel – they will embark on a 75-day art expedition throughout the Churchill River estuary, working to build a sustained but non-invasive presence to foster bonds between humans and whales. This undertaking – Becoming Beluga – is the culmination of a three-year integrated arts project with the belugas of this region, taking place between July 2 and September 14, 2013.

While the word ‘artist’ suggests visual arts rather than musical arts, what I find a little more confounding is that this is not being described as an art/science or art/technology project, as these artists are clearly developing technology with their underwater sound system, sculptural sea vessel, and bionic whale suit. In any event, I wish them good luck with WHALE and their Becoming Beluga film.

In a somewhat related matter and for those interested in soundscapes and the ocean (in Antarctica), there is some research from Oregon State University which claims that melting icebergs make a huge din. From a July 11, 2013 news item on phys.org,

There is growing concern about how much noise humans generate in marine environments through shipping, oil exploration and other developments, but a new study has found that naturally occurring phenomena could potentially affect some ocean dwellers.

Nowhere is this concern greater than in the polar regions, where the effects of global warming often first manifest themselves. The breakup of ice sheets and the calving and grounding of icebergs can create enormous sound energy, scientists say. Now a new study has found that the mere drifting of an iceberg from near Antarctica to warmer ocean waters produces startling levels of noise.

The Oregon State University July 10, 2013 news release, which originated the news item, provides more detail (Note: A link has been removed),

A team led by Oregon State University (OSU) researchers used an array of hydrophones to track the sound produced by an iceberg through its life cycle, from its origin in the Weddell Sea to its eventual demise in the open ocean. The goal of the project was to measure baseline levels of this kind of naturally occurring sound in the ocean, so it can be compared to anthropogenic noises.

“During one hour-long period, we documented that the sound energy released by the iceberg disintegrating was equivalent to the sound that would be created by a few hundred supertankers over the same period,” said Robert Dziak, a marine geologist at OSU’s Hatfield Marine Science Center in Newport, Ore., and lead author on the study. [emphasis mine]

“This wasn’t from the iceberg scraping the bottom,” he added. “It was from its rapid disintegration as the berg melted and broke apart. We call the sounds ‘icequakes’ because the process and ensuing sounds are much like those produced by earthquakes.”
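The supertanker comparison reflects how incoherent sources combine: N equal, independent sources raise the received level by 10·log10(N) decibels over a single source. A quick back-of-the-envelope check (the figure of 300 ships here is just an example, not a number from the study):

```python
import math

def level_gain_db(n_sources):
    """Level increase (dB) of n equal, incoherent sources over one source."""
    return 10 * math.log10(n_sources)

# Two ships add about 3 dB; a few hundred add roughly 23-25 dB over one ship.
gain_300 = level_gain_db(300)   # about 24.8 dB
```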

I encourage anyone who’s interested to read the entire news release (apparently the researchers were getting images of their iceberg from the International Space Station) and/or the team’s published research paper,

Robert P. Dziak, Matthew J. Fowler, Haruyoshi Matsumoto, DelWayne R. Bohnenstiehl, Minkyu Park, Kyle Warren, and Won Sang Lee. 2013. Life and death sounds of Iceberg A53a. Oceanography 26(2), http://dx.doi.org/10.5670/oceanog.2013.20.

EVP (electronic voice phenomena), recording the dead, visual art, and the Rorschach Audio research project (2007 – 2012): two talks

The British Library Sound Archive (London, England) is featuring a June 28, 2013 lunchtime talk (Note: It is free and sold out as of June 24, 2013 2:30 pm PDT) according to a June 23, 2013 Disinformation PR (public relations) announcement (from the June 4, 2013 Rorschach Audio blog posting, which originated the announcement),

Writing in “Playback: The Bulletin of the British Library Sound Archive”, Toby Oakes observed that the archive “deals with the voices of the dead every day, but our subjects tend to have been alive at the time of recording”. “Mortality was no impediment” however, in the case of tapes recorded by parapsychologist Konstantin Raudive, who claimed that Galileo, Goethe and Hitler communicated with him through the medium of radio. Raudive was the most famous exponent of Electronic Voice Phenomena (EVP), as it is known, and the British Library holds a collection of 60 of his unedited tapes. Rather than dismissing the claims of EVP researchers out-of-hand, author Joe Banks demonstrates a number of highly entertaining audio-visual illusions, which show how the mind can misinterpret recordings of sound and of stray communications chatter, in a similar way to how viewers project imaginary images onto the random visual forms of the psychiatrist Hermann Rorschach’s famous ink-blot tests. The talk stresses the important role that intelligent guesswork plays in normal perception, and discusses descriptions of sound phenomena by Leonardo da Vinci, and the work of the BBC Monitoring Service, emphasizing the influence that wartime intelligence work with sound had on one of the most important works of visual arts theory every published. [sic]

[from the British Library Rorschach Audio event page: The talk stresses the important role that intelligent guesswork plays in normal perception, and discusses descriptions of sound phenomena by Leonardo da Vinci, and the work of the BBC Monitoring Service, emphasizing the influence that wartime intelligence work with sound had on one of the most important works of visual arts theory – Art & Illusion by (wartime radio monitor and post-war art historian) E.H. Gombrich.]

The talk starts at 12:30 (however the library is a bit of a labyrinth so arrive 10 minutes early to make sure you find the scriptorium on time). Admission is free and refreshments are provided. To attend please e-mail your name to summer-scholars@bl.uk.

“Rorschach Audio – Ghost Voices, Art, Illusions and Sonic Archives” [emphasis mine]
12:30 lunch-time, 28 June 2013
The British Library
96 Euston Road
London NW1 2DB

This talk is part of the British Library’s Summer Scholars programme –
http://www.bl.uk/whatson/events/event147624.html

There is a second chance at finding out about this project at a Café Scientifique in Leamington Spa, from the June 4, 2013 posting,

After the British Library talk, the next “Rorschach Audio” demonstration will be for Café Scientifique in Leamington Spa – upstairs at St Patrick’s Irish Club, Riverside Walk (off Adelaide Road), Leamington CV32 5AH, 7pm, Monday 15 July 2013…

http://www.cafescientifique.org/index.php?option=com_content&view=article&id=221:leamington-spa

I like the Café Scientifique in Leamington Spa description of the July 15, 2013 event,

Monday 15th July 2013

Electronic Voice Phenomena: ghost voice recordings and illusions of science

Joe Banks

Electronic Voice Phenomena (EVP) refers to a movement – not unlike the UFO scene – whose supporters contend that misheard recordings of stray communications and radio chatter constitute scientific proof of the existence of ghosts. Rather than dismissing ghost-voice recordings out of hand, Joe will show how EVP researchers misunderstand the mind’s capacity to interpret sound, similar to the way we see illusory images in the random visual forms of the famous Rorschach ink-blot tests. Joe will demonstrate the formation of such perceptions using a number of entertaining and sometimes bizarre audio-visual illusions.

Joe Banks is a former Honorary Visiting Fellow in the School of Informatics at City University, London, and former AHRC-sponsored Research Fellow in the Department of Computing at Goldsmiths College and the Department of English, Linguistics & Cultural Studies at The University of Westminster. One of his “Rorschach Audio” research papers was published in a scholarly journal by The MIT Press, his recently-published book Rorschach Audio was featured on BBC Radio 4 and he has given talks about “Rorschach Audio” at the London Science Museum’s Dana Centre and the British Library.

Joe lives in London, near the set of traffic lights that inspired physicist Leo Szilard to conceive the theory of the thermonuclear chain reaction.

For those of us who can’t get to the British Library or Leamington Spa, here’s a video featuring the Rorschach Audio project, from Joe Banks’s webpage on the Goldsmiths College website,


I think it might be necessary to attend the talk in order to make sense of this video, although perhaps you’ll find this image, included with the publicity, helpful,

Rorschach Audio visual image [downloaded from http://rorschachaudio.wordpress.com/2013/06/04/british-library-sonic-archives/]


Dem bones, dem bones, dem dry bones

Making sounds with bones—but not as you might imagine.

Image from slideshow of Transjuicer exhibit in Science Gallery, Dublin, 2011 and John Curtin Gallery, Perth 2010

Christopher Mims in his Dec. 27, 2011 (?) article for Fast Company explains what artist Boo Chapple is doing with her Transjuicer installation of speakers made from bone tissue,

Turned on its head, bone’s response to physical stress can be used to produce music—or at least musical tones. That’s what artist Boo Chapple discovered during the course of a year-long collaboration at the University of Western Australia’s SymbioticA lab, the only research facility in the world devoted to providing access to wet labs to artists and artistically minded researchers.

When Chapple began this project, she knew that extensive scientific literature suggested bone had what are known as piezoelectric properties. Basically, when a piezoelectric material is bent, compressed, or otherwise physically stressed, it generates an electric charge. Conversely, applying an electric charge to a piezoelectric material can change its shape. This has made piezoelectrics the backbone of countless environmental sensors and tiny actuators.

Poring over this literature, Chapple realized that applying a current to bone at just the right frequency should make it vibrate like the diaphragm in an audio speaker. And because bone retains its piezoelectric properties even when it’s no longer living, it should be fairly straightforward to transform any old bone into the world’s most outré audio component.

Because Chapple is an artist and not a technologist, her goal wasn’t to pursue this technique until it yielded a new product. Rather, the point was to accomplish what all good art can: “making strange” otherwise familiar objects.
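The converse piezoelectric effect described above is linear at small signals: displacement is roughly the piezoelectric coefficient times the applied voltage. A toy estimate follows, using a rough literature-scale coefficient for bone (about a tenth of quartz's, an assumption for illustration rather than a measurement from the installation):

```python
def piezo_displacement_pm(d_pm_per_volt, volts):
    """Converse piezoelectric effect at small signal: displacement ~ d * V.
    A coefficient of 1 pC/N is numerically 1 pm/V."""
    return d_pm_per_volt * volts

# Assumed rough value for bone, about a tenth of quartz's ~2.3 pm/V:
BONE_D_PM_PER_V = 0.2
amplitude_pm = piezo_displacement_pm(BONE_D_PM_PER_V, 10.0)   # ~2 pm at 10 V
```

On these assumed numbers, even tens of volts move the surface only picometres, which hints at how delicate a bone speaker's output must be.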

I first heard about the SymbioticA lab when they showed their Fish & Chips project (the report I’ve linked to is undated) at the 2001 Ars Electronica festival in Linz, Austria. I never did get to see the performance (fish neurons grown on silicon chips and hooked up to software and musical instruments) but their work remains a source of great interest to me. (I last mentioned SymbioticA in my July 5, 2011 posting where they were scheduled for the same session that I was, at the 2011 ISEA conference in Istanbul.)

Here’s a bit more about the SymbioticA lab at the University of Western Australia (from their home page),

SymbioticA is a research facility dedicated to artistic inquiry into knowledge and technology in the life sciences.

Our research embodies:

  • identifying and developing new materials and subjects for artistic manipulation
  • researching strategies and implications of presenting living-art in different contexts
  • developing technologies and protocols as artistic tool kits.

Having access to scientific laboratories and tools, SymbioticA is in a unique position to offer these resources for artistic research. Therefore, SymbioticA encourages and favours research projects that involve hands-on development of technical skills and the use of scientific tools.

The research undertaken at SymbioticA is speculative in nature. SymbioticA strives to support non-utilitarian, curiosity based and philosophically motivated research.

Boo Chapple, a resident at the SymbioticA Lab, had this to say about her installation, Transjuicer, and science when it was at Dublin’s Science Gallery (excerpted from the Visceral Interview),

Do you think that work like yours helps to open up science to public discussion and debate; and does this interest you?

I’m not sure that Transjuicer is so much about science as it is about belief, the economy of human-animal relations, and the politics of material transformation. These are all things that are inherent to the practice of science but perhaps not what one might think of when one thinks of public debate around particular scientific discoveries, or technologies.

While I am interested in the philosophical parameters of these debates, I do not see my art practice as an instrument of communication in this respect, nor is Transjuicer engaged with any hot topics of the moment, or designed in such a way as to reveal the technical processes that were employed in making the bone audio speakers.

The work being done at the SymbioticA lab is provocative in the best sense, i.e., meant to provoke thought and discussion.