Tag Archives: acoustics

When the rocks sing “I got rhythm”

George Gershwin, along with his brother Ira, wrote jazz standards such as “I got rhythm” in 1930 and, before that, “Fascinating rhythm” in 1924; both seem à propos in relation to this October 9, 2023 news item on phys.org,

If you could sink through the Earth’s crust, you might hear, with a carefully tuned ear, a cacophony of booms and crackles along the way. The fissures, pores, and defects running through rocks are like strings that resonate when pressed and stressed. And as a team of MIT geologists has found, the rhythm and pace of these sounds can tell you something about the depth and strength of the rocks around you.

The fissures and pores running through rocks, from the Earth’s crust to the liquid mantle, are like channels and cavities through which sound can resonate. Credit: iStock [downloaded from https://news.mit.edu/2023/boom-crackle-pop-earth-crust-sounds-1009]

An October 9, 2023 Massachusetts Institute of Technology news release (also on EurekAlert) by Jennifer Chu, which originated the news item, delves (word play alert) down into the material. Note: A link has been removed,

“If you were listening to the rocks, they would be singing at higher and higher pitches, the deeper you go,” says MIT geologist Matěj Peč. 

Peč and his colleagues are listening to rocks, to see whether any acoustic patterns, or “fingerprints” emerge when subjected to various pressures. In lab studies, they have now shown that samples of marble, when subjected to low pressures, emit low-pitched “booms,” while at higher pressures, the rocks generate an ‘avalanche’ of higher-pitched crackles. 

Peč says these acoustic patterns in rocks can help scientists estimate the types of cracks, fissures, and other defects that the Earth’s crust experiences with depth, which they can then use to identify unstable regions below the surface, where there is potential for earthquakes or eruptions. The team’s results, published in the Proceedings of the National Academy of Sciences, could also help inform surveyors’ efforts to drill for renewable, geothermal energy. 

“If we want to tap these very hot geothermal sources, we will have to learn how to drill into rocks that are in this mixed-mode condition, where they are not purely brittle, but also flow a bit,” says Peč, who is an assistant professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “But overall, this is fundamental science that can help us understand where the lithosphere is strongest.” 

Peč’s collaborators at MIT are lead author and research scientist Hoagy O. Ghaffari, technical associate Ulrich Mok, graduate student Hilary Chang, and professor emeritus of geophysics Brian Evans. Tushar Mittal, co-author and former EAPS postdoc, is now an assistant professor at Penn State University.

Fracture and flow

The Earth’s crust is often compared to the skin of an apple. At its thickest, the crust can be 70 kilometers deep — a tiny fraction of the globe’s total, 12,700-kilometer diameter. And yet, the rocks that make up the planet’s thin peel vary greatly in their strength and stability. Geologists infer that rocks near the surface are brittle and fracture easily, compared to rocks at greater depths, where immense pressures, and heat from the core, can make rocks flow. 

The fact that rocks are brittle at the surface and more ductile at depth implies there must be an in-between — a phase in which rocks transition from one to the other, and may have properties of both, able to fracture like granite, and flow like honey. This “brittle-to-ductile transition” is not well understood, though geologists believe it may be where rocks are at their strongest within the crust. 

“This transition state of partly flowing, partly fracturing, is really important, because that’s where we think the peak of the lithosphere’s strength is and where the largest earthquakes nucleate,” Peč says. “But we don’t have a good handle on this type of mixed-mode behavior.”

He and his colleagues are studying how the strength and stability of rocks — whether brittle, ductile, or somewhere in between — vary with a rock’s microscopic defects. The size, density, and distribution of defects such as microscopic cracks, fissures, and pores can shape how brittle or ductile a rock is. 

But measuring the microscopic defects in rocks, under conditions that simulate the Earth’s various pressures and depths, is no trivial task. There is, for instance, no visual-imaging technique that allows scientists to see inside rocks to map their microscopic imperfections. So the team turned to ultrasound, and the idea that any sound wave traveling through a rock should bounce, vibrate, and reflect off microscopic cracks and crevices in specific ways that reveal something about the pattern of those defects. 

All these defects also generate their own sounds when they move under stress, so both actively sounding through the rock and listening to it should yield a great deal of information. The team found that the idea should work with ultrasound waves at megahertz frequencies.

“This kind of ultrasound method is analogous to what seismologists do in nature, but at much higher frequencies,” Peč explains. “This helps us to understand the physics that occur at microscopic scales, during the deformation of these rocks.” 
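For a back-of-the-envelope sense of why frequency matters (my own illustration, not from the news release; the wave speed is a textbook ballpark for marble, not a figure from the study): the probing wavelength shrinks as frequency rises, so megahertz pulses are sensitive to much finer structure than the hertz-range waves seismologists work with.

```python
# Back-of-the-envelope: ultrasound wavelength in marble.
# The assumed P-wave speed (~6000 m/s) is a textbook ballpark, not a value from the study.
v_p = 6000.0  # m/s, assumed compressional wave speed in marble

def wavelength_mm(freq_hz: float) -> float:
    """Wavelength (mm) of a wave of the given frequency travelling at v_p."""
    return v_p / freq_hz * 1000.0

for f in (1e6, 5e6, 10e6):  # 1, 5, 10 MHz
    print(f"{f/1e6:g} MHz -> {wavelength_mm(f):.2f} mm")
```

Features much smaller than these millimetre-scale wavelengths still scatter the pulse, which is how sub-wavelength cracks and pores leave their mark on the signal.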

A rock in a hard place

In their experiments, the team tested cylinders of Carrara marble. 

“It’s the same material as what Michelangelo’s David is made from,” Peč notes. “It’s a very well-characterized material, and we know exactly what it should be doing.”

The team placed each marble cylinder in a vice-like apparatus made from pistons of aluminum, zirconium, and steel, which together can generate extreme stresses. They placed the vice in a pressurized chamber, then subjected each cylinder to pressures similar to what rocks experience throughout the Earth’s crust.

As they slowly crushed each rock, the team sent pulses of ultrasound through the top of the sample, and recorded the acoustic pattern that exited through the bottom. When the sensors were not pulsing, they were listening to any naturally occurring acoustic emissions.

They found that at the lower end of the pressure range, where rocks are brittle, the marble indeed formed sudden fractures in response, and the sound waves resembled large, low-frequency booms. At the highest pressures, where rocks are more ductile, the acoustic waves resembled a higher-pitched crackling. The team believes this crackling was produced by microscopic defects called dislocations that then spread and flow like an avalanche. 

“For the first time, we have recorded the ‘noises’ that rocks make when they are deformed across this brittle-to-ductile transition, and we link these noises to the individual microscopic defects that cause them,” Peč says. “We found that these defects massively change their size and propagation velocity as they cross this transition. It’s more complicated than people had thought.”

The team’s characterizations of rocks and their defects at various pressures can help scientists estimate how the Earth’s crust will behave at various depths, such as how rocks might fracture in an earthquake, or flow in an eruption.    

“When rocks are partly fracturing and partly flowing, how does that feed back into the earthquake cycle? And how does that affect the movement of magma through a network of rocks?” Peč says. “Those are larger scale questions that can be tackled with research like this.”

This research was supported, in part, by the National Science Foundation.

Here’s a link to and a citation for the paper,

Microscopic defect dynamics during a brittle-to-ductile transition by Hoagy O. Ghaffari, Matěj Peč, Tushar Mittal, Ulrich Mok, Hilary Chang, and Brian Evans. Proceedings of the National Academy of Sciences 120 (42) e2305667120. DOI: https://doi.org/10.1073/pnas.2305667120 Published October 9, 2023

This paper is behind a paywall.

Insect-inspired microphones

I was hoping that there would be some insect audio files but this research is more about their role as inspiration for a new type of microphone than the sounds they make themselves. From a May 10, 2023 Acoustical Society of America news release (also on EurekAlert),

What can an insect hear? Surprisingly, quite a lot. Though small and simple, their hearing systems are highly efficient. For example, with a membrane only 2 millimeters across, the desert locust can decompose frequencies comparable to human capability. By understanding how insects perceive sound and using 3D-printing technology to create custom materials, it is possible to develop miniature, bio-inspired microphones.

The displacement of the membrane of the wax moth Achroia grisella, one of the key sources of inspiration for designing miniature, bio-inspired microphones. Credit: Andrew Reid

Andrew Reid of the University of Strathclyde in the U.K. will present his work creating such microphones, which can autonomously collect acoustic data with little power consumption. His presentation, “Unnatural hearing — 3D printing functional polymers as a path to bio-inspired microphone design,” will take place Wednesday, May 10 [2023], at 10:05 a.m. Eastern U.S. in the Northwestern/Ohio State room, as part of the 184th Meeting of the Acoustical Society of America running May 8-12 at the Chicago Marriott Downtown Magnificent Mile Hotel.

“Insect ears are ideal templates for lowering energy and data transmission costs, reducing the size of the sensors, and removing data processing,” said Reid.

Reid’s team takes inspiration from insect ears in multiple ways. On the chemical and structural level, the researchers use 3D-printing technology to fabricate custom materials that mimic insect membranes. These synthetic membranes are highly sensitive and efficient acoustic sensors. Traditional, silicon-based attempts at bio-inspired microphones lack the flexibility and customization that 3D printing makes possible.

“In images, our microphone looks like any other microphone. The mechanical element is a simple diaphragm, perhaps in a slightly unusual ellipsoid or rectangular shape,” Reid said. “The interesting bits are happening on the microscale, with small variations in thickness and porosity, and on the nanoscale, with variations in material properties such as the compliance and density of the material.”

More than just the material, the entire data collection process is inspired by biological systems. Unlike traditional microphones that collect a range of information, these microphones are designed to detect a specific signal. This streamlined process is similar to how nerve endings detect and transmit signals. The specialization of the sensor enables it to quickly discern triggers without consuming a lot of energy or requiring supervision.

The bio-inspired sensors, with their small size, autonomous function, and low energy consumption, are ideal for applications that are hazardous or hard to reach, including locations embedded in a structure or within the human body.

Bio-inspired 3D-printing techniques can be applied to solve many other challenges, including working on blood-brain barrier organoids or ultrasound structural monitoring.

Here’s a link to and a citation for the paper,

Unnatural hearing—3D printing functional polymers as a path to bio-inspired microphone design by Andrew Reid. J Acoust Soc Am 153, A195 (2023) or JASA (Journal of the Acoustical Society of America) Volume 153, Issue 3_supplement, March 2023 DOI: https://doi.org/10.1121/10.0018636

You will find the abstract, but I wish you good luck with finding the paper online; I wasn’t able to and am guessing it’s available in print only.

A fish baying at the moon?

It seems to be GLUBS time again (GLUBS being the Global Library of Underwater Biological Sounds). In fact it’s an altogether acoustical time for the ocean. First, a mystery fish,

That sounds a bit like a trumpet to me. (I last wrote about GLUBS in a March 4, 2022 posting where it was included under the ‘Marine sound libraries’ subhead.)

The latest about GLUBS and aquatic sounds can be found in an April 26, 2023 Rockefeller University news release on EurekAlert, Note 1: I don’t usually include the heads but I quite like this one and even stole part of it for this posting; Note 2: There probably should have been more than one news release; Note 3: For anyone who doesn’t have time to read the entire news release, I have a link immediately following the news release to an informative and brief article about the work,

Do fish bay at the moon? Can their odd songs identify Hawaiian mystery fish? Eavesdropping scientists progress in recording, understanding ocean soundscapes

Using hydrophones to eavesdrop on a reef off the coast of Goa, India, researchers have helped advance a new low-cost way to monitor changes in the world’s murky marine environments.

Reporting their results in the Journal of the Acoustical Society of America (JASA), the scientists recorded the duration and timing of mating and feeding sounds – songs, croaks, trumpets and drums – of 21 of the world’s noise-making ocean species.

With artificial intelligence and other pioneering techniques to discern the calls of marine life, they recorded and identified:

Some species within the underwater community work the early shift and ruckus from 3 am to 1:45 pm; others work the late shift and ruckus from 2 pm to 2:45 am, while the plankton predators were “strongly influenced by the moon.”

Also registered: the degree of difference in the abundance of marine life before and after a monsoon.

The paper concludes that hydrophones are a powerful tool and “overall classification performance (89%) is helpful in the real-time monitoring of the fish stocks in the ecosystem.”
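For readers curious where a single “overall classification performance” number like 89% comes from: it is typically the fraction of calls assigned to the correct class, i.e. the diagonal of a confusion matrix divided by its total. This toy calculation (with invented counts for hypothetical call types, not the paper’s data) shows the arithmetic:

```python
import numpy as np

# Toy confusion matrix for three hypothetical call types
# (rows = true class, columns = predicted class).
# The counts are invented for illustration; they are not the study's data.
cm = np.array([
    [45,  3,  2],
    [ 4, 40,  6],
    [ 1,  2, 47],
])

# Overall accuracy = correctly classified calls / all calls.
overall_accuracy = np.trace(cm) / cm.sum()
print(f"overall accuracy = {overall_accuracy:.0%}")  # prints "overall accuracy = 88%"
```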

The team, including Bishwajit Chakraborty, a leader of the International Quiet Ocean Experiment (IQOE), benefitted from archived recordings of marine species against which they could match what they heard, including:

Also captured was a “buzz” call of unknown origin (https://bit.ly/3GZdRSI), one of the oceans’ countless marine life mysteries.

With a contribution to the International Quiet Ocean Experiment, the research will be discussed at an IQOE meeting in Woods Hole, MA, USA, 26-27 April [2023].

Advancing the Global Library of Underwater Biological Sounds (GLUBS)

That event will be followed April 28-29 by a meeting of partners in the new Global Library of Underwater Biological Sounds (GLUBS), a major legacy of the decade-long IQOE, ending in 2025.

GLUBS, conceived in late 2021 and currently under development, is designed as an open-access online platform to help collate global information and to broaden and standardize scientific and community knowledge of underwater soundscapes and their contributing sources.

It will help build short snippets and snapshots (minutes, hours, days long recordings) of biological, anthropogenic, and geophysical marine sounds into full-scale, tell-tale underwater baseline soundscapes.

Especially notable among many applications of insights from GLUBS information: the ability to detect in hard-to-see underwater environments and habitats how the distribution and behavior of marine life responds to increasing pressure from climate change, fishing, resource development, plastic, anthropogenic noise and other pollutants.

“Passive acoustic monitoring (PAM) is an effective technique for sampling aquatic systems that is particularly useful in deep, dark, turbid, and rapidly changing or remote locations,” says Miles Parsons of the Australian Institute of Marine Science and a leader of GLUBS.

He and colleagues outline two primary targets:

  • Produce and maintain a list of all aquatic species confirmed or anticipated to produce sound underwater;
  • Promote the reporting of sounds from unknown sources

Odd songs of Hawaii’s mystery fish

In this latter pursuit, GLUBS will also help reveal species unknown to science as yet and contribute to their eventual identification.

For example, newly added to the growing global collection of marine sounds are recent recordings from Hawaii, featuring the baffling

now part of an entire YouTube channel (https://bit.ly/3H5Ly54) dedicated to marine life sounds in Hawaii and elsewhere (e.g. this “complete and total mystery from the Florida Keys”: https://bit.ly/41w1Xbc) (Annie Innes-Gold, Hawai’i Institute of Marine Biology; processed by Jill Munger, Conservation Metrics, Inc.)

Says Dr. Parsons: “Unidentified sounds can provide valuable information on the richness of the soundscape, the acoustic communities that contribute to it and behavioral interactions among acoustic groups. However, unknown, cryptic and rare sounds are rarely target signals for research and monitoring projects and are, therefore, largely unreported.”

The many uses of underwater sound

Of the roughly 250,000 known marine species, scientists think all fully-aquatic marine mammals (~146, including sub-species) emit sounds, along with at least 100 invertebrates, 1,000 of the world’s ~35,000 known fish species, and likely many thousands more.

GLUBS aims to help delineate essential fish habitat and estimate biomass of a spawning aggregation of a commercially or recreationally important soniferous species.

In one scenario of its many uses, a one-year, calibrated recording can provide a proxy for the timing, location and, under certain circumstances, numbers of ‘calling’ fishes, and how these change throughout a spawning season.

It will also help evaluate the degradation and recovery of a coral reef.

GLUBS researchers envision, for example, collecting recordings from a coral reef that experienced a cyclone or other extreme weather event, followed by widespread bleaching. Throughout its restoration, GLUBS audio data would be matched with and augment a visual census of the fish assemblage at multiple timepoints.

Oil and gas, wind power and other offshore industries will also benefit from GLUBS’ timely information on the possible harms or benefits of their activities.

Other IQOE legacies include

  • Manta (bitbucket.org/CLO-BRP/manta-wiki/wiki/Home), a mechanism created by world experts from academia, industry, and government to help standardize ocean sound recording data, facilitating its comparability, pooling and visualization.
  • OPUS, an Open Portal to Underwater Sound being tested at Alfred Wegener Institute in Bremerhaven, Germany to promote the use of acoustic data collected worldwide, providing easy access to MANTA-processed data, and
  • The first comprehensive database and map of the world’s 200+ known hydrophones recording for ecological purposes 

Marine sounds and COVID-19

The IQOE’s early ambition of humanity’s maritime noise being minimized for a day or week was unexpectedly met in spades when the COVID-19 pandemic began.     

New IQOE research to be considered at the April meeting includes a paper, Impact of the COVID‑19 pandemic on levels of deep‑ocean acoustic noise (https://bit.ly/3KZTaIt) documenting a pandemic-related drop of 1 to 3 dB even in the depths of the abyss. With a 3 dB decrease, sound energy is halved.
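The “halved” claim follows directly from the decibel definition: a level change of ΔL dB corresponds to a power (energy) ratio of 10^(ΔL/10). A quick check of the arithmetic (standard acoustics, not something taken from the paper):

```python
def power_ratio(delta_db: float) -> float:
    """Power/energy ratio corresponding to a level change of delta_db decibels."""
    return 10 ** (delta_db / 10)

print(power_ratio(-3))  # ~0.501: a 3 dB drop roughly halves the sound energy
print(power_ratio(-1))  # ~0.794: even a 1 dB drop removes about a fifth
```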

Virus control measures led to “sudden and sometimes dramatic reductions in human activity in sectors such as transport, industry, energy, tourism, and construction,” with some of the greatest reductions from March to June 2020 – a drop of up to 13% in container ship traffic and up to 42% in passenger ships.

Other IQOE accomplishments include achieving recognition of ocean sound as an Essential Ocean Variable (EOV) within the Global Ocean Observing System, underlining its helpfulness in monitoring 

  • climate change (the extent and breakup of sea ice; the frequency and intensity of wind, waves and rain)
  • ocean health (biodiversity assessments: monitoring the distribution and abundance of sound-producing species)
  • impacts of human activities on wildlife, and
  • nuclear explosions, foreign/illegal/threatening vessels, human activities in protected areas, and underwater earthquakes that can generate tsunamis

The Partnership for Observation of the Global Ocean (POGO) funded an IQOE Working Group in 2016, which quickly identified the lack of ocean sound as a variable measured by ocean observing systems. This group developed specifications for an Ocean Sound Essential Ocean Variable (EOV) by 2018, which was approved by the Global Ocean Observing System in 2021. IQOE has since developed the Ocean Sound EOV Implementation Plan, reviewed in 2022 and ready for public debut at IQOE’s meeting April 26.

One of IQOE’s originators, Jesse Ausubel of The Rockefeller University’s Programme for the Human Environment, says the programme has drawn attention to the absence of publicly available time series of sound on ecologically important frequencies throughout the global ocean.

“We need to listen more in the blue symphony halls. Animal sounds are behavior, and we need to record and understand the sounds, if we want to know the status of ocean life,” he says.

The program “has provided a platform for the international passive acoustics community to grow stronger and advocate for inclusion of acoustic measurements in national, regional, and global ocean observing systems,” says Prof. Peter Tyack of the University of St Andrews, who, with Steven Simpson, guides the IQOE International Scientific Steering Committee.

“The ocean acoustics and bioacoustics communities had no experience in working together globally, and coverage is certainly not global; there are many gaps. IQOE has begun to help these communities work together globally, and there is still progress to be made in networking and in expanding the deployment of hydrophones,” adds Prof. Ausubel.

A description of the project’s history and evaluation to date is available at https://bit.ly/3H7FCbN.

Encouraging greater worldwide use of hydrophones

According to Dr. Parsons, “hydrophones are now being deployed in more locations, more often, by more people, than ever before.”

To celebrate that, and to mark World Oceans Day, June 8 [2023], GLUBS recently put out a call to hydrophone operators to share marine life recordings made from 7 to 9 June, so far receiving interest from 124 hydrophone operators in 62 organizations from 29 countries and counting. The hydrophones will be retrieved over the following months with the full dataset expected sometime in 2024.

They also plan to make World Oceans Passive Acoustic Monitoring (WOPAM) Day an annual event – a global collaborative study of aquatic soundscapes, salt, brackish or freshwater – the marine world’s answer to the U.S. Audubon Society’s 123-year-old Christmas Bird Count.

Interested researchers with hydrophones [emphasis mine] already planned [sic] to be in the water on June 8 [2023] are invited to contact Miles Parsons (m.parsons@aims.gov.au) or Steve Simpson (s.simpson@bristol.ac.uk).

Becky Ferreira has written an April 26, 2023 article for Motherboard that provides more insight into the work being done offshore in Goa and elsewhere,

To better understand the rich reef ecosystems of Goa, a team of researchers at the Indian Council of Scientific and Industrial Research’s National Institute of Oceanography (CSIR-NIO) placed a hydrophone near Grande Island at a depth of about 65 feet. Over the course of several days, the instrument captured hundreds of recordings of the choruses of “soniferous” (sound-making) fish, the high-frequency noises of shrimp, and the rumblings of boats passing near the area.

“Our research, for the longest time, predominantly involved active acoustics systems in understanding habitats (bottom roughness, etc., using multibeam sonar),” said Bishwajit Chakraborty, a marine scientist at CSIR-NIO who co-authored the study, in an email to Motherboard. “By using active sonar systems, we add sound signals to water media which severely affects marine life.” 

Here’s a link to and a citation for the paper mentioned at the beginning of the news release,

Biodiversity assessment using passive acoustic recordings from off-reef location—Unsupervised learning to classify fish vocalization by Vasudev P. Mahale, Kranthikumar Chanda, Bishwajit Chakraborty, Tejas Salkar, and G. B. Sreekanth. Journal of the Acoustical Society of America, Volume 153, Issue 3, March 2023 [alternate: J Acoust Soc Am 153, 1534–1553 (2023)] DOI: https://doi.org/10.1121/10.0017248

This paper appears to be open access.

And, one more time,

Interested researchers with hydrophones [emphasis mine] already planned [sic] to be in the water on June 8 [2023] are invited to contact Miles Parsons (m.parsons@aims.gov.au) or Steve Simpson (s.simpson@bristol.ac.uk).

It’s not just the sound, it’s the vibration too (a red-eyed treefrog calls for a mate)

This is an exceptionally pretty image of a frog that sometimes seems to be everywhere,

Credit: Pixabay/CC0 Public Domain [downloaded from https://phys.org/news/2022-09-red-eyed-treefrogs-vibration-aggression.html]

I usually try to include one or two postings a year about frogs on this blog in honour of its name. The first one in 2022 was titled, “Got a photo of a frog being bitten by flies? There’s a research study …” (a June 24, 2022 post).

This year (2023; I’m a bit late), I have a September 14, 2022 news item on phys.org focused on mating calls, aggression, and vibrations,

One would be hard-pressed to take a walk outside without hearing the sounds of calling animals. During the day, birds chatter back and forth, and as night falls, frogs and insects call to defend territories and to attract potential mates. For several decades, biologists have studied these calls with great interest, taking away major lessons about the evolution of animal displays and the processes of speciation. But there may be a lot more to animal calls than we have realized.

A new study appearing in the Journal of Experimental Biology by Dr. Michael Caldwell and student researchers at Gettysburg College demonstrates that the calls of red-eyed treefrogs don’t just send sounds through the air, but also send vibrations through the plants. What’s more, these plant vibrations change the message that other frogs receive in major ways. The researchers played sound and vibrations produced by calling males to other red-eyed treefrogs surrounding a rainforest pond in Panama. They found that female frogs are over twice as likely to choose the calls of a potential mate if those calls include both sound and vibrations, and male frogs are far more aggressive and show a greater range of aggressive displays when they can feel the vibrations generated by the calls of their rivals.

“This really changes how we look at things,” says Caldwell. “If we want to know how a call functions, we can’t just look at the sound it makes anymore. We need to at least consider the roles that its associated vibrations play in getting the message across.”

A September 14, 2022 Gettysburg College news release, which originated the news item, delves further into vibrations,

Because vibrations are unavoidably excited in any surface a calling animal is touching, the authors of the new study suggest it is likely that many more species communicate using similar ‘bimodal acoustic calls’ that function simultaneously through both airborne sound and plant-, ground-, or water-borne vibrations. “There is zero reason to suspect that bimodal acoustic calls are limited to red-eyed treefrogs. In fact, we know they aren’t,” says Caldwell, who points out that researchers at UCLA [University of California at Los Angeles] and the University of Texas are reporting similar results with distantly related frog species, and that elephants and several species of insect have been shown to communicate this way. “For decades,” says Caldwell, “we just didn’t know what to look for, but with a growing scientific interest in vibrational communication, all of that is rapidly changing.”

This new focus on animal calls as functioning through both sound and vibration could set the stage for major advances in the study of signal evolution. One potential implication highlighted by the team at Gettysburg College is that “we may even learn new things about sound signals we thought we understood.” This is because both the sound and the vibrational components of bimodal acoustic signals are generated together by the same organs. So, selection acting on either call component will also necessarily shape the evolution of the other. 

The red-eyed treefrog is one of the most photographed species on the planet, which makes these findings all the more unexpected. “It just goes to show, we still have a lot to learn about animal behavior,” reports Dr. Caldwell. “We hear animal calls so often that we tune most of them out, but when we make a point to look at the world from the perspective of a frog, species that are far more sensitive to vibrations than humans, it quickly becomes clear that we have been overlooking a major part of what they are saying to one another.”

This research was performed at the Smithsonian Tropical Research Institute and Gettysburg College, with funding from the Smithsonian Institution and the Cross-disciplinary Science Institute at Gettysburg College.

Here’s a link to and a citation for the paper,

Beyond sound: bimodal acoustic calls used in mate-choice and aggression by red-eyed treefrogs by Michael S. Caldwell, Kayla A. Britt, Lilianna C. Mischke, Hannah I. Collins. Journal of Experimental Biology Volume 225, Issue 16, August 2022 DOI: https://doi.org/10.1242/jeb.244460 Published August 25, 2022

This paper is behind a paywall. But, researchers have made some video clips available for viewing,

Spiders can outsource hearing to their webs

A March 29, 2022 news item on ScienceDaily highlights research into how spiders hear,

Everyone knows that humans and most other vertebrate species hear using eardrums that turn soundwave pressure into signals for our brains. But what about smaller animals like insects and arthropods? Can they detect sounds? And if so, how?

Distinguished Professor Ron Miles, a Department of Mechanical Engineering faculty member at Binghamton University’s Thomas J. Watson College of Engineering and Applied Science, has been exploring that question for more than three decades, in a quest to revolutionize microphone technology.

A newly published study of orb-weaving spiders — the species featured in the classic children’s book “Charlotte’s Web” — has yielded some extraordinary results: The spiders are using their webs as extended auditory arrays to capture sounds, possibly giving spiders advanced warning of incoming prey or predators.

Binghamton University (formal name: State University of New York at Binghamton) has made this fascinating (to me anyway) video available,

Binghamton University and Cornell University (also in New York state) researchers worked collaboratively on this project. Consequently, there are two news releases and there is some redundancy but I always find that information repeated in different ways is helpful for learning.

A March 29, 2022 Binghamton University news release (also on EurekAlert) by Chris Kocher gives more detail about the work (Note: Links have been removed),

It is well-known that spiders respond when something vibrates their webs, such as potential prey. In these new experiments, researchers for the first time show that spiders turned, crouched or flattened out in response to sounds in the air.

The study is the latest collaboration between Miles and Ron Hoy, a biology professor from Cornell, and it has implications for designing extremely sensitive bio-inspired microphones for use in hearing aids and cell phones.

Jian Zhou, who earned his PhD in Miles’ lab and is doing postdoctoral research at the Argonne National Laboratory, and Junpeng Lai, a current PhD student in Miles’ lab, are co-first authors. Miles, Hoy and Associate Professor Carol I. Miles from the Harpur College of Arts and Sciences’ Department of Biological Sciences at Binghamton are also authors for this study. Grants from the National Institutes of Health to Ron Miles funded the research.

A single strand of spider silk is so thin and sensitive that it can detect the movement of vibrating air particles that make up a soundwave, which is different from how eardrums work. Ron Miles’ previous research has led to the invention of novel microphone designs that are based on hearing in insects.

“The spider is really a natural demonstration that this is a viable way to sense sound using viscous forces in the air on thin fibers,” he said. “If it works in nature, maybe we should have a closer look at it.”

Spiders can detect minuscule movements and vibrations through sensory organs on their tarsal claws at the tips of their legs, which they use to grasp their webs. Orb-weaver spiders are known to make large webs, creating a kind of acoustic antenna with a sound-sensitive surface area that is up to 10,000 times greater than the spider itself.

In the study, the researchers used Binghamton University’s anechoic chamber, a completely soundproof room under the Innovative Technologies Complex. Collecting orb-weavers from windows around campus, they had the spiders spin a web inside a rectangular frame so they could position it where they wanted.

The team began by playing pure tones from 3 meters away at different sound levels to see whether the spiders responded. Surprisingly, they found that spiders can respond to sound levels as low as 68 decibels. At louder levels, they observed even more types of behavior.

They then placed the sound source at a 45-degree angle to see if the spiders behaved differently. They found that the spiders not only localized the sound source but also identified its incoming direction with 100% accuracy.

To better understand the spider-hearing mechanism, the researchers used laser vibrometry and measured over one thousand locations on a natural spider web, with the spider sitting in the center under the sound field. The result showed that the web moves with sound almost at maximum physical efficiency across an ultra-wide frequency range.

“Of course, the real question is, if the web is moving like that, does the spider hear using it?” Miles said. “That’s a hard question to answer.”

Lai added: “There could even be a hidden ear within the spider body that we don’t know about.”

So the team placed a mini-speaker 5 centimeters away from the center of the web where the spider sits, and 2 millimeters away from the web plane — close but not touching the web. This allows the sound to travel to the spider both through air and through the web. The researchers found that the soundwave from the mini-speaker died out significantly as it traveled through the air, but it propagated readily through the web with little attenuation. The sound level was still at around 68 decibels when it reached the spider. The behavior data showed that four out of 12 spiders responded to this web-borne signal.
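A quick back-of-envelope check shows why the airborne path fades so fast: a small source radiating freely loses 20 dB of sound pressure level for every tenfold increase in distance (the inverse-square law). Here's a minimal sketch with hypothetical numbers of my own, not the researchers' measurements,

```python
import math

def spl_at_distance(spl_ref_db: float, r_ref_m: float, r_m: float) -> float:
    """Free-field sound pressure level at distance r_m, given a reference
    level at r_ref_m, assuming spherical (inverse-square) spreading:
    SPL(r) = SPL(r_ref) - 20*log10(r / r_ref)."""
    return spl_ref_db - 20.0 * math.log10(r_m / r_ref_m)

# Hypothetical example: a source measuring 68 dB at 5 mm would, by
# spreading alone, drop 20 dB by the time it reached 5 cm.
level_at_5cm = spl_at_distance(68.0, 0.005, 0.05)
print(round(level_at_5cm, 1))  # 48.0
```

The web-borne path sidesteps this spreading loss, which is consistent with the signal still arriving at the spider at around 68 decibels.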

Those reactions proved that the spiders could hear through the webs, and Lai was thrilled when that happened: “I’ve been working on this research for five years. That’s a long time, and it’s great to see all these efforts will become something that everybody can read.”

The researchers also found that, by crouching and stretching, spiders may be changing the tension of the silk strands, thereby tuning them to pick up different frequencies. By using this external structure to hear, the spider may be able to customize it to hear different sorts of sounds.

Future experiments may investigate how spiders make use of the sound they can detect using their web. Additionally, the team would like to test whether other types of web-weaving spiders also use their silk to outsource their hearing.

“It’s reasonable to guess that a similar spider on a similar web would respond in a similar way,” Ron Miles said. “But we can’t draw any conclusions about that, since we tested a certain kind of spider that happens to be pretty common.”

Lai admitted he had no idea he would be working with spiders when he came to Binghamton as a mechanical engineering PhD student.

“I’ve been afraid of spiders all my life, because of their alien looks and hairy legs!” he said with a laugh. “But the more I worked with spiders, the more amazing I found them. I’m really starting to appreciate them.”

A March 29, 2022 Cornell University news release (also on EurekAlert but published March 30, 2022) by Krishna Ramanujan offers a somewhat different perspective on the work (Note: Links have been removed),

Charlotte’s web is made for more than just trapping prey.

A study of orb weaver spiders finds their massive webs also act as auditory arrays that capture sounds, possibly giving spiders advanced warning of incoming prey or predators.

In experiments, the researchers found the spiders turned, crouched or flattened out in response to sounds, behaviors that spiders have been known to exhibit when something vibrates their webs.

The paper, “Outsourced Hearing in an Orb-weaving Spider That Uses its Web as an Auditory Sensor,” published March 29 [2022] in the Proceedings of the National Academy of Sciences, provides the first behavioral evidence that a spider can outsource hearing to its web.

The findings have implications for designing bio-inspired extremely sensitive microphones for use in hearing aids and cell phones.

A single strand of spider silk is so thin and sensitive it can detect the movement of vibrating air particles that make up a sound wave. This is different from how eardrums work: eardrums sense pressure from sound waves, while spider silk detects the motion of the nanoscale air particles they excite.

“The individual [silk] strands are so thin that they’re essentially wafting with the air itself, jostled around by the local air molecules,” said Ron Hoy, the Merksamer Professor of Biological Science, Emeritus, in the College of Arts and Sciences and one of the paper’s senior authors, along with Ronald Miles, professor of mechanical engineering at Binghamton University.

Spiders can detect minuscule movements and vibrations via sensory organs in their tarsi – claws at the tips of their legs they use to grasp their webs, Hoy said. Orb weaver spiders are known to make large webs, creating a kind of acoustic antenna with a sound-sensitive surface area that is up to 10,000 times greater than the spider itself.

In the study, the researchers used a special quiet room without vibrations or air flows at Binghamton University. They had an orb-weaver build a web inside a rectangular frame, so they could position it where they wanted. The team began by putting a mini-speaker within millimeters of the web without actually touching it, where sound operates as a mechanical vibration. They found the spider detected the mechanical vibration and moved in response.

They then placed a large speaker 3 meters away on the other side of the room from the frame with the web and spider, beyond the range where mechanical vibration could affect the web. A laser vibrometer was able to show the vibrations of the web from excited air particles.

The team then placed the speaker in different locations, to the right, left and center with respect to the frame. They found that the spider not only detected the sound, it turned in the direction of the speaker when it was moved. Also, it behaved differently based on the volume, by crouching or flattening out.

Future experiments may investigate whether spiders rebuild their webs, sometimes daily, in part to alter their acoustic capabilities, by varying a web’s geometry or where it is anchored. Also, by crouching and stretching, spiders may be changing the tension of the silk strands, thereby tuning them to pick up different frequencies, Hoy said.

Additionally, the team would like to test if other types of web-weaving spiders also use their silk to outsource their hearing. “The potential is there,” Hoy said.

Miles’ lab is using tiny fiber strands bio-inspired by spider silk to design highly sensitive microphones that – unlike conventional pressure-based microphones – pick up all frequencies and cancel out background noise, a boon for hearing aids.  

Here’s a link to and a citation for the paper,

Outsourced hearing in an orb-weaving spider that uses its web as an auditory sensor by Jian Zhou, Junpeng Lai, Gil Menda, Jay A. Stafstrom, Carol I. Miles, Ronald R. Hoy, and Ronald N. Miles. Proceedings of the National Academy of Sciences (PNAS) DOI: https://doi.org/10.1073/pnas.2122789119 Published March 29, 2022 | 119 (14) e2122789119

This paper appears to be open access and video/audio files are included (you can hear the sound and watch the spider respond).

Large Interactive Virtual Environment Laboratory (LIVELab) located in McMaster University’s Institute for Music & the Mind (MIMM) and the MetaCreation Lab at Simon Fraser University

Both of these bits have a music focus, but they represent two entirely different science-based approaches to that art form: one is solely about the music, while the other includes music as one of the art-making processes being investigated.

Large Interactive Virtual Environment Laboratory (LIVELab) at McMaster University

Laurel Trainor and Dan J. Bosnyak, both of McMaster University (Ontario, Canada), have written an October 27, 2019 essay about the LIVELab and their work for The Conversation website (Note: Links have been removed),

The Large Interactive Virtual Environment Laboratory (LIVELab) at McMaster University is a research concert hall. It functions as both a high-tech laboratory and theatre, opening up tremendous opportunities for research and investigation.

As the only facility of its kind in the world, the LIVELab is a 106-seat concert hall equipped with dozens of microphones, speakers and sensors to measure brain responses, physiological responses such as heart rate, breathing rates, perspiration and movements in multiple musicians and audience members at the same time.

Engineers, psychologists and clinician-researchers from many disciplines work alongside musicians, media artists and industry to study performance, perception, neural processing and human interaction.

In the LIVELab, acoustics are digitally controlled so the experience can change instantly from extremely silent with almost no reverberation to a noisy restaurant to a subway platform or to the acoustics of Carnegie Hall.
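The essay doesn't say how LIVELab's digital acoustics work under the hood, but the standard technique for making one room sound like another is to convolve the "dry" audio with an impulse response measured in the target space. A minimal sketch, assuming that approach (the toy impulse response below is invented for illustration),

```python
import numpy as np

def apply_room(dry: np.ndarray, impulse_response: np.ndarray) -> np.ndarray:
    """Render dry audio as if played in the room the impulse response
    describes, via convolution (the standard convolution-reverb trick)."""
    return np.convolve(dry, impulse_response)

rng = np.random.default_rng(0)
dry = rng.standard_normal(1000)          # stand-in for a dry recording
ir = np.exp(-np.arange(400) / 80.0)      # toy exponentially decaying "room"
wet = apply_room(dry, ir)
print(wet.shape)  # (1399,) -- output length is N + M - 1
```

Swapping in a different impulse response (a subway platform, Carnegie Hall) instantly changes the apparent acoustics, which matches the experience the essay describes.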

Real-time physiological data such as heart rate can be synchronized with data from other systems such as motion capture, and monitored and recorded from both performers and audience members. The result: reams of data that used to take weeks or months to collect in a traditional lab can now be gathered in a few hours in the LIVELab. And having measurements of multiple people simultaneously is pushing forward our understanding of real-time human interactions.

Consider the implications of how music might help people with Parkinson’s disease to walk more smoothly or children with dyslexia to read better.

[…] area of ongoing research is the effectiveness of hearing aids. By the age of 60, nearly 49 per cent of people will suffer from some hearing loss. People who wear hearing aids are often frustrated when listening to music because the hearing aids distort the sound and cannot deal with the dynamic range of the music.

The LIVELab is working with the Hamilton Philharmonic Orchestra to solve this problem. During a recent concert, researchers evaluated new ways of delivering sound directly to participants’ hearing aids to enhance sounds.

Researchers hope new technologies can not only increase live musical enjoyment but alleviate the social isolation caused by hearing loss.

Imagine the possibilities for understanding music and sound: How it might help to improve cognitive decline, manage social performance anxiety, help children with developmental disorders, aid in treatment of depression or keep the mind focused. Every time we conceive and design a study, we think of new possibilities.

The essay also includes an embedded 12 min. video about LIVELab and details about studies conducted on musicians and live audiences. Apparently, audiences experience live performance differently than recorded performances and musicians use body sway to create cohesive performances. You can find the McMaster Institute for Music & the Mind here and McMaster’s LIVELab here.

Capturing the motions of a string quartet performance. Laurel Trainor, Author provided [McMaster University]

Metacreation Lab at Simon Fraser University (SFU)

I just recently discovered that there’s a Metacreation Lab at Simon Fraser University (Vancouver, Canada), whose homepage states: “Metacreation is the idea of endowing machines with creative behavior.” Here’s more from the homepage,

As the contemporary approach to generative art, Metacreation involves using tools and techniques from artificial intelligence, artificial life, and machine learning to develop software that partially or completely automates creative tasks. Through the collaboration between scientists, experts in artificial intelligence, cognitive sciences, designers and artists, the Metacreation Lab for Creative AI is at the forefront of the development of generative systems, be they embedded in interactive experiences or integrated into current creative software. Scientific research in the Metacreation Lab explores how various creative tasks can be automated and enriched. These tasks include music composition [emphasis mine], sound design, video editing, audio/visual effect generation, 3D animation, choreography, and video game design.

Besides scientific research, the team designs interactive and generative artworks that build upon the algorithms and research developed in the Lab. This work often challenges the social and cultural discourse on AI.

Much to my surprise I received the Metacreation Lab’s inaugural email newsletter (received via email on Friday, November 15, 2019),

Greetings,

We decided to start a mailing list for disseminating news, updates, and announcements regarding generative art, creative AI and New Media. In this newsletter: 

  1. ISEA 2020: The International Symposium on Electronic Art. ISEA returns to Montreal; check the CFP below and contribute!
  2. ISEA 2015: A transcription of Sara Diamond’s keynote address “Action Agenda: Vancouver’s Prescient Media Arts” is now available for download. 
  3. Brain Art, the book: we are happy to announce the release of the first comprehensive volume on Brain Art. Edited by Anton Nijholt, and published by Springer.

Here are more details from the newsletter,

ISEA2020 – 26th International Symposium on Electronic Arts

Montreal, September 24, 2019
Montreal Digital Spring (Printemps numérique) is launching a call for participation as part of ISEA2020 / MTL connect to be held from May 19 to 24, 2020 in Montreal, Canada. Founded in 1990, ISEA is one of the world’s most prominent international arts and technology events, bringing together scholarly, artistic, and scientific domains in an interdisciplinary discussion and showcase of creative productions applying new technologies in art, interactivity, and electronic and digital media. For 2020, ISEA Montreal turns towards the theme of sentience.

ISEA2020 will be fully dedicated to examining the resurgence of sentience—feeling-sensing-making sense—in recent art and design, media studies, science and technology studies, philosophy, anthropology, history of science and the natural scientific realm—notably biology, neuroscience and computing. We ask: why sentience? Why and how does sentience matter? Why have artists and scholars become interested in sensing and feeling beyond, with and around our strictly human bodies and selves? Why has this notion been brought to the fore in an array of disciplines in the 21st century?
CALL FOR PARTICIPATION: WHY SENTIENCE? ISEA2020 invites artists, designers, scholars, researchers, innovators and creators to participate in the various activities deployed from May 19 to 24, 2020. To complete an application, please fill in the forms and follow the instructions.

The final submissions deadline is NOVEMBER 25, 2019. You can submit applications in the following categories: WORKSHOP AND TUTORIAL, ARTISTIC WORK, FULL / SHORT PAPER, PANEL, POSTER, ARTIST TALK, and INSTITUTIONAL PRESENTATION.
You can apply for several categories. All profiles are welcome. Notifications of acceptance will be sent around January 13, 2020.

Important: please note that the Call for participation for MTL connect is not yet launched, but you can also apply to participate in the programming of the other Pavilions (4 other themes) when registrations are open (coming soon): mtlconnecte.ca/en

Registration is now available to attend ISEA2020 / MTL connect, from May 19 to 24, 2020. Book your Full Pass today and get the early-bird rate!

More from the newsletter,

ISEA 2015 was in Vancouver, Canada, and the proceedings and art catalog are still online. The news is that Sara Diamond released her 2015 keynote address as a paper: Action Agenda: Vancouver’s Prescient Media Arts. It is never too late so we thought we would let you know about this great read. See The 2015 Proceedings Here

The last item from the inaugural newsletter,

The first book that surveys how brain activity can be monitored and manipulated for artistic purposes, with contributions by interactive media artists, brain-computer interface researchers, and neuroscientists. View the Book Here

As per the Leonardo review from Cristina Albu:

“Another seminal contribution of the volume is the presentation of multiple taxonomies of “brain art,” which can help art critics develop better criteria for assessing this genre. Mirjana Prpa and Philippe Pasquier’s meticulous classification shows how diverse such works have become as artists consider a whole range of variables of neurofeedback.” Read the Review

For anyone not familiar with the ‘Leonardo’ cited in the above, it’s Leonardo, the International Society for the Arts, Sciences and Technology.

Should this kind of information excite and motivate you to start metacreating, you can get in touch with the lab,

Our mailing address is:
Metacreation Lab for Creative AI
School of Interactive Arts & Technology
Simon Fraser University
250-13450 102 Ave.
Surrey, BC V3T 0A3
Web: http://metacreation.net/
Email: metacreation_admin (at) sfu (dot) ca

Moths with sound absorption stealth technology

The cabbage tree emperor moth (Thomas Neil) [downloaded from https://www.cbc.ca/radio/quirks/nov-17-2018-greenland-asteroid-impact-short-people-in-the-rain-forest-reef-islands-and-sea-level-and-more-1.4906857/how-moths-evolved-a-kind-of-stealth-jet-technology-to-sneak-past-bats-1.4906866]

I don’t think I’ve ever seen a more gorgeous moth and it seems a perfect way to enter 2019, from a November 16, 2018 news item on CBC (Canadian Broadcasting Corporation),

A species of silk moth has evolved special sound absorbing scales on its wings to absorb the sonar pulses from hunting bats. This is analogous to the special coatings on stealth aircraft that allow them to be nearly invisible to radar.

“It’s a battle out there every night, insects flying for their lives trying to avoid becoming a bat’s next dinner,” said Dr. Marc Holderied, the senior author on the paper and an associate professor in the School of Biological Sciences at the University of Bristol.

“If you manage to absorb some of these sound energies, it would make you look smaller and let you be detectable over a shorter distance because the echo isn’t strong enough outside the detection bubble.”

Many moths have ears that warn them when a bat is nearby. But not the big and juicy cabbage tree emperor moths which would ordinarily make the perfect meal for bats.

The researchers prepared a brief animated feature illustrating the research,

Prior to publication of the study, the scientists made a presentation at the Acoustical Society of America’s 176th Meeting, held in conjunction with the Canadian Acoustical Association’s 2018 Acoustics Week, Nov. 5-9 at the Victoria Conference Centre in Victoria, Canada according to a November 7, 2018 University of Bristol press release (also on EurekAlert but submitted by the Acoustical Society of America on November 6, 2018),

Moths are a mainstay food source for bats, which use echolocation (biological sonar) to hunt their prey. Scientists such as Thomas Neil, from the University of Bristol in the U.K., are studying how moths have evolved passive defenses over millions of years to resist their primary predators.

While some moths have evolved ears that detect the ultrasonic calls of bats, many types of moths remain deaf. In those moths, Neil has found that the insects developed types of “stealth coating” that serve as acoustic camouflage to evade hungry bats.

Neil will describe his work during the Acoustical Society of America’s 176th Meeting, held in conjunction with the Canadian Acoustical Association’s 2018 Acoustics Week, Nov. 5-9 at the Victoria Conference Centre in Victoria, Canada.

In his presentation, Neil will focus on how fur on a moth’s thorax and wing joints provide acoustic stealth by reducing the echoes of these body parts from bat calls.

“Thoracic fur provides substantial acoustic stealth at all ecologically relevant ultrasonic frequencies,” said Neil, a researcher at Bristol University. “The thorax fur of moths acts as a lightweight porous sound absorber, facilitating acoustic camouflage and offering a significant survival advantage against bats.” Removing the fur from the moth’s thorax increased its detection risk by as much as 38 percent.

Neil used acoustic tomography to quantify echo strength in the spatial and frequency domains of two deaf moth species that are subject to bat predation and two butterfly species that are not.

In comparing the effects of removing thorax fur from insects that serve as food for bats to those that don’t, Neil’s research team found that thoracic fur determines acoustic camouflage of moths but not butterflies.

“We found that the fur on moths was both thicker and denser than that of the butterflies, and these parameters seem to be linked with the absorptive performance of their respective furs,” Neil said. “The thorax fur of the moths was able to absorb up to 85 percent of the impinging sound energy. The maximum absorption we found in butterflies was just 20 percent.”
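To put those absorption figures in perspective: if a fraction α of the impinging sound energy is absorbed rather than reflected, the echo returned to the bat drops by 10·log10(1 − α) decibels. Adding a simple two-way spherical-spreading assumption of my own (not an analysis from the paper), that echo reduction also shrinks the bat's detection range,

```python
import math

def echo_reduction_db(alpha: float) -> float:
    """Echo level change (dB) when a fraction alpha of the impinging
    sound energy is absorbed instead of reflected."""
    return 10.0 * math.log10(1.0 - alpha)

def detection_range_factor(delta_db: float) -> float:
    """Relative detection range, assuming echo level falls as
    40*log10(r) (two-way spherical spreading) -- a simplification."""
    return 10.0 ** (delta_db / 40.0)

for alpha in (0.85, 0.20):  # moth fur vs. butterfly fur absorption
    delta = echo_reduction_db(alpha)
    print(f"alpha={alpha}: echo {delta:.1f} dB, range x{detection_range_factor(delta):.2f}")
```

Under these assumptions, the moth's 85 percent absorption cuts the echo by roughly 8 dB and the detection range by more than a third, while the butterfly's 20 percent barely registers.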

Neil’s research could contribute to the development of biomimetic materials for ultrathin sound absorbers and other noise-control devices.

“Moth fur is thin and lightweight,” said Neil, “and acts as a broadband and multidirectional ultrasound absorber that is on par with the performance of current porous sound-absorbing foams.”

Moth fur? This has changed my view of moths although I reserve the right to get cranky when local moths chew through my wool sweaters. Here’s a link to and a citation for the paper,

Biomechanics of a moth scale at ultrasonic frequencies by Zhiyuan Shen, Thomas R. Neil, Daniel Robert, Bruce W. Drinkwater, and Marc W. Holderied. PNAS [Proceedings of the National Academy of Sciences of the United States of America] November 27, 2018 115 (48) 12200-12205; published ahead of print November 12, 2018 https://doi.org/10.1073/pnas.1810025115

This paper is behind a paywall.

Unusually I’m going to include the paper’s abstract here,

The wings of moths and butterflies are densely covered in scales that exhibit intricate shapes and sculptured nanostructures. While certain butterfly scales create nanoscale photonic effects [emphasis mine], moth scales show different nanostructures suggesting different functionality. Here we investigate moth-scale vibrodynamics to understand their role in creating acoustic camouflage against bat echolocation, where scales on wings provide ultrasound absorber functionality. For this, individual scales can be considered as building blocks with adapted biomechanical properties at ultrasonic frequencies. The 3D nanostructure of a full Bunaea alcinoe moth forewing scale was characterized using confocal microscopy. Structurally, this scale is double layered and endowed with different perforation rates on the upper and lower laminae, which are interconnected by trabeculae pillars. From these observations a parameterized model of the scale’s nanostructure was formed and its effective elastic stiffness matrix extracted. Macroscale numerical modeling of scale vibrodynamics showed close qualitative and quantitative agreement with scanning laser Doppler vibrometry measurement of this scale’s oscillations, suggesting that the governing biomechanics have been captured accurately. Importantly, this scale of B. alcinoe exhibits its first three resonances in the typical echolocation frequency range of bats, suggesting it has evolved as a resonant absorber. Damping coefficients of the moth-scale resonator and ultrasonic absorption of a scaled wing were estimated using numerical modeling. The calculated absorption coefficient of 0.50 agrees with the published maximum acoustic effect of wing scaling. Understanding scale vibroacoustic behavior helps create macroscopic structures with the capacity for broadband acoustic camouflage.

Those nanoscale photonic effects caused by butterfly scales are something I’d usually describe as optical effects due to the nanoscale structures on some butterfly wings, notably those of the Blue Morpho butterfly. In fact, there’s a whole field of study on what’s known as structural colo(u)r. Strictly speaking, I’m not sure you could describe the nanostructures on Glasswing butterflies as an example of structural colour, since those structures make that butterfly’s wings transparent, but they are definitely an optical effect. For the curious, you can use ‘blue morpho butterfly’, ‘glasswing butterfly’ or ‘structural colo(u)r’ to search for more on this blog or pursue bigger fish with an internet search.

Tractor beams for humans?

I got excited for a moment before realizing that, if tractor beams for humans result from this work, it will be many years in the future. Still, one can dream, eh? Here’s more about the current state of tractor beams (the acoustic kind) from a January 21, 2018 news item on ScienceDaily,

Acoustic tractor beams use the power of sound to hold particles in mid-air, and unlike magnetic levitation, they can grab most solids or liquids. For the first time University of Bristol engineers have shown it is possible to stably trap objects larger than the wavelength of sound in an acoustic tractor beam. This discovery opens the door to the manipulation of drug capsules or micro-surgical implements within the body. Container-less transportation of delicate larger samples is now also a possibility and could lead to levitating humans.

A January 22, 2018 University of Bristol press release (also on EurekAlert but dated January 21, 2018), which originated the news item, expands on the theme,

Researchers previously thought that acoustic tractor beams were fundamentally limited to levitating small objects, as all previous attempts to trap particles larger than the wavelength had been unstable, with objects spinning uncontrollably. This is because a rotating sound field transfers some of its spinning motion to the objects, causing them to orbit faster and faster until they are ejected.

The new approach, published in Physical Review Letters today [Monday 22 January 2018], uses rapidly fluctuating acoustic vortices, which are similar to tornadoes of sound, made of a twister-like structure with loud sound surrounding a silent core.

The Bristol researchers discovered that the rate of rotation can be finely controlled by rapidly changing the twisting direction of the vortices; this stabilises the tractor beam. They were then able to increase the size of the silent core, allowing it to hold larger objects. Working with ultrasonic waves at a pitch of 40kHz, a similar pitch to that which only bats can hear, the researchers held a two-centimetre polystyrene sphere in the tractor beam. This sphere measures over two acoustic wavelengths in size and is the largest yet trapped in a tractor beam. The research suggests that, in the future, much larger objects could be levitated in this way.
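That "over two acoustic wavelengths" figure is easy to check against the standard speed of sound in air,

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def wavelength_m(frequency_hz: float) -> float:
    """Acoustic wavelength: speed of sound divided by frequency."""
    return SPEED_OF_SOUND / frequency_hz

lam = wavelength_m(40_000.0)        # the 40 kHz ultrasound used in the study
sphere_wavelengths = 0.02 / lam     # the 2 cm polystyrene sphere
print(f"{lam * 1000:.1f} mm per wavelength; sphere = {sphere_wavelengths:.2f} wavelengths")
# prints "8.6 mm per wavelength; sphere = 2.33 wavelengths"
```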

Dr Asier Marzo, lead author on the paper from Bristol’s Department of Mechanical Engineering, said: “Acoustic researchers had been frustrated by the size limit for years, so it’s satisfying to find a way to overcome it. I think it opens the door to many new applications.”

Dr Mihai Caleap, Senior Research Associate, who developed the simulations, explained: “In the future, with more acoustic power it will be possible to hold even larger objects. This was only thought to be possible using lower pitches making the experiment audible and dangerous for humans.”

Bruce Drinkwater, Professor of Ultrasonics from the Department of Mechanical Engineering, who supervised the work, added: “Acoustic tractor beams have huge potential in many applications. I’m particularly excited by the idea of contactless production lines where delicate objects are assembled without touching them.”

The researchers have included a video representing their work,

I always liked the tractor beams on Star Trek as they seemed very useful. For those who can dream in more technical language, here’s a link to and a citation for the paper,

Acoustic Virtual Vortices with Tunable Orbital Angular Momentum for Trapping of Mie Particles by Asier Marzo, Mihai Caleap, and Bruce W. Drinkwater. Phys. Rev. Lett. Vol. 120, Iss. 4 — 26 January 2018 DOI:https://doi.org/10.1103/PhysRevLett.120.044301 Published 22 January 2018

This paper is open access.

A Moebius strip of moving energy (vibrations)

This research extends a theorem which posits that waves will adapt to slowly changing conditions and return to their original vibration, showing that the waves can instead be manipulated into a new state. A July 25, 2016 news item on ScienceDaily makes the announcement,

Yale physicists have created something similar to a Moebius strip of moving energy between two vibrating objects, opening the door to novel forms of control over waves in acoustics, laser optics, and quantum mechanics.

The discovery also demonstrates that a century-old physics theorem offers much greater freedom than had long been believed. …

A July 25, 2016 Yale University news release (also on EurekAlert) by Jim Shelton, which originated the news item, expands on the theme,

Yale’s experiment is deceptively simple in concept. The researchers set up a pair of connected, vibrating springs and studied the acoustic waves that traveled between them as they manipulated the shape of the springs. Vibrations — as well as other types of energy waves — are able to move, or oscillate, at different frequencies. In this instance, the springs vibrate at frequencies that merge, similar to a Moebius strip that folds in on itself.

The precise spot where the vibrations merge is called an “exceptional point.”

“It’s like a guitar string,” said Jack Harris, a Yale associate professor of physics and applied physics, and the study’s principal investigator. “When you pluck it, it may vibrate in the horizontal plane or the vertical plane. As it vibrates, we turn the tuning peg in a way that reliably converts the horizontal motion into vertical motion, regardless of the details of how the peg is turned.”

Unlike a guitar, however, the experiment required an intricate laser system to precisely control the vibrations, and a cryogenic refrigeration chamber in which the vibrations could be isolated from any unwanted disturbance.

The Yale experiment is significant for two reasons, the researchers said. First, it suggests a very dependable way to control wave signals. Second, it demonstrates an important — and surprising — extension to a long-established theorem of physics, the adiabatic theorem.

The adiabatic theorem says that waves will readily adapt to changing conditions if those changes take place slowly. As a result, if the conditions are gradually returned to their initial configuration, any waves in the system should likewise return to their initial state of vibration. In the Yale experiment, this does not happen; in fact, the waves can be manipulated into a new state.

“This is a very robust and general way to control waves and vibrations that was predicted theoretically in the last decade, but which had never been demonstrated before,” Harris said. “We’ve only scratched the surface here.”
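The Möbius-strip picture has a compact mathematical counterpart: near an exceptional point, the two mode frequencies behave like the two branches of a complex square root, so following them continuously once around the exceptional point swaps them. The toy calculation below (a generic two-mode matrix chosen for illustration, not the authors' actual optomechanical model) tracks the eigenvalues by continuity around such a loop:

```python
import numpy as np

def track_eigenvalues(n_steps=400, rho=1.0):
    """Follow the two eigenvalues of H(theta) = [[0, 1], [rho * e^{i theta}, 0]]
    by continuity as theta runs once around the exceptional point at rho = 0."""
    prev = None
    for theta in np.linspace(0.0, 2.0 * np.pi, n_steps):
        H = np.array([[0.0, 1.0], [rho * np.exp(1j * theta), 0.0]])
        lam = np.linalg.eigvals(H)
        if prev is None:
            prev = np.sort_complex(lam)  # start: [-sqrt(rho), +sqrt(rho)]
        # match each new eigenvalue to the nearest previous one
        elif (abs(lam[0] - prev[0]) + abs(lam[1] - prev[1])
              <= abs(lam[1] - prev[0]) + abs(lam[0] - prev[1])):
            prev = lam
        else:
            prev = lam[::-1]
    return prev

end = track_eigenvalues()
print(end)  # branches swap: approximately [1, -1]
```

After one full loop, the branch that started at -1 ends at +1 and vice versa; only after two loops does each branch return to where it began, exactly like walking the surface of a Möbius strip.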

In the same edition of Nature, a team from the Vienna University of Technology also presented research on a system for wave control via exceptional points.

Here’s a link to and a citation for the paper,

Topological energy transfer in an optomechanical system with exceptional points by H. Xu, D. Mason, Luyao Jiang, & J. G. E. Harris. Nature (2016) doi:10.1038/nature18604 Published online 25 July 2016

This paper is behind a paywall.

Carbon nanotubes, acoustics, and heat

I have a longstanding interest in carbon nanotubes and acoustics, which I first encountered in 2008. This latest work comes from Michigan Technological University according to a July 28, 2015 news item on Nanowerk,

Troy Bouman reaches over, presses play, and the loudspeaker sitting on the desk starts playing the university fight song. But this is no ordinary loudspeaker. This is a carbon nanotube transducer—and it makes sound with heat.

Bouman and Mahsa Asgarisabet, both graduate students at Michigan Technological University, recently won a Best of Show Award at SAE International’s Noise and Vibration Conference and Exhibition 2015 for their acoustic research on carbon nanotube speakers. They work with Andrew Barnard, an assistant professor of mechanical engineering at Michigan Tech, to tease out the fundamental physics of these unusual loudspeakers.

While the technology is still fledgling, the potential applications are nearly endless: everything from de-icing helicopter blades to making lighter loudspeakers to doubling as both a car speaker and a heating filament for back windshield defrosters.

Here are a few sound files featuring the students and their carbon nanotube speakers,


A July 28, 2015 Michigan Technological University news release, which originated the news item, goes on to describe how these carbon nanotubes are making sound,

The freestanding speaker itself is rather humble. In fact, it’s a bit flimsy. A Teflon base props up two copper rods, and what looks like a see-through black cloth stretches between them.

“A little wind gust across them, and they would just blow away,” Barnard says. “But you could shake them as much as you want—since they have such low mass, there is virtually no inertia.”

The material is strong side to side, because what the naked eye can’t see is the collection of black nanotubes that make up that thin film.

The nanotubes are straw-like structures with walls only one carbon atom thick, and they can heat up and cool down up to 100,000 times each second. By comparison, a platinum sheet about 700 nanometers thick can only heat up and cool down about 16 times each second. The heating and cooling of the carbon nanotubes causes the adjacent air to expand and contract. That pushes air molecules around and creates sound waves.

“Traditional speakers use a moving coil, and that’s how they create sound waves,” Bouman says. “There are completely different physics behind carbon nanotube speakers.”
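Those different physics have an audible mathematical signature. Joule heating scales with the square of the current, so a film driven by a pure sine wave at frequency f heats and cools, and therefore radiates sound, at 2f; thermoacoustic speakers typically need a DC bias or pre-processing of the input to recover the intended pitch. A toy numerical check of the frequency doubling (the sample rate, drive frequency, and resistance below are illustrative values, not measurements of the Michigan Tech speakers):

```python
import numpy as np

fs = 100_000            # sample rate in Hz (illustrative)
f_drive = 1_000         # electrical drive frequency in Hz (illustrative)
R = 50.0                # film resistance in ohms (illustrative)

t = np.arange(10_000) / fs                 # 0.1 s of samples
current = np.sin(2 * np.pi * f_drive * t)  # pure AC drive, no DC bias
heating = R * current**2                   # Joule heating ~ I^2 * R

# The oscillating part of the heating (hence the sound) sits at 2 * f_drive:
spectrum = np.abs(np.fft.rfft(heating - heating.mean()))
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
f_sound = freqs[np.argmax(spectrum)]
print(f_sound)  # ~2000 Hz: twice the drive frequency
```

Because sin² expands to (1 − cos 2ωt)/2, the heating waveform carries no component at the drive frequency itself, which is one reason the raw output sounds unfamiliar until the signal is conditioned.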

And because of these differences, the nearly weightless carbon nanotube speakers produce sound in a way that isn’t immediately intelligible to our ears. Bouman’s research focuses on processing the sound waves to make them more intelligible. Take a listen.

Acoustics

To date, most research on carbon nanotubes has been on the materials side. Carbon nanotube speakers were discovered accidentally in 2008, showing that the idea was viable. As mechanical engineers studying acoustics, Barnard, Bouman and Asgarisabet are refining the technology.

“They are very lightweight and have no moving parts,” Asgarisabet says, which is ideal for her work in active noise control, where the carbon nanotube films could cancel out engine noise in airplanes or road noise in cars. But first, she says, “I want to focus first on getting a good thermal model of the speakers.”

Having an accurate model, Bouman adds, is a reflection of understanding the carbon nanotube loudspeakers themselves. The modeling work he and Asgarisabet are doing lays down the foundation to build up new applications for the technology.

While much research remains to sort out the underlying physics of carbon nanotube speakers, being able to use both their heat and sound properties makes them versatile. Their thinness and weightlessness are also appealing.

“They’re basically conformable speakers,” Barnard says. The thin film could be draped over dashboards, windows, walls, seats and maybe even clothing. To get the speakers to that point, Barnard and his students will continue refining the technology’s efficiency and ruggedness, one carbon nanotube thin-film at a time.

As I mentioned earlier, I’m quite interested in carbon nanotube speakers and, for that matter, all other nanomaterial speakers. For example, there was a November 18, 2013 posting titled: World’s* smallest FM radio transmitter made out of graphene which also featured the Zettl Group’s (University of California at Berkeley) carbon nanotube radio (unfortunately those sound files are no longer accessible).

Dexter Johnson in a July 30, 2015 posting (on his Nanoclast blog on the Institute of Electrical and Electronics Engineers [IEEE] website) provides some additional insights (Note: Links have been removed),

It’s been some time since we covered the use of nanomaterials in audio speakers. While not a hotly pursued research field, there is some tradition for it dating back to the first development of carbon nanotube-based speakers in 2008. While nanomaterial-based speakers are not going to win any audiophile prize anytime soon, they do offer some unusual characteristics that mainly stem from their magnet-less design.