
3D imaging biological cells with picosecond ultrasonics (acoustic imaging)

An April 22, 2015 news item on Nanowerk describes an acoustic imaging technique that’s been newly applied to biological cells,

Much like magnetic resonance imaging (MRI) is able to scan the interior of the human body, the emerging technique of “picosecond ultrasonics,” a type of acoustic imaging, can be used to make virtual slices of biological tissues without destroying them.

Now a team of researchers in Japan and Thailand has shown that picosecond ultrasonics can achieve micron resolution of single cells, imaging their interiors in slices separated by 150 nanometers — in stark contrast to the typical 0.5-millimeter spatial resolution of a standard medical MRI scan. This work is a proof-of-principle that may open the door to new ways of studying the physical properties of living cells by imaging them in vivo.

An April 20, 2015 American Institute of Physics news release, which originated the news item, provides a description of picosecond ultrasonics and more details about the research,

Picosecond ultrasonics has been used for decades as a method to explore the mechanical and thermal properties of materials like metals and semiconductors at submicron scales, and in recent years it has been applied to biological systems as well. The technique is suited for biology because it’s sensitive to sound velocity, density, acoustic impedance and the bulk modulus of cells.

This week, in a story appearing on the cover of the journal Applied Physics Letters, from AIP Publishing, researchers from Walailak University in Thailand and Hokkaido University in Japan describe the first known demonstration of 3-D cell imaging using picosecond ultrasonics.

Their work centers on imaging two types of mammalian biological tissue — a bovine aortic endothelial cell, a type of cell that lines a cow’s aorta (its main artery), and a mouse “adipose” fat cell. Endothelial cells were chosen because they play a key role in the physiology of blood vessels and are useful in the study of biomechanics. Fat cells, on the other hand, were studied to provide an interesting comparison with varying cell geometries and contents.

How the Work Was Done

The team accomplished the imaging by first placing a cell in solution on a titanium-coated sapphire substrate and then scanning a point source of high-frequency sound, generated with a beam of focused ultrashort laser pulses, over the titanium film. This was followed by focusing another beam of laser pulses on the same point to pick up tiny changes in optical reflectance caused by the sound traveling through the cell tissue.

“By scanning both beams together, we’re able to build up an acoustic image of the cell that represents one slice of it,” explained co-author Professor Oliver B. Wright, who teaches in the Division of Applied Physics, Faculty of Engineering at Hokkaido University. “We can view a selected slice of the cell at a given depth by changing the timing between the two beams of laser pulses.”
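For readers who like numbers, here’s a minimal sketch in Python (my own back-of-the-envelope arithmetic, not the researchers’ code) of the depth/delay relation behind this slice selection: the acoustic pulse launched at the titanium film travels into the cell at the speed of sound, so the pump-probe delay picks out the depth being imaged. The sound velocity is an assumed typical value for soft biological matter, not a figure from the paper,

```python
# Assumed typical sound velocity for a biological cell (m/s); not from the paper.
SOUND_VELOCITY = 1570.0

def imaging_depth(delay_s: float, v: float = SOUND_VELOCITY) -> float:
    """Depth (m) reached by the acoustic pulse after a pump-probe delay (s)."""
    return v * delay_s

def delay_for_depth(depth_m: float, v: float = SOUND_VELOCITY) -> float:
    """Pump-probe delay (s) needed to image a slice at a given depth (m)."""
    return depth_m / v

# The 150-nanometer slice spacing corresponds to a delay step of roughly:
step_s = delay_for_depth(150e-9)
print(f"delay step for 150 nm slices: {step_s * 1e12:.0f} ps")  # ~96 ps
```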

The team’s work is particularly noteworthy because “in spite of much work imaging cells with more conventional acoustic microscopes, the time required for 3-D imaging probably remains too long to be practical,” Wright said. “Building up a 3-D acoustic image, in principle, allows you to see the 3-D relative positions of cell organelles without killing the cell. In our experiments in vitro, while we haven’t yet resolved the cell contents — possibly because cell nuclei weren’t contained within the slices we viewed — it should be possible in the future with various improvements to the technique.”


Fluorescence micrographs of fat and endothelial cells superimposed on differential-interference and phase-contrast images, respectively. The nuclei are stained blue in the micrographs. The image on the right is a picosecond-ultrasonic image of a single endothelial cell with approximately 1-micron lateral and 150-nanometer depth resolutions. Deep blue corresponds to the lowest ultrasonic amplitude.
CREDIT: O. Wright/Hokkaido University

So far, the team has used infrared light to generate sound waves within the cell, “limiting the lateral spatial resolution to about one micron,” Wright explains. “By using an ultraviolet-pulsed laser, we could improve the lateral resolution by about a factor of three — and greatly improve the image quality. And, switching to a diamond substrate instead of sapphire would allow better heat conduction away from the probed area, which, in turn, would enable us to increase the laser power and image quality.”
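To see why a shorter wavelength buys roughly a factor of three, here’s a back-of-the-envelope sketch: the focused spot size scales with the optical wavelength. The specific wavelengths (an 800 nm Ti:sapphire fundamental and its 266 nm third harmonic) and the numerical aperture are my illustrative assumptions, not values from the paper,

```python
def spot_diameter(wavelength_m: float, na: float) -> float:
    """Approximate diffraction-limited spot: d ~ lambda / (2 * NA)."""
    return wavelength_m / (2.0 * na)

NA = 0.8  # assumed objective numerical aperture
ir_spot = spot_diameter(800e-9, NA)   # ~0.50 micron
uv_spot = spot_diameter(266e-9, NA)   # ~0.17 micron
print(f"IR ~{ir_spot * 1e6:.2f} um, UV ~{uv_spot * 1e6:.2f} um, "
      f"improvement ~{ir_spot / uv_spot:.1f}x")  # ~3x
```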

So lowering the laser power or using substrates with higher thermal conductivity may soon open the door to in vivo imaging, which would be invaluable for investigating the mechanical properties of cell organelles within both plant and animal cells.

What’s next for the team? “The method we use to image the cells now actually involves a combination of optical and elastic parameters of the cell, which can’t be easily distinguished,” Wright said. “But we’ve thought of a way to separate them, which will allow us to measure the cell mechanical properties more accurately. So we’ll try this method in the near future, and we’d also like to try our method on single-celled organisms or even bacteria.”

Here’s a link to and a citation for the paper,

Three-dimensional imaging of biological cells with picosecond ultrasonics by Sorasak Danworaphong, Motonobu Tomoda, Yuki Matsumoto, Osamu Matsuda, Toshiro Ohashi, Hiromu Watanabe, Masafumi Nagayama, Kazutoshi Gohara, Paul H. Otsuka, and Oliver B. Wright. Appl. Phys. Lett. 106, 163701 (2015); http://dx.doi.org/10.1063/1.4918275

This paper is open access.

This research reminded me of a data sonification project that I featured in a Feb. 7, 2014 post which includes an embedded sound file of symphonic music based on data from NASA’s (US National Aeronautics and Space Administration) Voyager spacecraft.

Data sonification: listening to your data instead of visualizing it

Representing data through music is how a Jan. 31, 2014 item in the BBC news magazine describes a Voyager 1 & 2 spacecraft duet, a data sonification project discussed* in a BBC Radio 4 programme,

Musician and physicist Domenico Vicinanza has described to BBC Radio 4’s Today programme the process of representing information through music, known as “sonification”. [includes a sound clip and interview with Vicinanza]

A Jan. 22, 2014 GÉANT news release describes the project in more detail,

GÉANT, the pan-European data network serving 50 million research and education users at speeds of up to 500Gbps, recently demonstrated its power by sonifying 36 years’ worth of NASA Voyager spacecraft data and converting it into a musical duet.

The project is the work of Domenico Vicinanza, Network Services Product Manager at GÉANT. As a trained musician with a PhD in Physics, he also takes on the role of Arts and Humanities Manager, exploring new ways of representing data and discovery through the use of high-speed networks.

“I wanted to compose a musical piece celebrating the Voyager 1 and 2 *together*, so I used the same measurements (proton counts from the cosmic ray detector over the last 37 years) from both spacecraft, at exactly the same points in time, but several billions of kilometres apart from one another.

I used different groups of instruments and different sound textures to represent the two spacecrafts, synchronising the measurements taken at the same time.”

The result is an up-tempo string and piano orchestral piece.

You can hear the duet, which has been made available by the folks at GÉANT,

The news release goes on to provide technical details about the composition,

To compose the spacecraft duet, 320,000 measurements were first selected from each spacecraft, at one-hour intervals. Then that data was converted into two very long melodies, each comprising 320,000 notes, using different sampling frequencies, from a few kHz to 44.1 kHz.

The result of the conversion into waveform, using such a big dataset, was a wide collection of audible sounds, lasting from just a few seconds (slightly more than 7 seconds at 44.1 kHz) to a few hours (more than 5 hours using 1024 Hz as the sampling frequency). A certain number of data points, from a few thousand to 44,100, were each “converted” into 1 second of sound.
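Here’s a rough sketch of that audification arithmetic (my reconstruction in Python, not GÉANT’s actual pipeline): a long series of measurements is played back directly as an audio waveform, so the duration is simply the number of points divided by the sampling frequency. The synthetic “proton counts” below are stand-ins for the real Voyager data,

```python
import wave

import numpy as np

N_POINTS = 320_000
rng = np.random.default_rng(0)
counts = rng.poisson(lam=20.0, size=N_POINTS).astype(float)  # stand-in data

for rate_hz in (1024, 44_100):
    print(f"{rate_hz:>6} Hz sampling -> {N_POINTS / rate_hz:,.1f} s of audio")
# At 44.1 kHz this gives ~7.3 s, the "slightly more than 7 seconds" in the
# release; the multi-hour renderings presumably used longer data series.

# Normalize to 16-bit PCM and write the result out as a WAV file at 44.1 kHz.
centered = counts - counts.mean()
pcm = np.int16(32767 * centered / np.abs(centered).max())
with wave.open("voyager_sketch.wav", "wb") as f:
    f.setnchannels(1)      # mono
    f.setsampwidth(2)      # 16-bit samples
    f.setframerate(44_100)
    f.writeframes(pcm.tobytes())
```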

Using the grid computing facilities at EGI, GÉANT was able to create the duet live at the NASA booth at Supercomputing 2013, using its superfast network to transfer data to and from NASA.

I think this detail from the news release gives one a different perspective on the accomplishment,

Launched in 1977, both Voyager 1 and Voyager 2 are now decommissioned but still recording and sending live data to Earth. They continue to traverse different parts of the universe, billions of kilometres apart. Voyager 1 left our solar system last year.

The research is more than an amusing way to pass the time (from the news release),

While this project was created as a fun, accessible way to demonstrate the benefit of research and education networks to society, data sonification – representing data by means of sound signals – is increasingly used to accelerate scientific discovery; from epilepsy research to deep space discovery.

I was curious to learn more about how data represented by sound signals is being used to accelerate scientific discovery and sent that question and another to Dr. Vicinanza via Tamsin Henderson of DANTE and received these answers,

(1) How does “representing data by means of sound signals” increasingly “accelerate scientific discovery; from epilepsy research to deep space discovery”? In a practical sense, how does one do this research? For example, do you sit down and listen to a file and intuit different relationships for the data?

Vision and visual representation are intrinsically limited to three dimensions. We all know how amazing 3D cinema is, but in terms of representation of complex information, this is as far as it gets. There is no 4D or 5D. We live in three dimensions.

Sound, on the other hand, does not have any limitation of this kind. We can continue overlapping sound layers virtually without limit and still retain the capability of recognising and understanding them. Think of an orchestra or a pop band: even when the musicians are all playing together, we can actually follow each single instrument line (bass, drums, lead guitar, voice, …). Sound is therefore particularly precious when dealing with multi-dimensional data, since audification techniques can layer many variables at once without losing any of them.

In technical terms, auditory perception of complex, structured information could have several advantages in temporal, amplitude, and frequency resolution when compared to visual representations and often opens up possibilities as an alternative or complement to visualisation techniques. Those advantages include the capability of the human ear to detect patterns (detecting regularities), recognise timbres and follow different strands at the same time (i.e. the capability of following different instrument lines). This would offer, in a natural way, the opportunity of rendering different, interdependent variables onto sounds in such a way that a listener could gain relevant insight into the represented information or data.

In particular in the medical context, there have been several investigations using data sonification as a support tool for classification and diagnosis, from working on sonification of medical images to converting EEG to tones, including real-time screening and feedback on EEG signals for epilepsy.

The idea is to use sound to aggregate many “information layers”, many more than any graph or picture can represent, and to support the physician by giving a more comprehensive representation of the situation.
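As an illustration of that layering idea (my own toy example in Python, not a clinical tool), here’s a short sketch that renders several data channels as simultaneous tones: each channel controls the loudness of its own fixed-pitch voice, so the separate “strands” stay distinguishable by ear. The channel data and base frequencies are invented for the example,

```python
import numpy as np

SAMPLE_RATE = 44_100
SECONDS_PER_POINT = 0.05                # how long each data point sounds
BASE_FREQS = [220.0, 330.0, 440.0]      # one distinguishable pitch per channel

def sonify(channels: np.ndarray) -> np.ndarray:
    """Mix each row of `channels` (values scaled to 0..1) into one waveform."""
    n_ch, n_pts = channels.shape
    n = int(n_pts * SECONDS_PER_POINT * SAMPLE_RATE)
    t = np.arange(n) / SAMPLE_RATE
    mix = np.zeros(n)
    for ch, freq in zip(channels, BASE_FREQS):
        # Stretch each data point over its time window as an amplitude envelope.
        env = np.repeat(ch, -(-n // n_pts))[:n]   # ceil-repeat, then trim to n
        mix += env * np.sin(2 * np.pi * freq * t)
    return mix / n_ch                   # keep the mix within [-1, 1]

# Example: three slowly varying synthetic channels rendered together.
x = np.linspace(0.0, 1.0, 100)
audio = sonify(np.vstack([x, 1.0 - x, np.abs(np.sin(3 * np.pi * x))]))
```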

(2) I understand that as you age certain sounds disappear from your hearing, e.g., people over 25 years of age may not be able to hear above 15 kHz. (Note: There seems to be some debate as to when these sounds disappear, after 30, after 20, etc.) Wouldn’t this pose an age restriction on the people who could access the research or have I misunderstood what you’re doing?

No, there is actually no appreciable reduction in the advantages of sonification with ageing. The only precaution is not to use very high frequencies (above 15 kHz) in the sonification, and this is something that can be avoided without limiting the benefits of audification.

It is always good practice not to use excessively high frequencies, since they are not always well and uniformly perceived by everyone.

Our hearing works at its best in the kilohertz region (roughly 1,200 Hz to 3,800 Hz).
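Here’s a tiny Python illustration of that precaution: map data values onto pitches inside the band where hearing is most acute. The band limits follow Dr. Vicinanza’s 1,200–3,800 Hz figure; the linear mapping is my own simplification,

```python
LOW_HZ, HIGH_HZ = 1200.0, 3800.0  # the most-audible band quoted above

def value_to_pitch(x: float, x_min: float, x_max: float) -> float:
    """Map a data value linearly onto the most-audible frequency band."""
    frac = (x - x_min) / (x_max - x_min)
    return LOW_HZ + frac * (HIGH_HZ - LOW_HZ)

print(value_to_pitch(0.5, 0.0, 1.0))  # a mid-range value maps to 2500.0 Hz
```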

Thank you, Dr. Vicinanza and Tamsin Henderson, for this insight into representing data in multiple dimensions using sound and its application in research. And, thank you, too, for sharing a beautiful piece of music.

For the curious, I found some additional information about Dr. Vicinanza and his ‘sound’ work on his Nature Network profile page,

I am a composer, network engineer and researcher. I received my MSc and PhD degrees in Physics and studied piano, percussion and composition.

I worked as a professor of Sound Synthesis, Acoustics and Computer Music (Algorithmic Composition) at the Conservatory of Music of Salerno (Italy).

I currently work as a network engineer at DANTE (www.dante.net) and chair the ASTRA project (www.astraproject.org) for the reconstruction of musical instruments by means of computer models on GÉANT and EUMEDCONNECT.

I am also the co-founder and the technical coordinator of the Lost Sound Orchestra project (www.lostsoundsorchestra.org).

Interests

As a composer and researcher I have always been fascinated by the richness of the information coming from nature. I worked on introducing the sonification of seismic signals (in particular those coming from active volcanoes) as a scientific tool, working with geophysicists and volcanologists.

I also study applications of grid technologies for music and the visual arts, and as a composer I have taken part in several concerts, digital arts performances, festivals and webcasts.

My other interests (aside from music) include Argentine tango and watercolors.

Projects

ASTRA (Ancient instruments Sound/Timbre Reconstruction Application)
www.astraproject.org

The ASTRA project is a multidisciplinary project aiming to reconstruct the sound, or timbre, of ancient instruments (which no longer exist) using archaeological data such as fragments from excavations, written descriptions and pictures.

The technique used is physical modeling synthesis, a complex digital audio rendering technique that models the time-domain physics of the instrument.

In other words, the basic idea is to recreate a model of the musical instrument and produce the sound by simulating its behavior as a mechanical system. The application then produces one or more sounds corresponding to different configurations of the instrument (i.e., the different notes).
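ASTRA’s time-domain models are far more detailed than anything I could reproduce here, but the classic Karplus-Strong plucked-string algorithm gives a minimal flavor of physical modeling synthesis in a few lines of Python: the sound emerges from simulating a string’s mechanics (a wave circulating on a lossy delay line) rather than from recordings,

```python
import numpy as np

SAMPLE_RATE = 44_100

def pluck(freq_hz: float, seconds: float = 1.0, decay: float = 0.996) -> np.ndarray:
    """Simulate a plucked string: a delay line with a lowpass filter in the loop."""
    period = int(SAMPLE_RATE / freq_hz)    # delay-line length sets the pitch
    rng = np.random.default_rng(0)
    line = rng.uniform(-1.0, 1.0, period)  # the "pluck": a burst of random motion
    out = np.empty(int(SAMPLE_RATE * seconds))
    for i in range(out.size):
        out[i] = line[i % period]
        # Average adjacent samples (a simple lowpass) and apply the loop decay,
        # mimicking how a real string loses energy on each round trip:
        line[i % period] = decay * 0.5 * (line[i % period] + line[(i + 1) % period])
    return out

note = pluck(440.0)  # a one-second A4 "string"
```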

Lost Sounds Orchestra
www.lostsoundsorchestra.org

The Lost Sounds Orchestra is the ASTRA project’s orchestra. It is a unique orchestra made up of reconstructed ancient instruments coming from the ASTRA research activities, and the first ensemble in the world composed only of reconstructed instruments of the past. Listening to it is like jumping into the past, into a sound world completely new to our ears.

Since I haven’t had occasion to mention either GÉANT or DANTE previously, here’s more about those organizations and some acknowledgements from the news release,

About GÉANT

GÉANT is the pan-European research and education network that interconnects Europe’s National Research and Education Networks (NRENs). Together we connect over 50 million users at 10,000 institutions across Europe, supporting research in areas such as energy, the environment, space and medicine.

Operating at speeds of up to 500Gbps and reaching over 100 national networks worldwide, GÉANT remains the largest and most advanced research and education network in the world.

Co-funded by the European Commission under the EU’s 7th Research and Development Framework Programme, GÉANT is a flagship e-Infrastructure key to achieving the European Research Area – a seamless and open European space for online research – and assuring world-leading connectivity between Europe and the rest of the world in support of global research collaborations.

The network and associated services comprise the GÉANT (GN3plus) project, a collaborative effort comprising 41 project partners: 38 European NRENs, DANTE, TERENA and NORDUnet (representing the 5 Nordic countries). GÉANT is operated by DANTE on behalf of Europe’s NRENs.

About DANTE

DANTE (Delivery of Advanced Network Technology to Europe) is a non-profit organisation established in 1993 that plans, builds and operates large scale, advanced networks for research and education. On behalf of Europe’s National Research and Education Networks (NRENs), DANTE has built and operates GÉANT, a flagship e-Infrastructure key to achieving the European Research Area.

Working in cooperation with the European Commission and in close partnership with Europe’s NRENs and international networking partners, DANTE remains fundamental to the success of global research collaboration.

DANTE manages research and education (R&E) networking projects serving Europe (GÉANT), the Mediterranean (EUMEDCONNECT), Sub-Saharan Africa (AfricaConnect), Central Asia (CAREN) regions and coordinates Europe-China collaboration (ORIENTplus). DANTE also supports R&E networking organisations in Latin America (RedCLARA), Caribbean (CKLN) and Asia-Pacific (TEIN*CC). For more information, visit www.dante.net

Acknowledgements
NASA National Space Science Data Center and the Johns Hopkins University Voyager LECP experiment.
Sonification credits
Mariapaola Sorrentino and Giuseppe La Rocca.

I hope one of these days I’ll have a chance to ask a data visualization expert whether they think it’s possible to represent multiple dimensions visually and whether or not some types of data are better represented by sound.

* ‘described’ replaced by ‘discussed’ to avoid repetition, Feb. 10, 2014. (Sometimes I’m miffed by my own writing.)