Tag Archives: data sonification

Chandra Sonifications (extraplanetary music and data sonification)

I’m not sure why the astronomy community is so taken with creating music out of data, but it seems to be the most active of the science communities in the field. This October 15, 2023 article by Elizabeth Hlavinka for Salon.com provides a little context before describing some of the latest work (Note: Links have been removed),

Christine Malec, who has been blind since birth, has always been a big astronomy buff, fascinated by major questions about the universe like what happens when a limit reaches infinity and whether things like space travel could one day become a reality. However, throughout her childhood, most astronomical information was only accessible to her via space documentaries or science fiction books.

Nearly a decade ago, Malec discovered a completely new way to experience astronomy when she saw astronomer and musician Matt Russo, Ph.D., give a presentation at a local planetarium in Toronto. Using a process called astronomical sonification, Russo had translated information collected from the TRAPPIST-1 solar system, which has seven planets locked in an orbital resonance, into something people who are blind or have low vision could experience: music. 

Russo’s song sent a wave of goosebumps through Malec’s body. Something she had previously understood intellectually but never had turned into a sensory experience was suddenly, profoundly felt.

“It was unforgettable,” Malec told Salon in a phone interview. “I compare it to what it might be like for a sighted person to look up at the night sky and get a sensory intuition of the size and nature of the cosmos. As a blind person, that’s an experience I hadn’t had.”

Through astronomical sonification, scientists map complex astronomical structures like black holes or exploded stars through the similarly expansive and multidimensional world of sound. Translating data from outer space into music not only expands access to astronomy for people who are blind or have low vision, but it also has the potential to help all scientists better understand the universe by leading to novel discoveries. Like images from the James Webb telescope that contextualize our tiny place in the universe, astronomical sonification similarly holds the power to connect listeners to the cosmos.

“It really does bring a connection that you don’t necessarily get when you’re just looking at a cluster of galaxies that’s billions of light years away from you that stretches across many hundreds of millions of light years,” said Kimberly Kowal Arcand, Ph.D., a data visualizer for NASA’s Chandra X-ray Observatory. “Having sound as a way of experiencing that type of phenomenon, that type of object, whatever it is, is a very valid way of experiencing the world around you and of making meaning.”

Malec serves as a consultant for Chandra Sonifications, which translates complex data from astronomical objects into sound. One of their most popular productions, which has been listened to millions of times, sonified a black hole in the Perseus galaxy cluster about 240 million light-years away. When presenting this sonification at this year’s [2023] SXSW festival in March, Russo, who works with Chandra through an organization he founded called SYSTEM Sounds, said this eerie sound used to depict the black hole had been likened to “millions of damned souls being sucked into the pits of hell.” 

Here’s some of what the audience at the 2023 SXSW festival heard,

If you have the time, do read Hlavinka’s October 15, 2023 article as she tells a good story with many interesting tidbits such as this (Note: Links have been removed),

William “Bill” Kurth, Ph.D., a space physicist at the University of Iowa, said the origins of astronomical sonification can be traced back to at least the 1970s when the Voyager-1 spacecraft recorded electromagnetic wave signals in space that were sent back down to his team on Earth, where they were processed as audio recordings.

Back in 1979, the team plotted the recordings on a frequency-time spectrogram similar to a voiceprint you see on apps that chart sounds like birds chirping, Kurth explained. The sounds emitted a “whistling” effect created by waves following the magnetic fields of the planet rather than going in straight lines. The data seemed to confirm what they had suspected: lightning was shocking through Jupiter’s atmosphere.

“At that time, the existence of lightning anywhere other than in Earth’s atmosphere was unknown,” Kurth told Salon in a phone interview. “This became the first time that we realized that lightning might exist on another planet.”

And this (Note: Links have been removed),

Beyond astronomy, sonification can be applied to any of the sciences, and health researchers are currently looking at sonifying DNA strands to better understand how proteins fold in multiple dimensions. Chandra is also working on constructing tactile 3-D models of astronomical phenomena, which also expands access for people who are blind or have low vision — those who have historically only been able to experience these sciences through words, Malec said.

Chandra and other sonification projects

I found a brief and somewhat puzzling description of the Chandra sonification project on one of the US National Aeronautics and Space Administration (NASA) websites. From a September 22, 2021 posting on the Marshall Science Research and Projects Division blog (Note: Links have been removed),

On 9/16/21, a Chandra sonification image entitled “Jingle, Pluck, and Hum: Sounds from Space” was released to the public.  Since 2020, Chandra’s “sonification” project has transformed astronomical data from some of the world’s most powerful telescopes into sound.  Three new objects — a star-forming region, a supernova remnant, and a black hole at the center of a galaxy — are being released.  Each sonification has its own technique to translate the astronomical data into sound.

For more information visit: Data Sonifications: Westerlund 2 (Multiwavelength), Tycho’s Supernova Remnant, and M87. https://www.nasa.gov/missions_pages/chandra/main/index.html.

A Chandra article entitled “Data Sonification: Sounds from the Milky Way” was also released in the NASA STEM Newsletter.  This newsletter was sent to 54,951 subscribers and shared with the office of STEM engagements social media tools with approximately 1.7M followers. For more information visit: https://myemail.constantcontact.com/NASA-EXPRESS—-Your-STEM-Connection-for-Sept–9–2021.html?soid=1131598650811&aid=iXfzAJk6x_s

I’m a little puzzled by the reference to a Chandra sonification image but I’m assuming that they also produce data visualizations. Anyway, as Hlavinka notes, Chandra is a NASA X-ray Observatory and they have a number of different projects/initiatives.

Getting back to data sonification, Chandra offers various audio files on its A Universe of Sound webpage,

Here’s a sampling of three data sonification posts (there are more on this blog),

Enjoy!

The sounds of recent (December 2023) seismic activity in Iceland

On the heels of yesterday’s When the rocks sing “I got rhythm” (my December 18, 2023 posting), I received (via email) a media notice/reminder/update about a Northwestern University (Chicago, Illinois, US) app that allows you to listen,

From the original November 16, 2023 Northwestern University news release by Amanda Morris (also published as a November 16, 2023 news item on phys.org),

As seismic activity intensifies ahead of an impending eruption of a fissure near Iceland’s Fagradalsfjall volcano, the island’s Reykjanes Peninsula is experiencing hundreds of earthquakes per day.

Now, listeners can follow along through Northwestern University’s Earthtunes app. Developed in 2019, the app transforms seismic frequencies into audible pitches. Whereas a classic seismometer records motions in the Earth’s surface as squiggly lines scratched across a page, Earthtunes enables users to hear, rather than see, activity.

So far, Iceland’s recent, ongoing seismic activity sounds like a jarring symphony of doors slamming, hail pelting against a tin roof or window and people cracking trays of ice cubes.

By listening to activities recorded by the Global Seismographic Network station (named BORG), located to the north-northeast of Reykjavik, people can hear how the seismic activity has changed around the Fagradalsfjall area.

In this audio clip, listeners can hear 24 hours of activity recorded from Friday, Nov. 10, into Saturday, Nov. 11. Peppered with a cacophony of sharp knocking noises, it sounds like someone is insistently banging on a door.

“The activity is formidable, exciting and scary,” said Northwestern seismologist Suzan van der Lee, who co-developed Earthtunes. “Iceland did the right thing by evacuating residents in nearby Grindavik and the nearby Svartsengi geothermal power plant, one of the world’s oldest geothermal power plants, which was the first to combine electricity generation with hot water for heating in the region.”

Van der Lee is the Sarah Rebecca Roland Professor of Earth and Planetary Sciences at Northwestern’s Weinberg College of Arts and Sciences. In her research, she applies data science to millions of records of seismic waves in order to decode seismic signals, which harbor valuable information about the Earth’s interior dynamics.

As hundreds of earthquakes shake the ground, van der Lee says the impending eruption is reminiscent of the 1973 eruption of Heimaey on Iceland’s Vestmannaeyjar archipelago.

“This level of danger is unprecedented for this area of Iceland, but not for Iceland as a whole,” said van der Lee, who hiked Fagradalsfjall in June. “While most Icelandic volcanoes erupt away from towns and other infrastructure, Icelanders share the terrible memory of an eruption 50 years ago on the island Vestmannaeyjar, during which lava covered part of that island’s town, Heimaey. The residents felt very vulnerable, as the evacuated people of Grindavik feel now. In a few days or weeks, they might no longer have their jobs, homes and most possessions, while still having to feed their families and pay their mortgages. However, partially resulting from that eruption on Vestmannaeyjar, Icelanders are well prepared for the current situation in the Fagradallsfjall-Svartsengi-Grindavik area.” 

Accelerated audio

This audio clip presents the same data, with the pitch increased by 10 octaves. Listeners will hear a long, low rumbling sound, punctuated by an occasional slamming door.

“What you’re hearing is 24 hours of seismic data — filled with earthquake signals,” van der Lee said. “The vast majority of these quakes are associated with the magma intrusion into the crust of the Fagradallsfjall-Svartsengi-Grindavik area of the Reykjanes Peninsula. Seismic data are not audible; their frequencies are too low. So, the 24 hours of data are compressed into approximately 1.5 minutes of audio data. You can hear an unprecedented intensity of earthquakes during the night from last Friday into Saturday and related to a new magma intrusion into the crust area.”

In a third audio clip, the same data is less compressed, with the pitch increased by just seven octaves,

“One can hear frequent earthquakes happening at this point,” van der Lee said. “Icelandic seismologists have been monitoring these quakes and their increasing vigor and changing patterns. They recognized similar patterns to earthquake swarms that preceded the 2021-2023 eruptions of the adjacent Fagradallsfjall volcano.”

Earthtunes is supported by the American Geophysical Union and Northwestern’s department of Earth and planetary sciences. Seismic data is obtained from the Earthscope Consortium. The app was designed and developed by van der Lee, Helio Tejedor, Melanie Marzen, Igor Eufrasio, Josephine Anderson, Liam Toney, Cooper Barth, Michael Ji and Leonicio Cabrera.
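The octave and compression figures in the news release hang together arithmetically: speeding playback up by a factor of 2^n multiplies every frequency by 2^n, which raises the pitch by n octaves. Here’s my own back-of-the-envelope sketch of that relationship (my illustration, not anything from the Earthtunes app itself):

```python
# Time-compression arithmetic behind Earthtunes-style audification:
# playing data back 2**n times faster raises the pitch by n octaves
# and shortens 24 hours of seismic data accordingly.

SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 s of seismic data


def compressed_duration(octaves: int) -> float:
    """Playback length in seconds after raising the pitch by `octaves` octaves."""
    return SECONDS_PER_DAY / 2**octaves


# 10 octaves -> 86,400 / 1,024 ~= 84 s: the release's "approximately 1.5 minutes"
print(round(compressed_duration(10)))  # 84
# 7 octaves -> 86,400 / 128 = 675 s: about 11 minutes, the "less compressed" clip
print(round(compressed_duration(7)))   # 675
```

The 10-octave clip really does land at roughly a minute and a half, which is a nice sanity check on the release’s numbers.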

Jennifer Ouellette’s November 16, 2023 article for Ars Technica draws heavily from the news release while delving into the topic of data sonification (making sounds from data) (Note: Links have been removed),

….

Sonification of scientific data is an area of growing interest in many different fields. For instance, several years ago, a project called LHCSound built a library of the “sounds” of a top quark jet and the Higgs boson, among others. The project hoped to develop sonification as a technique for analyzing the data from particle collisions so that physicists could “detect” subatomic particles by ear. Other scientists have mapped the molecular structure of proteins in spider silk threads onto musical theory to produce the “sound” of silk in hopes of establishing a radical new way to create designer proteins. And there’s a free app for Android called the Amino Acid Synthesizer that enables users to create their own protein “compositions” from the sounds of amino acids.

The December 19, 2023 Northwestern University media update points to the latest audio file of the eruption of the Svartsengi-Grindavik fissure in Iceland: 24 hours as of Monday, December 18, 2023 14:00:00 UTC.

Enjoy!

One last thing: there are a number of postings about data sonification here; many, but not all, scientists and/or communication practitioners think to include audio files.

The sound of dirt

So you don’t get your hopes up, this acoustic story doesn’t offer any accompanying audio/acoustic files, i.e., I couldn’t find the sound of dirt.

In any event, there’s still an interesting story in an April 10, 2023 news item on phys.org,

U.K. and Australian ecologists have used audio technology to record different types of sounds in the soils of a degraded and restored forest to indicate the health of ecosystems.

Non-invasive acoustic monitoring has great potential for scientists to gather long-term information on species and their abundance, says Flinders University [Australia] researcher Dr. Jake Robinson, who conducted the study while at the University of Sheffield in England.

Photo: Pixabay

An April 8, 2023 Flinders University press release, which originated the news item, delves into the researcher’s work, Note: Links have been removed,

“Eco-acoustics can measure the health of landscapes affected by farming, mining and deforestation but can also monitor their recovery following revegetation,” he says.  

“From earthworms and plant roots to shifting soils and other underground activity, these subtle sounds were stronger and more diverse in healthy soils – once background noise was blocked out.”   

The subterranean study used special microphones to collect almost 200 sound samples, each about three minutes long, from soil samples collected in restored and cleared forests in South Yorkshire, England. 

“Like underwater and above-ground acoustic monitoring, below-ground biodiversity monitoring using eco-acoustics has great potential,” says Flinders University co-author, Associate Professor Martin Breed. 

Since joining Flinders University, Dr Robinson has released his first book, entitled Invisible Friends (DOI: 10.53061/NZYJ2969) [emphasis mine], which covers his core research into ‘how microbes in the environment shape our lives and the world around us’. 

Now a researcher in restoration genomics at the College of Science and Engineering at Flinders University, the new book examines the powerful role invisible microbes play in ecology, immunology, psychology, forensics and even architecture.  

“Instead of considering microbes the bane of our life, as we have done during the global pandemic, we should appreciate the many benefits they bring in keeping plants, animals, and ourselves, alive.”  

In another new article, Dr Robinson and colleagues call for a return to ‘nature play’ for children [emphasis mine] to expose their developing immune systems to a diverse array of microbes at a young age for better long-term health outcomes. 

“Early childhood settings should optimise both outdoor and indoor environments for enhanced exposure to diverse microbiomes for social, cognitive and physiological health,” the researchers say.  

“It’s important to remember that healthy soils feed the air with these diverse microbes,” Dr Robinson adds.  

It seems Robinson has gone on a publicity blitz, academic style, for his book. There’s a May 22, 2023 essay by Robinson, Carlos Abrahams (Senior Lecturer in Environmental Biology – Director of Bioacoustics, Nottingham Trent University); and Martin Breed (Associate Professor in Biology, Flinders University) on the Conversation, Note: A link has been removed,

Nurturing a forest ecosystem back to life after it’s been logged is not always easy.

It can take a lot of hard work and careful monitoring to ensure biodiversity thrives again. But monitoring biodiversity can be costly, intrusive and resource-intensive. That’s where ecological acoustic survey methods, or “ecoacoustics”, come into play.

Indeed, the planet sings. Think of birds calling, bats echolocating, tree leaves fluttering in the breeze, frogs croaking and bush crickets stridulating. We live in a euphonious theatre of life.

Even the creatures in the soil beneath our feet emit unique vibrations as they navigate through the earth to commute, hunt, feed and mate.

Robinson has published three papers within five months of each other, in addition to the book, which seems like heavy output to me.

First, here’s a link to and a citation for the education paper,

Optimising Early Childhood Educational Settings for Health Using Nature-Based Solutions: The Microbiome Aspect by Jake M. Robinson and Alexia Barrable. Educ. Sci. 2023, 13 (2), 211 DOI: https://doi.org/10.3390/educsci13020211
Published: 16 February 2023

This is an open access paper.

For these next two links and citations, the articles seem to be very closely linked,

The sound of restored soil: Measuring soil biodiversity in a forest restoration chronosequence with ecoacoustics by Jake M. Robinson, Martin F. Breed, Carlos Abrahams. doi: https://doi.org/10.1101/2023.01.23.525240 Posted January 23, 2023

The sound of restored soil: using ecoacoustics to measure soil biodiversity in a temperate forest restoration context by Jake M. Robinson, Martin F. Breed, Carlos Abrahams. Restoration Ecology, Online Version of Record before inclusion in an issue e13934 DOI: https://doi.org/10.1111/rec.13934 First published: 22 May 2023

Both links lead to open access papers.

Finally, there’s the book,

Invisible Friends: How Microbes Shape Our Lives and the World Around Us by Jake Robinson. Pelagic Publishing, 2022. ISBN 9781784274337 DOI: 10.53061/NZYJ2969

This you have to pay for.

For those who would like to hear something from nature, I have a May 27, 2022 posting, The sound of the mushroom. Enjoy!

Music of the chemical elements

It’s a little late since this work was presented at the American Chemical Society’s (ACS) Spring 2023 meeting but it’s a fascinating approach to the periodic table of elements that features a longstanding interest of mine, data sonification.

A March 26, 2023 news item on phys.org announces the then upcoming presentation about a musical version of the periodic table of elements,

In chemistry, we have He [helium], Fe [iron] and Ca [calcium]—but what about do, re and mi? Hauntingly beautiful melodies aren’t the first things that come to mind when looking at the periodic table of the elements. However, using a technique called data sonification, a recent college graduate has converted the visible light given off by the elements into audio, creating unique, complex sounds for each one. Today [March 26, 2023], the researcher reports the first step toward an interactive, musical periodic table.

A March 26, 2023 ACS news release on EurekAlert, which originated the news item, provides more detail (the presentation abstract is included),

The researcher will present his results at the spring meeting of the American Chemical Society (ACS). ACS Spring 2023 is a hybrid meeting being held virtually and in-person March 26–30 [2023], and features more than 10,000 presentations on a wide range of science topics.

Previously, W. Walker Smith, the project’s sole investigator, took his combined passions of music and chemistry and converted the natural vibrations of molecules into a musical composition. “Then I saw visual representations of the discrete wavelengths of light released by the elements, such as scandium,” says Smith. “They were gorgeous and complex, and I thought, ‘Wow, I really want to turn these into music, too.’”

Elements emit visible light when they are energized. This light is made up of multiple individual wavelengths, or particular colors, with brightness levels that are unique for each element. But on paper, the collections of wavelengths for different elements are hard to tell apart visually, especially for the transition metals, which can have thousands of individual colors, says Smith. Converting the light into sound frequencies could be another way for people to detect the differences between elements.

However, creating sounds for the elements on the periodic table has been done before. For instance, other scientists have assigned the brightest wavelengths to single notes played by the keys on a traditional piano. But this approach reduced the rich variety of wavelengths released by some elements into just a few sounds, explains Smith, who is currently a researcher at Indiana University.

To retain as much of the complexity and nuance of the element spectra as possible, Smith consulted faculty mentors at Indiana University, including David Clemmer, Ph.D., a professor in the chemistry department, and Chi Wang, D.M.A., a professor in the Jacobs School of Music. With their assistance, Smith built a computer code for real-time audio that converted each element’s light data into mixtures of notes. The discrete color wavelengths became individual sine waves whose frequency corresponded to that of the light, and their amplitude matched the brightness of the light.

Early in the research process, Clemmer and Smith discussed the pattern similarities between light and sound vibrations. For instance, within the colors of visible light, violet has almost double the frequency of red, and in music, one doubling of frequency corresponds to an octave. Therefore, visible light can be thought of as an “octave of light.” But this octave of light is at a much higher frequency than the audible range. So, Smith scaled the sine waves’ frequencies down by a factor of approximately 10⁻¹², fitting the audio output into a range where human ears are most sensitive to differences in pitch.

Because some elements had hundreds or thousands of frequencies, the code allowed these notes to be generated in real time, forming harmonies and beating patterns as they mixed together. “The result is that the simpler elements, such as hydrogen and helium, sound vaguely like musical chords, but the rest have a more complex collection of sounds,” says Smith. For example, calcium sounds like bells chiming together with a rhythm resulting from how the frequencies interact with each other. Listening to the notes from some other elements reminded Smith of a spooky background noise, similar to music used in cheesy horror movies. He was especially surprised by the element zinc, which despite having a large number of colors, sounded like “an angelic choir singing a major chord with vibrato.”

“Some of the notes sound out of tune, but Smith has kept true to that in this translation of the elements into music,” says Clemmer. These off-key tones — known musically as microtones — come from frequencies that are found between the keys of a traditional piano. Agreeing, Wang says, “The decisions as to what’s vital to preserve when doing data sonification are both challenging and rewarding. And Smith did a great job making such decisions from a musical standpoint.”

The next step is to turn this technology into a new musical instrument with an exhibit at the WonderLab Museum of Science, Health, and Technology in Bloomington, Indiana. “I want to create an interactive, real-time musical periodic table, which allows both children and adults to select an element and see a display of its visible light spectrum and hear it at the same time,” says Smith. He adds that this sound-based approach has potential value as an alternative teaching method in chemistry classrooms, because it’s inclusive to people with visual impairments and different learning styles.

Smith acknowledges support and funding from Indiana University’s Department of Chemistry, Center for Electronic and Computer Music, and Center for Rural Engagement; an Indiana University Undergraduate Research grant; the 2022 Annual Project Jumpstart Innovation Competition; and the Indiana University Hutton Honors College Grant Program.

A recorded media briefing on this topic will be posted Monday, March 27 [2023], by 10 a.m. Eastern time at www.acs.org/acsspring2023briefings. Reporters can request access to media briefings during the embargo period by contacting newsroom@acs.org. [The ACS 2023 Spring Meeting media briefings are freely available as of June 12, 2023. The “What do the elements sound like? Media Briefing” runs approximately 11 mins.]

If you keep going past the news release, you’ll find this presentation abstract,

Title
Designing an interactive musical periodic table: sonification of visible element emission spectra

Abstract
What does the element helium sound like? What about hydrogen? While these may seem like absurd questions, the process of data sonification can be used to convert the visible spectra of chemical elements into sounds. When stimulated by electricity or heat, elements release distinct wavelengths of light depending on their electron energy levels—a sort of “chemical footprint” unique to every element. These frequencies of light, which we perceive as different colors, can be scaled into the audio range to yield different sonic frequencies, allowing one to hear the different sounds of chemical elements. This research project involved the construction of an interactive musical periodic table, combining musical and visual representations of elemental spectra from high-resolution spectral datasets.

The interactive periodic table was designed using Max/MSP, a programming language that uses digital signal processing (DSP) algorithms to generate real-time audio and visual outputs. This allows all spectral lines of an element to be played simultaneously (as a “chord”) or for individual lines to be played in succession (as a “melody”). This highly interdisciplinary project has applications spanning data analysis, STEAM (STEM [science, technology, engineering, and mathematics] + Arts) education, and public science outreach. Sonification of scientific data provides alternative methods of analysis that can expand access of such data to blind and visually impaired people. Sonification can even enhance data analysis via traditional data visualization by providing a supplementary layer of auditory information, and sonification-based learning models have been shown to improve student engagement and understanding of scientific concepts like protein folding.

This program is currently being implemented in several middle and high school music and science classes, as well as a public music/science show titled “The Sound of Molecules” at WonderLab Museum of Science. Future work will focus on designing a free and open-source version of the program that does not require specialized DSP software.
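The mapping the ACS release describes — each emission line becomes a sine wave whose frequency is the light’s frequency scaled down by roughly 10⁻¹², with amplitude set by the line’s brightness — can be sketched in a few lines. This is my own hypothetical illustration of that idea, not Smith’s actual Max/MSP program, and the line data below are made-up placeholders rather than real spectra:

```python
# Sketch: sonify a list of (wavelength_nm, brightness) emission lines
# by summing sine waves, per the scaling described in the ACS release.
import numpy as np

C = 3.0e8        # speed of light, m/s
SCALE = 1.0e-12  # shifts the "octave of light" down into the audible range


def sonify_lines(lines, duration=2.0, rate=44100):
    """Mix (wavelength_nm, brightness) pairs into one normalized audio buffer."""
    t = np.linspace(0.0, duration, int(rate * duration), endpoint=False)
    audio = np.zeros_like(t)
    for wavelength_nm, brightness in lines:
        f_light = C / (wavelength_nm * 1e-9)  # optical frequency, Hz
        f_audio = f_light * SCALE             # scaled audible frequency, Hz
        audio += brightness * np.sin(2 * np.pi * f_audio * t)
    peak = np.max(np.abs(audio))
    return audio / peak if peak > 0 else audio  # normalize to [-1, 1]


# A 700 nm (red) line maps to ~429 Hz and a 400 nm (violet) line to 750 Hz --
# nearly an octave apart, mirroring the "octave of light."
audio = sonify_lines([(700, 1.0), (400, 0.6)])
```

With real spectral datasets, the transition metals’ thousands of lines would mix into exactly the kind of dense beating textures Smith describes.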

“The Heart’s Knowledge: Science and Empathy in the Art of Dario Robleto” from Jan. 27 to July 9, 2023 at The Block Museum of Art (Northwestern University, Chicago, Illinois)

I’m sorry to be late. Thankfully this show extends into July 2023, so, there’s still plenty of time to get to Chicago’s The Block Museum of Art (at Northwestern University) art/science exhibition. I found this on the museum’s exhibition page for “The Heart’s Knowledge: Science and Empathy in the Art of Dario Robleto” show,

What do we owe to the memories of one another’s hearts?

For American artist Dario Robleto (b. 1972), artists and scientists share a common aspiration: to increase the sensitivity of their observations. Throughout the history of scientific invention, instruments like the cardiograph and the telescope have extended the reach of perception from the tiniest stirrings of the human body to the farthest reaches of space. In his prints, sculptures, and video and sound installations, Robleto contemplates the emotional significance of these technologies, bringing us closer to the latent traces of life buried in the scientific record.

The Heart’s Knowledge concentrates on the most recent decade of Robleto’s creative practice, a period of deepening engagement with histories of medicine, biomedical engineering, sound recording, and space exploration. The exhibition organizes the artist’s conceptually ambitious, elegantly wrought artworks as a series of multisensory encounters between art and science. Each work seeks to attune viewers to the material traces of life at scales ranging from the intimate to the universal, returning always to the question: Does empathy extend beyond the boundaries of time and space?

Banner image for “The Heart’s Knowledge: Science and Empathy in the Art of Dario Robleto” exhibition page. Courtesy of The Block Museum of Art (Northwestern University) and artist, Dario Robleto

Here’s more from a January 27, 2023 Northwestern University news release (received via email),

Exhibition searches for meaning at the limits of science and perception

“The Heart’s Knowledge: Science and Empathy in the Art of Dario Robleto” is on view Jan. 27 to July 9 [2023] at The Block Museum of Art

  • Works are informed by dialogue with Northwestern Engineering researchers during five-year residency  
  • Exhibition a tribute to NASA Golden Record creator, ‘whose heart has left the solar system’
  • Opening conversation with the artist will take place at 2 p.m. on [Saturday] Feb. 4 [2023]

American artist Dario Robleto (b. 1972) believes artists and scientists share a common aspiration: to increase the sensitivity of their observations.

From understanding the human body’s pulses and brainwaves to viewing the faintest glimmers of light from the edge of the observable universe, groundbreaking science pushes the limits of perception. Similarly, the perceptive work of artists can extend the boundaries of empathy and understanding.

Since 2018, Robleto has served as an artist-at-large at the McCormick School of Engineering. This unique partnership between The Block Museum and McCormick gave the artist an open “hall pass” to learn from, collaborate with and question scientists, engineers and experts from across the University.

Robleto’s five-year residency concludes with the exhibition “The Heart’s Knowledge: Science and Empathy in the Art of Dario Robleto.” Co-presented by The Block Museum and McCormick, the exhibition is on view from Jan. 27 to July 9 [2023] at The Block Museum, 40 Arts Circle Drive on Northwestern’s Evanston campus. [emphasis mine]

A free opening conversation with the artist will take place at 2 p.m. on Saturday, Feb. 4 [2023], in Norris University Center’s McCormick Auditorium, 1999 Campus Drive in Evanston.

About the Exhibition: “The Heart’s Knowledge”

Throughout the history of scientific invention, instruments like the cardiograph and the telescope have extended the reach of perception from the tiniest stirrings of the human body to the farthest reaches of space.

Robleto’s prints, sculptures and video and sound installations contemplate the emotional significance of these technologies, bringing viewers closer to the latent traces of life buried in the scientific record.

“The Heart’s Knowledge” represents a decade of Robleto’s creative practice, from 2012 to 2022, a period marked by a deepening engagement with science, including astronomy, synthetic biology and exobiology, and a widening embrace of new materials and creative forms, from 3D-printed objects to film.

Robleto dedicates the exhibition to Ann Druyan, the creative director of NASA’s Golden Record for the Voyager 1 and 2 projects. The record includes Druyan’s brainwaves and heartbeats, recorded as she reflected on her secret love for famed astronomer and future husband Carl Sagan. The act of sneaking “love on board the Voyager” inspired Robleto to compose a love letter to the only human whose “heart has left the solar system.”

Robleto sees Druyan’s act to include her emotions on the record as the central inspiration of his work. “I consider it the greatest work of subversive, avant-garde art not yet given its due,” Robleto said. “The Golden Record and Ann’s radical act brought us all together to think about what it means to be human — to one another and to unknown beings on other worlds.”

The exhibition organizes the artist’s conceptually ambitious, elegantly wrought artworks as a series of multisensory encounters between art and science. Each asks viewers to seek out the material traces of life in scales ranging from the intimate to the universal, and to question: Does empathy extend beyond the boundaries of time and space?

“Whether he’s addressing the most minute phenomena of the body or the horizons of the known universe, Robleto binds the rigor of scientific inquiry with artistic expression,” said exhibition curator Michael Metzger, The Block’s Pick-Laudati Curator of Media Arts.

“Straining at the bounds of observation, Robleto discovers unity at the limits; the common endeavor of art and science to achieve a form of knowledge that language alone cannot speak,” Metzger said.

The exhibition includes three sections:

Heartbeats
Rooted in the artist’s longstanding fascination with the clinical and cultural history of the human heart, “Heartbeats” draws inspiration from 19th-century pioneers of cardiography, whose instruments graphically measured heart activity for the first time, leaving behind poignant records of human subjectivity. In “The First Time, the Heart (A Portrait of Life 1854-1913)” (2017), Robleto transforms early measurements of heartbeats into photolithographs executed on paper hand-sooted with candle flames. For the installation “The Pulse Armed with a Pen (An Unknown History of the Human Heartbeat)” (2014), Robleto collaborated with sound historian Patrick Feaster to digitally resurrect these heartbeats in audio form, giving visitors access to intimate pulses of life recorded before the invention of sound playback.

Wavelengths
Robleto has recently embraced digital video to create works that narrate transformational episodes in the recording and study of wave phenomena. “Wavelengths” comprises two hour-long immersive video installations. “The Boundary of Life is Quietly Crossed” (2019) is inspired by NASA’s Voyager Golden Record, a gold-plated phonographic disc launched into space onboard the Voyager I and II space probes in 1977. In “The Aorta of an Archivist” (2020-2021), Robleto investigates three breakthroughs in the history of recording: the first recording of a choral performance made with an Edison wax cylinder, the first heartbeat captured while listening to music and the first effort to transcribe the brain wave activity of a dreaming subject.

Horizons
In the final section, “Horizons,” Robleto evokes the spirit of the Hubble telescope and the search for extraterrestrial life, gazing out at the boundaries of the observable universe. Inspired by his time as an artist-in-residence at the SETI Institute (Search for Extraterrestrial Intelligence) and as artistic consultant to the Breakthrough Initiatives, his intricate sculptures, such as “Small Crafts on Sisyphean Seas” (2018), give shape to the speculative search for intelligent life in the universe. Other works like “The Computer of Jupiter” (2019) are framed as “gifts for extraterrestrials” offering an alternative view of the best way to begin a dialogue with alien intelligences.

The Artist-at-Large Program at Northwestern

Lisa Corrin, the Ellen Philips Katz Executive Director of The Block Museum of Art, and Julio M. Ottino, dean of McCormick School of Engineering, envisioned the possibilities of this unconventional partnership between scientist and artist when they launched the artist-at-large initiative together. The work is part of an ongoing Art + Engineering initiative and a part of the whole-brain engineering philosophy at Northwestern Engineering.

“Here, a university’s school of engineering and its art museum come together in the shared belief that transformative innovation can happen at the intersections of usually distinct academic disciplines and modes of creativity and inquiry,” Corrin said. “We had faith that something meaningful would emerge organically if we merely provided structures in which informal interactions might take place.”

“We wanted to model for young engineers the value of embracing uncertainty as part of the journey that leads to innovation and opens pathways within the imagination — as artists do,” Ottino said. “We are grateful to Dario Robleto for accepting our invitation to come to Northwestern and to enter the unknown with us. He has taught us that our shared future resides in our capacity for compassion and for empathy, the ethos at the heart of his work that holds the most promise for those at the forefront of science in the interest of humankind.”

More information about Robleto’s residency can be found in the article “Dario is our Socrates” on the Block Museum website and by viewing the Northwestern Engineering video “Artist-at-Large Program: Dario Robleto.”

Exhibition Events

“The Heart’s Knowledge” will include six months of events and dialogues that will illuminate the intersections in Robleto’s practice. All events are free and open to the public. For current program information, visit The Block Museum website.

Program highlights for February and March include:

Science, Art and the Search for Meaning: Opening Conversation with Dario Robleto
Saturday, Feb. 4 [2023], 2 p.m.
Norris University Center, McCormick Auditorium
1999 Campus Drive

The Block Museum hosts a discussion that reaches across boundaries to examine the shared pursuit that binds artists and scientists. The conversation features artist Dario Robleto; Jennifer Roberts, professor of the humanities at Harvard University; Lucianne Walkowicz, astronomer and co-founder of the JustSpace Alliance; and Michael Metzger, Pick-Laudati Curator of Media Arts and curator of “The Heart’s Knowledge.”

“X: The Man with the X-Ray Eyes” (1963)
Friday, Feb. 10 [2023], 7 p.m.
Block Cinema
40 Arts Circle Drive

A Science on Screen program with Catherine Belling, associate professor of medical education at Northwestern University Feinberg School of Medicine.

“First Man” (2018)
Saturday Feb. 18 [2023], 1 p.m.
Block Cinema
40 Arts Circle Drive

A Science on Screen program featuring history researcher Jordan Bimm of the University of Chicago, who will discuss the military origins of “space medicine.”

Exhibition Conversation: Interstellar Aesthetics and Acts of Translation in Art and Science
Wednesday, Feb. 22 [2023], 6 p.m.
Block Museum

Joining artist Dario Robleto in conversation are Elizabeth Kessler, exhibition publication contributor and a lecturer in American Studies at Stanford University, and Shane Larson, research professor of physics and astronomy and associate director of CIERA (Center for Interdisciplinary Exploration and Research in Astrophysics) at Northwestern.

Gallery Talk: Stillness, Wonder and Gifts for Extraterrestrials
Thursday, Feb. 23 [2023], 12:30 p.m.
Block Museum

Elizabeth Kessler of Stanford University will discuss Robleto’s “gifts for extraterrestrials” series.

Online Conversation: Ann Druyan, The Golden Record and the Memory of Our Hearts
Wednesday, March 8 [2023], 6 p.m. [ET]
Block Cinema

Ann Druyan, creative director for NASA’s Voyager Interstellar Messaging Project and writer and producer of the PBS television series “Cosmos,” joins Robleto and art historian Jennifer Roberts for a conversation about the Golden Record and the heart’s memory.

Exhibition Publication

In conjunction with the exhibition, The Block Museum of Art and the McCormick School of Engineering are proud to announce the publication of “The Heart’s Knowledge: Science and Empathy in the Art of Dario Robleto” (Artbook, D.A.P., 2023).

The publication is edited by Michael Metzger with contributions by Metzger, Robert M. Brain, Daniel K. L. Chua, Patrick Feaster, Stefan Helmreich, Elizabeth A. Kessler, Julius B. Lucks, Elizabeth Kathleen Mitchell, Alexander Rehding, Jennifer L. Roberts, Claire Isabel Webb and Dario Robleto.

About Dario Robleto

Dario Robleto was born in San Antonio, Texas, in 1972 and received his BFA from the University of Texas at San Antonio in 1997. He lives and works in Houston, Texas. The artist has had numerous solo exhibitions since 1997, most recently at the Spencer Museum of Art, Lawrence, Kansas (2021); the Radcliffe Institute for Advanced Study at Harvard University (2019); the McNay Museum, San Antonio, Texas (2018); Menil Collection, Houston, Texas (2014); the Baltimore Museum of Art (2014); the New Orleans Museum of Art (2012); and the Museum of Contemporary Art, Denver (2011).

He is currently working on his first book, “Life Signs: The Tender Science of the Pulsewave,” co-authored with art historian Jennifer Roberts, the Elizabeth Cary Agassiz Professor of the Humanities at Harvard (University of Chicago Press).

Exhibition Credits

“The Heart’s Knowledge: Science and Empathy in the Art of Dario Robleto” exhibition is made possible through a partnership with the Robert R. McCormick School of Engineering and Applied Science at Northwestern University. Major support also was provided by the National Endowment for the Arts. Additional support is contributed by the Dorothy J. Speidel Fund; the Bernstein Family Contemporary Art Fund; the Barry and Mary Ann MacLean Fund for Art and Engineering; the Illinois Arts Council Agency; and the Alumnae of Northwestern University. The exhibition publication is made possible in part by the Sandra L. Riggs Publications Fund.

Should you be in the Chicago area and interested in the exhibit, you can find all the information for your visit here.

FrogHeart’s 2022 comes to an end as 2023 comes into view

I look forward to 2023 and hope it will be as stimulating as 2022 proved to be. Here’s an overview of the year that was on this blog:

Sounds of science

It seems 2022 was the year that science discovered the importance of sound and the possibilities of data sonification. Neither is new but this year seemed to signal a surge of interest or maybe I just happened to stumble onto more of the stories than usual.

This is not an exhaustive list; you can check out my ‘Music’ category for more here. I have tried to include audio files with the postings but it all depends on how accessible the researchers have made them.

Aliens on earth: machinic biology and/or biological machinery?

When I first started following stories in 2008 (?) about technology or machinery being integrated with the human body, it was mostly about assistive technologies such as neuroprosthetics. You’ll find most of this year’s material in the ‘Human Enhancement’ category or you can search the tag ‘machine/flesh’.

However, the line between biology and machine became a bit more blurry for me this year. You can see what’s happening in the titles listed below (you may recognize the xenobot story; there was an earlier version of xenobots featured here in 2021):

This was the story that shook me,

Are the aliens going to come from outer space or are we becoming the aliens?

Brains (biological and otherwise), AI, & our latest age of anxiety

As we integrate machines into our bodies, including our brains, there are new issues to consider:

  • Going blind when your neural implant company flirts with bankruptcy (long read) April 5, 2022 posting
  • US National Academies Sept. 22-23, 2022 workshop on techno, legal & ethical issues of brain-machine interfaces (BMIs) September 21, 2022 posting

I hope the US National Academies issues a report on their “Brain-Machine and Related Neural Interface Technologies: Scientific, Technical, Ethical, and Regulatory Issues – A Workshop” for 2023.

Meanwhile the race to create brainlike computers continues and I have a number of posts which can be found under the category of ‘neuromorphic engineering’ or you can use these search terms ‘brainlike computing’ and ‘memristors’.

On the artificial intelligence (AI) side of things, I finally broke down and added an ‘artificial intelligence (AI)’ category to this blog sometime between May and August 2021. Previously, I had used the ‘robots’ category as a catchall. There are other stories but these ones feature public engagement and policy (btw, the latter is a Canadian Science Policy Centre event), respectively,

  • “The “We are AI” series gives citizens a primer on AI” March 23, 2022 posting
  • “Age of AI and Big Data – Impact on Justice, Human Rights and Privacy Zoom event on September 28, 2022 at 12 – 1:30 pm EDT” September 16, 2022 posting

These stories feature problems, which aren’t new but seem to be getting more attention,

While there have been issues over AI, the arts, and creativity previously, this year they sprang into high relief. The list starts with my two-part review of the Vancouver Art Gallery’s AI show; I share most of my concerns in part two. The third post covers intellectual property issues (mostly visual arts but literary arts get a nod too). The fourth post upends the discussion,

  • “Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (1 of 2): The Objects” July 28, 2022 posting
  • “Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (2 of 2): Meditations” July 28, 2022 posting
  • “AI (artificial intelligence) and art ethics: a debate + a Botto (AI artist) October 2022 exhibition in the UK” October 24, 2022 posting
  • “Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms?” August 30, 2022 posting

Interestingly, most of the concerns seem to be coming from the visual and literary arts communities; I haven’t come across major concerns from the music community. (The curious can check out Vancouver’s Metacreation Lab for Artificial Intelligence [located on a Simon Fraser University campus]. I haven’t seen any cautionary or warning essays there; it’s run by an AI and creativity enthusiast [professor Philippe Pasquier]. The dominant but not sole focus is art, i.e., music and AI.)

There is a ‘new kid on the block’ which has been attracting a lot of attention this month. If you’re curious about the latest and greatest AI anxiety,

  • Peter Csathy’s December 21, 2022 Yahoo News article (originally published in The WRAP) makes this proclamation in the headline “Chat GPT Proves That AI Could Be a Major Threat to Hollywood Creatives – and Not Just Below the Line | PRO Insight”
  • Mouhamad Rachini’s December 15, 2022 article for the Canadian Broadcasting Corporation’s (CBC) online news offers a more generalized overview of the ‘new kid’ along with an embedded CBC Radio file which runs approximately 19 mins. 30 secs. It’s titled “ChatGPT a ‘landmark event’ for AI, but what does it mean for the future of human labour and disinformation?” The chat bot’s developer, OpenAI, has been mentioned here many times including the previously listed July 28, 2022 posting (part two of the VAG review) and the October 24, 2022 posting.

Opposite world (quantum physics in Canada)

Quantum computing made more of an impact here (my blog) than usual. It started in 2021 with the announcement of a National Quantum Strategy in the Canadian federal government budget for that year and gained some momentum in 2022:

  • “Quantum Mechanics & Gravity conference (August 15 – 19, 2022) launches Vancouver (Canada)-based Quantum Gravity Institute and more” July 26, 2022 posting Note: This turned into one of my ‘in depth’ pieces where I comment on the ‘Canadian quantum scene’ and highlight the appointment of an expert panel for the Council of Canadian Academies’ report on Quantum Technologies.
  • “Bank of Canada and Multiverse Computing model complex networks & cryptocurrencies with quantum computing” July 25, 2022 posting
  • “Canada, quantum technology, and a public relations campaign?” December 29, 2022 posting

This one was a bit of a puzzle with regard to placement in this end-of-year review; it’s quantum but it’s also about brainlike computing,

It’s getting hot in here

Fusion energy made some news this year.

There’s a Vancouver area company, General Fusion, highlighted in both postings and the October posting includes an embedded video of Canadian-born rapper Baba Brinkman’s “You Must LENR” [Low Energy Nuclear Reactions, sometimes Lattice Enabled Nanoscale Reactions, Cold Fusion, or CANR (Chemically Assisted Nuclear Reactions)].

BTW, fusion energy can generate temperatures up to 150 million degrees Celsius.

Ukraine, science, war, and unintended consequences

Here’s what you might expect,

These are the unintended consequences (from a December 26, 2022 essay on The Conversation by Rachel Kyte, Dean of the Fletcher School at Tufts University [h/t December 27, 2022 news item on phys.org]), Note: Links have been removed,

Russian President Vladimir Putin’s war on Ukraine has reverberated through Europe and spread to other countries that have long been dependent on the region for natural gas. But while oil-producing countries and gas lobbyists are arguing for more drilling, global energy investments reflect a quickening transition to cleaner energy. [emphasis mine]

Call it the Putin effect – Russia’s war is speeding up the global shift away from fossil fuels.

In December [2022?], the International Energy Agency [IEA] published two important reports that point to the future of renewable energy.

First, the IEA revised its projection of renewable energy growth upward by 30%. It now expects the world to install as much solar and wind power in the next five years as it installed in the past 50 years.

The second report showed that energy use is becoming more efficient globally, with efficiency increasing by about 2% per year. As energy analyst Kingsmill Bond at the energy research group RMI noted, the two reports together suggest that fossil fuel demand may have peaked. While some low-income countries have been eager for deals to tap their fossil fuel resources, the IEA warns that new fossil fuel production risks becoming stranded, or uneconomic, in the next 20 years.

Kyte’s essay is not all ‘sweetness and light’ but it does provide a little optimism.

Kudos, nanotechnology, culture (pop & otherwise), fun, and a farewell in 2022

This one was a surprise for me,

Sometimes I like to know where the money comes from and I was delighted to learn of the Ărramăt Project funded through the federal government’s New Frontiers in Research Fund (NFRF). Here’s more about the Ărramăt Project from the February 14, 2022 posting,

“The Ărramăt Project is about respecting the inherent dignity and interconnectedness of peoples and Mother Earth, life and livelihood, identity and expression, biodiversity and sustainability, and stewardship and well-being. Arramăt is a word from the Tamasheq language spoken by the Tuareg people of the Sahel and Sahara regions which reflects this holistic worldview.” (Mariam Wallet Aboubakrine)

Over 150 Indigenous organizations, universities, and other partners will work together to highlight the complex problems of biodiversity loss and its implications for health and well-being. The project Team will take a broad approach and be inclusive of many different worldviews and methods for research (i.e., intersectionality, interdisciplinary, transdisciplinary). Activities will occur in 70 different kinds of ecosystems that are also spiritually, culturally, and economically important to Indigenous Peoples.

The project is led by Indigenous scholars and activists …

Kudos to the federal government and all those involved in the Salmon science camps, the Ărramăt Project, and other NFRF projects.

There are many other nanotechnology posts here but this appeals to my need for something lighter at this point,

  • “Say goodbye to crunchy (ice crystal-laden) in ice cream thanks to cellulose nanocrystals (CNC)” August 22, 2022 posting

The following posts tend to be culture-related, high and/or low but always with a science/nanotechnology edge,

Sadly, it looks like 2022 is the last year that Ada Lovelace Day is to be celebrated.

… this year’s Ada Lovelace Day is the final such event due to lack of financial backing. Suw Charman-Anderson told the BBC [British Broadcasting Corporation] the reason it was now coming to an end was:

You can read more about it here:

In the rearview mirror

A few things that didn’t fit under the previous heads but stood out for me this year. Science podcasts, which were a big feature in 2021, also proliferated in 2022. I think they might have peaked and now (in 2023) we’ll see what survives.

Nanotechnology, the main subject on this blog, continues to be investigated and increasingly integrated into products. You can search the ‘nanotechnology’ category here for posts of interest, something I just tried. It surprises even me (I should know better) how broadly nanotechnology is researched and applied.

If you want a nice tidy list, Hamish Johnston in a December 29, 2022 posting on the Physics World Materials blog has this “Materials and nanotechnology: our favourite research in 2022,” Note: Links have been removed,

“Inherited nanobionics” makes its debut

The integration of nanomaterials with living organisms is a hot topic, which is why this research on “inherited nanobionics” is on our list. Ardemis Boghossian at EPFL [École polytechnique fédérale de Lausanne] in Switzerland and colleagues have shown that certain bacteria will take up single-walled carbon nanotubes (SWCNTs). What is more, when the bacteria cells split, the SWCNTs are distributed amongst the daughter cells. The team also found that bacteria containing SWCNTs produce significantly more electricity when illuminated with light than do bacteria without nanotubes. As a result, the technique could be used to grow living solar cells, which, as well as generating clean energy, also have a negative carbon footprint when it comes to manufacturing.

Getting back to Canada, I’m finding Saskatchewan featured more prominently here. They do a good job of promoting their science, especially the folks at the Canadian Light Source (CLS), Canada’s synchrotron, in Saskatoon. Canadian live science outreach events seem to be coming back (slowly). Cautious organizers (who have a few dollars to spare) are also enthusiastic about hybrid events which combine online and live outreach.

After what seems like a long pause, I’m stumbling across more international news, e.g. “Nigeria and its nanotechnology research” published December 19, 2022 and “China and nanotechnology” published September 6, 2022. I think there’s also an Iran piece here somewhere.

With that …

Making resolutions in the dark

Hopefully this year I will catch up with the Council of Canadian Academies (CCA) output and finally review a few of their 2021 reports such as Leaps and Boundaries; a report on artificial intelligence applied to science inquiry and, perhaps, Powering Discovery; a report on research funding and Natural Sciences and Engineering Research Council of Canada.

Given what appears to be a renewed campaign to have germline editing (gene editing which affects all of your descendants) approved in Canada, I might even reach back to a late 2020 CCA report, Research to Reality; somatic gene and engineered cell therapies. It’s not the same as germline editing but gene editing exists on a continuum.

For anyone who wants to see the CCA reports for themselves they can be found here (both in progress and completed).

I’m also going to be paying more attention to how public relations and special interests influence what science is covered and how it’s covered. In doing this 2022 roundup, I noticed that I featured an overview of fusion energy not long before the breakthrough. Indirect influence on this blog?

My post was precipitated by an article by Alex Pasternack in Fast Company. I’m wondering what precipitated Pasternack’s interest in fusion energy since his self-description on the Huffington Post website states this “… focus on the intersections of science, technology, media, politics, and culture. My writing about those and other topics—transportation, design, media, architecture, environment, psychology, art, music … .”

He might simply have received a press release that stimulated his imagination and/or been approached by a communications specialist or publicists with an idea. There’s a reason for why there are so many public relations/media relations jobs and agencies.

Que sera, sera (Whatever will be, will be)

I can confidently predict that 2023 has some surprises in store. I can also confidently predict that the European Union’s big research projects (1B Euros each in funding for the Graphene Flagship and Human Brain Project over a ten-year period) will sunset in 2023, ten years after they were first announced in 2013. Unless the powers that be extend the funding past 2023.

I expect the Canadian quantum community to provide more fodder for me in the form of a 2023 report on Quantum Technologies from the Council of Canadian Academies, if nothing else.

I’ve already featured these 2023 science events but just in case you missed them,

  • 2023 Preview: Bill Nye the Science Guy’s live show and Marvel Avengers S.T.A.T.I.O.N. (Scientific Training And Tactical Intelligence Operative Network) coming to Vancouver (Canada) November 24, 2022 posting
  • September 2023: Auckland, Aotearoa New Zealand set to welcome women in STEM (science, technology, engineering, and mathematics) November 15, 2022 posting

Getting back to this blog, it may not seem like a new year during the first few weeks of 2023 as I have quite the stockpile of draft posts. At this point I have drafts that are dated from June 2022 and expect to be burning through them so as not to fall further behind but will be interspersing them, occasionally, with more current posts.

Most importantly: a big thank you to everyone who drops by and reads (and sometimes even comments) on my posts!!! It’s very much appreciated and on that note: I wish you all the best for 2023.

Classical music makes protein songs easier listening

Caption: This audio is oxytocin receptor protein music using the Fantasy Impromptu guided algorithm. Credit: Chen et al. / Heliyon

A September 29, 2021 news item on ScienceDaily describes new research into music as a means of communicating science,

In recent years, scientists have created music based on the structure of proteins as a creative way to better popularize science to the general public, but the resulting songs haven’t always been pleasant to the ear. In a study appearing September 29 [2021] in the journal Heliyon, researchers use the style of existing music genres to guide the structure of protein song to make it more musical. Using the style of Frédéric Chopin’s Fantaisie-Impromptu and other classical pieces as a guide, the researchers succeeded in converting proteins into song with greater musicality.

Scientists (Peng Zhang, Postdoctoral Researcher in Computational Biology at The Rockefeller University, and Yuzong Chen, Professor of Pharmacy at National University of Singapore [NUS]) wrote a September 29, 2021 essay for The Conversation about their protein songs (Note: Links have been removed),

There are many surprising analogies between proteins, the basic building blocks of life, and musical notation. These analogies can be used not only to help advance research, but also to make the complexity of proteins accessible to the public.

We’re computational biologists who believe that hearing the sound of life at the molecular level could help inspire people to learn more about biology and the computational sciences. While creating music based on proteins isn’t new, different musical styles and composition algorithms had yet to be explored. So we led a team of high school students and other scholars to figure out how to create classical music from proteins.

The musical analogies of proteins

Proteins are structured like folded chains. These chains are composed of small units of 20 possible amino acids, each labeled by a letter of the alphabet.

A protein chain can be represented as a string of these alphabetic letters, very much like a string of music notes in alphabetical notation.

Protein chains can also fold into wavy and curved patterns with ups, downs, turns and loops. Likewise, music consists of sound waves of higher and lower pitches, with changing tempos and repeating motifs.

Protein-to-music algorithms can thus map the structural and physiochemical features of a string of amino acids onto the musical features of a string of notes.
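The letters-to-notes analogy above can be sketched in a few lines of Python. To be clear, everything specific here (the C-major scale, the alphabetical indexing, the example sequence) is invented for illustration; the study's actual algorithm maps many more amino acid properties onto many more musical features.

```python
# Toy protein-to-music mapping: each amino acid letter becomes a pitch.
# The scale choice and alphabetical indexing are illustrative only,
# not the mapping used in the Heliyon study.

# One-letter codes for the 20 standard amino acids, in alphabetical order
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

# C-major scale degrees as MIDI note numbers, starting at middle C (60)
C_MAJOR = [60, 62, 64, 65, 67, 69, 71]

def protein_to_pitches(sequence):
    """Map each residue to a scale note by its alphabetical index;
    indices past one octave wrap up into higher octaves."""
    pitches = []
    for residue in sequence.upper():
        if residue not in AMINO_ACIDS:
            continue  # skip gaps or unknown residues
        index = AMINO_ACIDS.index(residue)
        octave, degree = divmod(index, len(C_MAJOR))
        pitches.append(C_MAJOR[degree] + 12 * octave)
    return pitches

# A short (made-up) sequence fragment becomes a playable melody line
print(protein_to_pitches("MKTAYIA"))
```

Feeding the resulting MIDI note numbers to any synthesizer plays the sequence as a melody; the study's refinement was to constrain mappings like this one with the conventions of a specific musical style.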

Enhancing the musicality of protein mapping

Protein-to-music mapping can be fine-tuned by basing it on the features of a specific music style. This enhances musicality, or the melodiousness of the song, when converting amino acid properties, such as sequence patterns and variations, into analogous musical properties, like pitch, note lengths and chords.

For our study, we specifically selected 19th-century Romantic period classical piano music, which includes composers like Chopin and Schubert, as a guide because it typically spans a wide range of notes with more complex features such as chromaticism, like playing both white and black keys on a piano in order of pitch, and chords. Music from this period also tends to have lighter and more graceful and emotive melodies. Songs are usually homophonic, meaning they follow a central melody with accompaniment. These features allowed us to test out a greater range of notes in our protein-to-music mapping algorithm. In this case, we chose to analyze features of Chopin’s “Fantaisie-Impromptu” to guide our development of the program.

If you have the time, I recommend reading the essay in its entirety and listening to the embedded audio files.

The September 29, 2021 Cell Press news release on EurekAlert repeats some of the same material but is worth reading on its own merits,

In recent years, scientists have created music based on the structure of proteins as a creative way to better popularize science to the general public, but the resulting songs haven’t always been pleasant to the ear. In a study appearing September 29 [2021] in the journal Heliyon, researchers use the style of existing music genres to guide the structure of protein song to make it more musical. Using the style of Frédéric Chopin’s Fantaisie-Impromptu and other classical pieces as a guide, the researchers succeeded in converting proteins into song with greater musicality.

Creating unique melodies from proteins is achieved by using a protein-to-music algorithm. This algorithm incorporates specific elements of proteins—like the size and position of amino acids—and maps them to various musical elements to create an auditory “blueprint” of the proteins’ structure.

“Existing protein music has mostly been designed by simple mapping of certain amino acid patterns to fundamental musical features such as pitches and note lengths, but they do not map well to more complex musical features such as rhythm and harmony,” says senior author Yu Zong Chen, a professor in the Department of Pharmacy at National University of Singapore. “By focusing on a music style, we can guide more complex mappings of combinations of amino acid patterns with various musical features.”

For their experiment, researchers analyzed the pitch, length, octaves, chords, dynamics, and main theme of four pieces from the mid-1800s Romantic era of classical music. These pieces, including Fantaisie-Impromptu from Chopin and Wanderer Fantasy from Franz Schubert, were selected to represent the notable Fantasy-Impromptu genre that emerged during that time.

“We chose the specific music style of a Fantasy-Impromptu as it is characterized by freedom of expression, which we felt would complement how proteins regulate much of our bodily functions, including our moods,” says co-author Peng Zhang (@zhangpeng1202), a post-doctoral fellow at the Rockefeller University.

Likewise, several of the proteins in the study were chosen for their similarities to the key attributes of the Fantasy-Impromptu style. Most of the 18 proteins tested regulate functions including human emotion, cognition, sensation, or performance, which the authors say connect to the emotional and expressive nature of the genre.

Then, they mapped 104 structural, physicochemical, and binding amino acid properties of those proteins to the six musical features. “We screened the quantitative profile of each amino acid property against the quantized values of the different musical features to find the optimal mapped pairings. For example, we mapped the size of an amino acid to note length, so that a larger amino acid size corresponds to a shorter note length,” says Chen.

Across all the proteins tested, the researchers found that the musicality of the proteins was significantly improved. In particular, the protein receptor for oxytocin (OXTR) was judged to have one of the greatest increases in musicality when using the genre-guided algorithm, compared to an earlier version of the protein-to-music algorithm.

“The oxytocin receptor protein generated our favorite song,” says Zhang. “This protein sequence produced an identifiable main theme that repeats in rhythm throughout the piece, as well as some interesting motifs and patterns that recur independent of our algorithm. There were also some pleasant harmonic progressions; for example, many of the seventh chords naturally resolve.”

The authors do note, however, that while the guided algorithm increased the overall musicality of the protein songs, there is still much progress to be made before it resembles true human music.

“We believe a next step is to explore more music styles and more complex combinations of amino acid properties for enhanced musicality and novel music pieces. Another next step, a very important step, is to apply artificial intelligence to jointly learn complex amino acid properties and their combinations with respect to the features of various music styles for creating protein music of enhanced musicality,” says Chen.

###

Research supported by the National Key R&D Program of China, the National Natural Science Foundation of China, and Singapore Academic Funds.

Here’s a link to and a citation for the paper,

Protein Music of Enhanced Musicality by Music Style Guided Exploration of Diverse Amino Acid Properties by Nicole WanNi Tay, Fanxi Liu, Chaoxin Wang, Hui Zhang, Peng Zhang, Yu Zong Chen. Heliyon, 2021. DOI: https://doi.org/10.1016/j.heliyon.2021.e07933 Published: September 29, 2021

This paper appears to be open access.
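Out of curiosity, here's roughly what the size-to-note-length idea described in the news release might look like in Python. To be clear, the side-chain volumes and note values below are placeholders I picked for illustration, not the parameters the Heliyon paper actually uses:

```python
# Illustrative sketch of one protein-to-music mapping: a larger amino
# acid gets a shorter note, as described in the news release. The volume
# figures and note buckets are placeholders, not the paper's values.

# Approximate side-chain volumes (cubic angstroms) for a few amino acids
AA_VOLUME = {"G": 60.1, "A": 88.6, "S": 89.0, "V": 140.0, "L": 166.7,
             "F": 189.9, "W": 227.8}

# Note lengths in beats, from longest to shortest
NOTE_LENGTHS = [1.0, 0.5, 0.25, 0.125]

def note_length(residue: str) -> float:
    """Map an amino acid to a note length: bigger residue, shorter note."""
    volumes = sorted(AA_VOLUME.values())
    # Bucket this residue's volume rank into as many bins as note lengths
    rank = volumes.index(AA_VOLUME[residue])
    bucket = min(rank * len(NOTE_LENGTHS) // len(volumes),
                 len(NOTE_LENGTHS) - 1)
    return NOTE_LENGTHS[bucket]

sequence = "GAVLFW"
rhythm = [note_length(r) for r in sequence]  # tiny glycine rings longest
```

The paper's genre-guided algorithm layers many more property-to-feature pairings (pitch, chords, dynamics, a main theme) on top of this kind of mapping.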

Tree music

Hidden Life Radio livestreams music generated from trees (their biodata, that is). Kristin Toussaint in her August 3, 2021 article for Fast Company describes the ‘radio station’, Note: Links have been removed,

Outside of a library in Cambridge, Massachusetts, an over-80-year-old copper beech tree is making music.

As the tree photosynthesizes and absorbs and evaporates water, a solar-powered sensor attached to a leaf measures the micro voltage of all that invisible activity. Sound designer and musician Skooby Laposky assigned a key and note range to those changes in this electric activity, turning the tree’s everyday biological processes into an ethereal song.

That music is available on Hidden Life Radio, an art project by Laposky, with assistance from the Cambridge Department of Public Works Urban Forestry, and funded in part by a grant from the Cambridge Arts Council. Hidden Life Radio also features the musical sounds of two other Cambridge trees: a honey locust and a red oak, both located outside of other Cambridge library branches. The sensors on these trees are solar-powered biodata sonification kits, a technology that has allowed people to turn all sorts of plant activity into music.

… Laposky has created a musical voice for these disappearing trees, and he hopes people tune into Hidden Life Radio and spend time listening to them over time. The music they produce occurs in real time, affected by the weather and whatever the tree is currently doing. Some days they might be silent, especially when there’s been several days without rain, and they’re dehydrated; Laposky is working on adding an archive that includes weather information, so people can go back and hear what the trees sound like on different days, under different conditions. The radio will play 24 hours a day until November, when the leaves will drop—a “natural cycle for the project to end,” Laposky says, “when there aren’t any leaves to connect to anymore.”

The 2021 season is over but you can find an archive of Hidden Life Radio livestreams here. Or, if you happen to be reading this page sometime after January 2022, you can try your luck and click here at Hidden Life Radio livestreams but remember, even if the project has started up again, the tree may not be making music when you check in. So, if you don’t hear anything the first time, try again.

Want to create your own biodata sonification project?

Toussaint’s article sent me on a search for more and I found a website where you can get biodata sonification kits. Sam Cusumano’s electricity for progress website offers lessons, as well as kits and more.
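Before reaching for a kit, you can get a feel for the basic idea — read a slowly varying voltage from a plant sensor, then squeeze it into a fixed key and note range — with a few lines of Python. The voltage range and scale below are my own assumptions for the sake of illustration, not Laposky's settings:

```python
# Minimal biodata sonification sketch: map simulated micro-voltage
# readings onto MIDI notes in a pentatonic scale. The voltage range and
# scale are illustrative assumptions, not Hidden Life Radio's.

C_MAJOR_PENTATONIC = [60, 62, 64, 67, 69, 72, 74, 76]  # MIDI note numbers

def voltage_to_note(microvolts: float, lo: float = 0.0,
                    hi: float = 500.0) -> int:
    """Scale a reading in [lo, hi] onto the note range; clamp outliers."""
    clamped = max(lo, min(hi, microvolts))
    frac = (clamped - lo) / (hi - lo)
    index = min(int(frac * len(C_MAJOR_PENTATONIC)),
                len(C_MAJOR_PENTATONIC) - 1)
    return C_MAJOR_PENTATONIC[index]

readings = [12.0, 130.5, 260.0, 410.2, 600.0]  # simulated sensor values
notes = [voltage_to_note(v) for v in readings]
```

A real kit would stream these notes to a synthesizer in real time; the restriction to a single scale is what gives the output its musical, rather than noisy, character.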

Sophie Haigney’s February 21, 2020 article for NPR ([US] National Public Radio) highlights other plant music and more ways to tune in to and create it. (h/t Kristin Toussaint)

SFU’s Philippe Pasquier speaks at “The rise of Creative AI and its ethics” online event on Tuesday, January 11, 2022 at 6 am PST

Simon Fraser University’s (SFU) Metacreation Lab for Creative AI (artificial intelligence) in Vancouver, Canada, has just sent me (via email) a January 2022 newsletter, which you can find here. There are two items I found of special interest.

Max Planck Centre for Humans and Machines Seminars

From the January 2022 newsletter,

Max Planck Institute Seminar – The rise of Creative AI & its ethics
January 11, 2022 at 15:00 pm [sic] CET | 6:00 am PST

Next Monday [sic], Philippe Pasquier, director of the Metacreation Lab, will
be providing a seminar titled “The rise of Creative AI & its ethics”
[Tuesday, January 11, 2022] at the Max Planck Institute’s Centre for Humans and
Machine [sic].

The Centre for Humans and Machines invites interested attendees to
our public seminars, which feature scientists from our institute and
experts from all over the world. Their seminars usually take 1 hour and
provide an opportunity to meet the speaker afterwards.

The seminar is openly accessible to the public via Webex Access, and
will be a great opportunity to connect with colleagues and friends of
the Lab on European and East Coast time. For more information and the
link, head to the Centre for Humans and Machines’ Seminars page linked
below.

Max Planck Institute – Upcoming Events

The Centre’s seminar description offers an abstract for the talk and a profile of Philippe Pasquier,

Creative AI is the subfield of artificial intelligence concerned with the partial or complete automation of creative tasks. In turn, creative tasks are those for which the notion of optimality is ill-defined. Unlike car driving, chess moves, jeopardy answers or literal translations, creative tasks are more subjective in nature. Creative AI approaches have been proposed and evaluated in virtually every creative domain: design, visual art, music, poetry, cooking, … These algorithms most often perform at human-competitive or superhuman levels for their precise task. Two main uses of these algorithms have emerged that have implications for workflows reminiscent of the industrial revolution:

– Augmentation (a.k.a, computer-assisted creativity or co-creativity): a human operator interacts with the algorithm, often in the context of already existing creative software.

– Automation (computational creativity): the creative task is performed entirely by the algorithms without human intervention in the generation process.

Both usages will have deep implications for education and work in creative fields. Away from the fear of strong – sentient – AI, taking over the world: What are the implications of these ongoing developments for students, educators and professionals? How will Creative AI transform the way we create, as well as what we create?

Philippe Pasquier is a professor at Simon Fraser University’s School for Interactive Arts and Technology, where he has directed the Metacreation Lab for Creative AI since 2008. Philippe leads a research-creation program centred around generative systems for creative tasks. As such, he is a scientist specialized in artificial intelligence, a multidisciplinary media artist, an educator, and a community builder. His contributions span theoretical research on generative systems, computational creativity, multi-agent systems, machine learning, affective computing, and evaluation methodologies. This work is applied in the creative software industry as well as through artistic practice in computer music, interactive and generative art.

Interpreting soundscapes

Folks at the Metacreation Lab have made available an interactive search engine for sounds, from the January 2022 newsletter,

Audio Metaphor is an interactive search engine that transforms users’ queries into soundscapes interpreting them. Using state-of-the-art algorithms for sound retrieval, segmentation, and background and foreground classification, AuMe offers a way to explore the vast open source library of sounds available on the freesound.org online community through natural language and its semantic, symbolic, and metaphorical expressions.

We’re excited to see Audio Metaphor included among many other innovative projects on Freesound Labs, a directory of projects, hacks, apps, research and other initiatives that use content from Freesound or use the Freesound API. Take a minute to check out the variety of projects applying creative coding, machine learning, and many other techniques towards the exploration of sound and music creation, generative music, and soundscape composition in diverse forms and interfaces.

Explore AuMe and other FreeSound Labs projects    

The Audio Metaphor (AuMe) webpage on the Metacreation Lab website has a few more details about the search engine,

Audio Metaphor (AuMe) is a research project aimed at designing new methodologies and tools for sound design and composition practices in film, games, and sound art. Through this project, we have identified the processes involved in working with audio recordings in creative environments, addressing these in our research by implementing computational systems that can assist human operations.

We have successfully developed Audio Metaphor for the retrieval of audio file recommendations from natural language texts, and even used phrases generated automatically from Twitter to sonify the current state of Web 2.0. Another significant achievement of the project has been in the segmentation and classification of environmental audio with composition-specific categories, which were then applied in a generative system approach. This allows users to generate sound design simply by entering textual prompts.

As we direct Audio Metaphor further toward perception and cognition, we will continue to contribute to the music information retrieval field through environmental audio classification and segmentation. The project will continue to be instrumental in the design and implementation of new tools for sound designers and artists.

See more information on the website audiometaphor.ca.

As for Freesound Labs, you can find them here.
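To give a flavour of the retrieval idea described above, here's a deliberately naive sketch: match a query's words against tags on a small local sound library and rank by overlap. The library, filenames, and tags are invented for illustration; the real Audio Metaphor queries freesound.org and layers segmentation and background/foreground classification on top of far more sophisticated semantic retrieval:

```python
# Toy text-to-soundscape retrieval: rank sounds by how many tags they
# share with the query's words. A caricature of the retrieval step only;
# the actual AuMe system works against the freesound.org API.

SOUND_LIBRARY = {
    "rain_on_roof.wav": {"rain", "water", "roof", "storm"},
    "city_traffic.wav": {"city", "traffic", "cars", "urban"},
    "forest_birds.wav": {"forest", "birds", "nature", "morning"},
}

def recommend(query: str, top_n: int = 2) -> list[str]:
    """Return up to top_n sounds whose tags overlap the query's words."""
    words = set(query.lower().split())
    scored = sorted(SOUND_LIBRARY.items(),
                    key=lambda item: len(item[1] & words),
                    reverse=True)
    return [name for name, tags in scored[:top_n] if tags & words]

picks = recommend("rain storm in the forest")
```

Even this toy version shows why natural-language queries are attractive: a phrase, not a filename, becomes the unit of search, and the system composes a soundscape from whatever it retrieves.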

News from the Canadian Light Source (CLS), Canadian Science Policy Conference (CSPC) 2020, the International Symposium on Electronic Arts (ISEA) 2020, and HotPopRobot

I have some news about conserving art; early bird registration deadlines for two events, and, finally, an announcement about contest winners.

Canadian Light Source (CLS) and modern art

Rita Letendre. Victoire [Victory], 1961. Oil on canvas, Overall: 202.6 × 268 cm. Art Gallery of Ontario. Gift of Jessie and Percy Waxer, 1974, donated by the Ontario Heritage Foundation, 1988. © Rita Letendre L74.8. Photography by Ian Lefebvre

This is one of three pieces by Rita Letendre that underwent chemical mapping according to an August 5, 2020 CLS news release by Victoria Martinez (also received via email),

Research undertaken at the Canadian Light Source (CLS) at the University of Saskatchewan was key to understanding how to conserve experimental oil paintings by Rita Letendre, one of Canada’s most respected living abstract artists.

The work done at the CLS was part of a collaborative research project between the Art Gallery of Ontario (AGO) and the Canadian Conservation Institute (CCI) that came out of a recent retrospective Rita Letendre: Fire & Light at the AGO. During close examination, Meaghan Monaghan, paintings conservator from the Michael and Sonja Koerner Centre for Conservation, observed that several of Letendre’s oil paintings from the fifties and sixties had suffered significant degradation, most prominently, uneven gloss and patchiness, snowy crystalline structures coating the surface known as efflorescence, and cracking and lifting of the paint in several areas.

Kate Helwig, Senior Conservation Scientist at the Canadian Conservation Institute, says these problems are typical of mid-20th century oil paintings. “We focused on three of Rita Letendre’s paintings in the AGO collection, which made for a really nice case study of her work and also fits into the larger question of why oil paintings from that period tend to have degradation issues.”

Growing evidence indicates that paintings from this period have experienced these problems due to the combination of the experimental techniques many artists employed and the additives paint manufacturers had begun to use.

In order to determine more precisely how these factors affected Letendre’s paintings, the research team members applied a variety of analytical techniques, using microscopic samples taken from key points in the works.

“The work done at the CLS was particularly important because it allowed us to map the distribution of materials throughout a paint layer such as an impasto stroke,” Helwig said. The team used Mid-IR chemical mapping at the facility, which provides a map of different molecules in a small sample.

For example, chemical mapping at the CLS allowed the team to understand the distribution of the paint additive aluminum stearate throughout the paint layers of the painting Méduse. This painting showed areas of soft, incompletely dried paint, likely due to the high concentration and incomplete mixing of this additive. 

The painting Victoire had a crumbling base paint layer in some areas and cracking and efflorescence at the surface in others. Infrared mapping at the CLS allowed the team to determine that excess free fatty acids in the paint were linked to both problems; where the fatty acids were found at the base they formed zinc “soaps,” which led to crumbling and cracking, and where they had moved to the surface they had crystallized, causing the snowflake-like efflorescence.

AGO curators and conservators interviewed Letendre to determine what was important to her in preserving and conserving her works, and she highlighted how important an even gloss across the surface was to her artworks, and the philosophical importance of the colour black in her paintings. These priorities guided conservation efforts, while the insights gained through scientific research will help maintain the works in the long term.

In order to restore the black paint to its intended even finish for display, conservator Meaghan Monaghan removed the white crystallization from the surface of Victoire, but it is possible that it could begin to recur. Understanding the processes that lead to this degradation will be an important tool to keep Letendre’s works in good condition.

“The world of modern paint research is complicated; each painting is unique, which is why it’s important to combine theoretical work on model paint systems with this kind of case study on actual works of art” said Helwig. The team hopes to collaborate on studying a larger cross section of Letendre’s paintings in oil and acrylic in the future to add to the body of knowledge.

Here’s a link to and a citation for the paper,

Rita Letendre’s Oil Paintings from the 1960s: The Effect of Artist’s Materials on Degradation Phenomena by Kate Helwig, Meaghan Monaghan, Jennifer Poulin, Eric J. Henderson & Maeve Moriarty. Studies in Conservation (2020): 1-15 DOI: https://doi.org/10.1080/00393630.2020.1773055 Published online: 06 Jun 2020

This paper is behind a paywall.

Canadian Science Policy Conference (CSPC) 2020

The latest news from the CSPC 2020 (November 16 – 20 with preconference events from Nov. 1 -14) organizers is that registration is open and early birds have a deadline of September 27, 2020 (from an August 6, 2020 CSPC 2020 announcement received via email),

It’s time! Registration for the 12th Canadian Science Policy Conference (CSPC 2020) is open now. Early Bird registration is valid until Sept. 27th [2020].

CSPC 2020 is coming to your offices and homes:

Register for full access to 3 weeks of programming of the biggest science and innovation policy forum of 2020 under the overarching theme: New Decade, New Realities: Hindsight, Insight, Foresight.

2500+ Participants

300+ Speakers from five continents

65+ Panel sessions, 15 pre conference sessions and symposiums

50+ On demand videos and interviews with the most prominent figures of science and innovation policy 

20+ Partner-hosted functions

15+ Networking sessions

15 Open mic sessions to discuss specific topics

The virtual conference features an exclusive array of offerings:

3D Lounge and Exhibit area

Advance access to the Science Policy Magazine, featuring insightful reflections from the frontier of science and policy innovation

Many more

Don’t miss this unique opportunity to engage in the most important discussions of science and innovation policy with insights from around the globe, just from your office, home desk, or your mobile phone.

Benefit from significantly reduced registration fees for an online conference with an option for discount for multiple ticket purchases

Register now to benefit from the Early Bird rate!

The preliminary programme can be found here. This year there will be some discussion of a Canadian synthetic biology roadmap, presentations on various Indigenous concerns (mostly health), a climate challenge presentation focusing on Mexico and social vulnerability and another on parallels between climate challenges and COVID-19. There are many presentations focused on COVID-19 and/or health.

There doesn’t seem to be much focus on cyber security and, given that we just lost two ice caps (see Brandon Spektor’s August 1, 2020 article [Two Canadian ice caps have completely vanished from the Arctic, NASA imagery shows] on the Live Science website), it’s surprising that there are no presentations concerning the Arctic.

International Symposium on Electronic Arts (ISEA) 2020

According to my latest information, the early bird rate for ISEA 2020 (Oct. 13 -18) ends on August 13, 2020. (My June 22, 2020 posting describes their plans for the online event.)

You can find registration information here.

Margaux Davoine has written up a teaser for the 2020 edition of ISEA in the form of an August 6, 2020 interview with Yan Breuleux. I’ve excerpted one bit,

Finally, thinking about this year’s theme [Why Sentience?], there might be something a bit ironic about exploring the notion of sentience (historically reserved for biological life, and quite a small subsection of it) through digital media and electronic arts. There’s been much work done in the past 25 years to loosen the boundaries between such distinctions: how do you imagine ISEA2020 helping in that?

The similarities shared between humans, animals, and machines are fundamental in cybernetic sciences. According to the founder of cybernetics Norbert Wiener, the main tenets of the information paradigm – the notion of feedback – can be applied to humans, animals as well as the material world. Famously, the AA predictor (as analysed by Peter Galison in 1994) can be read as a first attempt at human-machine fusion (otherwise known as a cyborg).

The infamous Turing test also tends to blur the lines between humans and machines, between language and informational systems. Second-order cybernetics are often associated with biologists Francisco Varela and Humberto Maturana. The very notion of autopoiesis (a system capable of maintaining a certain level of stability in an unstable environment) relates back to the concept of homeostasis formulated by Willam Ross [William Ross Ashby] in 1952. Moreover, the concept of “ecosystems” emanates directly from the field of second-order cybernetics, providing researchers with a clearer picture of the interdependencies between living and non-living organisms. In light of these theories, the absence of boundaries between animals, humans, and machines constitutes the foundation of the technosciences paradigm. New media, technological arts, virtual arts, etc., partake in the dialogue between humans and machines, and thus contribute to the prolongation of this paradigm. Frank Popper nearly called his book “Techno Art” instead of “Virtual Art”, in reference to technosciences (his editor suggested the name change). For artists in the technological arts community, Jakob von Uexkull’s notion of “human-animal milieu” is an essential reference. Also present in Simondon’s reflections on human environments (both natural and artificial), the notion of “milieu” is quite important in the discourses about art and the environment. Concordia University’s artistic community chose the concept of “milieu” as the rallying point of its research laboratories.

ISEA2020’s theme resonates particularly well with the recent eruption of processing and artificial intelligence technologies. For me, Sentience is a purely human and animal idea: machines can only simulate our ways of thinking and feeling. Partly in an effort to explore the illusion of sentience in computers, Louis-Philippe Rondeau, Benoît Melançon and I have established the Mimesis laboratory at NAD University. Processing and AI technologies are especially useful in the creation of “digital doubles”, “Vactors”, real-time avatar generation, Deep Fakes and new forms of personalised interactions.

I adhere to the epistemological position that the living world is immeasurable. Through their ability to simulate, machines can merely reduce complex logics to a point of understandability. The utopian notion of empathetic computers is an idea mostly explored by popular science-fiction movies. Nonetheless, research into computer sentience allows us to devise possible applications, explore notions of embodiment and agency, and thereby develop new forms of interaction. Beyond my own point of view, the idea that machines can somehow feel emotions gives artists and researchers the opportunity to experiment with certain findings from the fields of the cognitive sciences, computer sciences and interactive design. For example, in 2002 I was particularly marked by an immersive installation at Universal Exhibition in Neuchatel, Switzerland titled Ada: Intelligence Space. The installation comprised an artificial environment controlled by a computer, which interacted with the audience on the basis of artificial emotion. The system encouraged visitors to participate by intelligently analysing their movements and sounds. Another example, Louis-Philippe Demers’ Blind Robot (2012),  demonstrates how artists can be both critical of, and amazed by, these new forms of knowledge. Additionally, the 2016 BIAN (Biennale internationale d’art numérique), organized by ELEKTRA (Alain Thibault) explored the various ways these concepts were appropriated in installation and interactive art. The way I see it, current works of digital art operate as boundary objects. The varied usages and interpretations of a particular work of art allow it to be analyzed from nearly every angle or field of study. Thus, philosophers can ask themselves: how does a computer come to understand what being human really is?

I have yet to attend conferences or exchange with researchers on that subject. Given the sheer number of presentation proposals sent to ISEA2020, though, I have no doubt that the symposium will be the ideal context to reflect on the concept of Sentience and the many issues it raises.

For the last bit of news.

HotPopRobot, one of six global winners of 2020 NASA SpaceApps COVID-19 challenge

I last wrote about HotPopRobot’s (Artash and Arushi with a little support from their parents) response to the 2020 NASA (US National Aeronautics and Space Administration) SpaceApps challenge in my July 1, 2020 post, Toronto COVID-19 Lockdown Musical: a data sonification project from HotPopRobot. (You’ll find a video of the project embedded in the post.)

Here’s more news from HotPopRobot’s August 4, 2020 posting (Note: Links have been removed),

Artash (14 years) and Arushi (10 years). Toronto.

We are excited to become the global winners of the 2020 NASA SpaceApps COVID-19 Challenge from among 2,000 teams from 150 countries. The six Global Winners will be invited to visit a NASA Rocket Launch site to view a spacecraft launch along with the SpaceApps Organizing team once travel is deemed safe. They will also receive an invitation to present their projects to NASA, ESA [European Space Agency], JAXA [Japan Aerospace Exploration Agency], CNES [Centre National D’Etudes Spatiales; France], and CSA [Canadian Space Agency] personnel. https://covid19.spaceappschallenge.org/awards

15,000 participants joined together to submit over 1,400 projects for the COVID-19 Global Challenge that was held on 30-31 May 2020. 40 teams made it to the Global Finalists. Amongst them, 6 teams became the global winners!

The 2020 SpaceApps was an international collaboration between NASA, Canadian Space Agency, ESA, JAXA, CSA,[sic] and CNES focused on solving global challenges. During a period of 48 hours, participants from around the world were required to create virtual teams and solve any of the 12 challenges related to the COVID-19 pandemic posted on the SpaceApps website. More details about the 2020 SpaceApps COVID-19 Challenge:  https://sa-2019.s3.amazonaws.com/media/documents/Space_Apps_FAQ_COVID_.pdf

We have been participating in NASA Space Challenge for the last seven years since 2014. We were only 8 years and 5 years respectively when we participated in our very first SpaceApps 2014.

We have grown up learning more about space, tackling global challenges, making hardware and software projects, participating in meetings, networking with mentors and teams across the globe, and giving presentations through the annual NASA Space Apps Challenges. This is one challenge we look forward to every year.

It has been a fun and exciting journey meeting so many people and astronauts and visiting several fascinating places on the way! We hope more kids, youths, and families are inspired by our space journey. Space is for all and is yours to discover!

If you have the time, I recommend reading HotPopRobot’s August 4, 2020 posting in its entirety.