Tag Archives: SXSW

Chandra Sonifications (extraplanetary music and data sonification)

I’m not sure why the astronomy community is so taken with creating music out of data, but it seems to be the most active of the science communities in the field. This October 15, 2023 article by Elizabeth Hlavinka for Salon.com provides a little context before describing some of the latest work (Note: Links have been removed),

Christine Malec, who has been blind since birth, has always been a big astronomy buff, fascinated by major questions about the universe like what happens when a limit reaches infinity and whether things like space travel could one day become a reality. However, throughout her childhood, most astronomical information was only accessible to her via space documentaries or science fiction books.

Nearly a decade ago, Malec discovered a completely new way to experience astronomy when she saw astronomer and musician Matt Russo, Ph.D., give a presentation at a local planetarium in Toronto. Using a process called astronomical sonification, Russo had translated information collected from the TRAPPIST-1 solar system, which has seven planets locked in an orbital resonance, into something people who are blind or have low vision could experience: music. 

Russo’s song sent a wave of goosebumps through Malec’s body. Something she had previously understood intellectually but never had turned into a sensory experience was suddenly, profoundly felt.

“It was unforgettable,” Malec told Salon in a phone interview. “I compare it to what it might be like for a sighted person to look up at the night sky and get a sensory intuition of the size and nature of the cosmos. As a blind person, that’s an experience I hadn’t had.”

Through astronomical sonification, scientists map complex astronomical structures like black holes or exploded stars through the similarly expansive and multidimensional world of sound. Translating data from outer space into music not only expands access to astronomy for people who are blind or have low vision, but it also has the potential to help all scientists better understand the universe by leading to novel discoveries. Like images from the James Webb telescope that contextualize our tiny place in the universe, astronomical sonification similarly holds the power to connect listeners to the cosmos.

“It really does bring a connection that you don’t necessarily get when you’re just looking at a cluster of galaxies that’s billions of light years away from you that stretches across many hundreds of millions of light years,” said Kimberly Kowal Arcand, Ph.D., a data visualizer for NASA’s Chandra X-ray Observatory. “Having sound as a way of experiencing that type of phenomenon, that type of object, whatever it is, is a very valid way of experiencing the world around you and of making meaning.”

Malec serves as a consultant for Chandra Sonifications, which translates complex data from astronomical objects into sound. One of their most popular productions, which has been listened to millions of times, sonified a black hole in the Perseus galaxy cluster about 240 million light-years away. When presenting this sonification at this year’s [2023] SXSW festival in March, Russo, who works with Chandra through an organization he founded called SYSTEM Sounds, said this eerie sound used to depict the black hole had been likened to “millions of damned souls being sucked into the pits of hell.”
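The general mapping Hlavinka describes is easy to sketch in code. Here’s a minimal, hypothetical sonification in Python, purely an illustration of the technique (sweep across an image left to right, turning bright pixels into pitch and overall brightness into volume) and not the Chandra/SYSTEM Sounds pipeline,

```python
# A minimal, hypothetical sonification: sweep across a 2D intensity
# array left to right, mapping the brightest pixel in each column to
# pitch (top of image = higher pitch) and the column's total
# brightness to volume. Illustrative only.
import numpy as np
from scipy.io import wavfile

RATE = 44100                                 # audio sample rate (Hz)
TONE_LEN = 0.05                              # seconds of audio per column

def sonify(image, fmin=220.0, fmax=1760.0):
    t = np.linspace(0, TONE_LEN, int(RATE * TONE_LEN), endpoint=False)
    tones = []
    for col in image.T:                      # sweep left to right
        row = np.argmax(col)                 # brightest pixel in the column
        frac = 1 - row / max(len(col) - 1, 1)   # near the top -> higher pitch
        freq = fmin * (fmax / fmin) ** frac  # log-spaced frequency mapping
        amp = col.mean()                     # overall brightness -> loudness
        tones.append(amp * np.sin(2 * np.pi * freq * t))
    audio = np.concatenate(tones)
    audio /= max(np.abs(audio).max(), 1e-9)  # normalize to [-1, 1]
    return (audio * 32767).astype(np.int16)

# Toy "telescope image": faint noise plus one bright diagonal streak.
rng = np.random.default_rng(0)
img = 0.1 * rng.random((64, 256))
img[np.arange(64), np.arange(64) * 4] = 1.0  # the streak
wavfile.write("sonification.wav", RATE, sonify(img))
```

Real sonifications use far more elaborate mappings (different instruments per wavelength, radial sweeps, and so on), but the principle is the same: data coordinates become musical parameters.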

Here’s some of what the audience at the 2023 SXSW festival heard,

If you have the time, do read Hlavinka’s October 15, 2023 article as she tells a good story with many interesting tidbits such as this (Note: Links have been removed),

William “Bill” Kurth, Ph.D., a space physicist at the University of Iowa, said the origins of astronomical sonification can be traced back to at least the 1970s when the Voyager-1 spacecraft recorded electromagnetic wave signals in space that were sent back down to his team on Earth, where they were processed as audio recordings.

Back in 1979, the team plotted the recordings on a frequency-time spectrogram similar to a voiceprint you see on apps that chart sounds like birds chirping, Kurth explained. The sounds emitted a “whistling” effect created by waves following the magnetic fields of the planet rather than going in straight lines. The data seemed to confirm what they had suspected: lightning was shocking through Jupiter’s atmosphere.

“At that time, the existence of lightning anywhere other than in Earth’s atmosphere was unknown,” Kurth told Salon in a phone interview. “This became the first time that we realized that lightning might exist on another planet.”
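A frequency-time spectrogram of the kind Kurth describes is straightforward to produce with standard tools. Here’s a small illustrative Python example, with a synthetic descending chirp standing in for a whistler; this is not the Voyager processing chain,

```python
# A minimal frequency-time spectrogram of the kind Kurth describes.
# A descending chirp stands in for a "whistler": higher frequencies
# travel faster along the magnetic field line, so they arrive first.
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

rate = 8000                                   # samples per second
t = np.arange(0, 2.0, 1 / rate)
wave = signal.chirp(t, f0=3000, f1=300, t1=2.0, method="logarithmic")

f, tt, Sxx = signal.spectrogram(wave, fs=rate, nperseg=256)
plt.pcolormesh(tt, f, 10 * np.log10(Sxx + 1e-12), shading="gouraud")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("Whistler-like chirp on a frequency-time spectrogram")
plt.savefig("spectrogram.png")
```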

And this (Note: Links have been removed),

Beyond astronomy, sonification can be applied to any of the sciences, and health researchers are currently looking at sonifying DNA strands to better understand how proteins fold in multiple dimensions. Chandra is also working on constructing tactile 3-D models of astronomical phenomena, which also expands access for people who are blind or have low vision — those who have historically only been able to experience these sciences through words, Malec said.

Chandra and other sonification projects

I found a brief and somewhat puzzling description of the Chandra sonification project on one of the US National Aeronautics and Space Administration (NASA) websites. From a September 22, 2021 posting on the Marshall Science Research and Projects Division blog (Note: Links have been removed),

On 9/16/21, a Chandra sonification image entitled “Jingle, Pluck, and Hum: Sounds from Space” was released to the public.  Since 2020, Chandra’s “sonification” project has transformed astronomical data from some of the world’s most powerful telescopes into sound.  Three new objects — a star-forming region, a supernova remnant, and a black hole at the center of a galaxy — are being released.  Each sonification has its own technique to translate the astronomical data into sound.

For more information visit: Data Sonifications: Westerlund 2 (Multiwavelength), Tycho’s Supernova Remnant, and M87. https://www.nasa.gov/missions_pages/chandra/main/index.html.

A Chandra article entitled “Data Sonification: Sounds from the Milky Way” was also released in the NASA STEM Newsletter. This newsletter was sent to 54,951 subscribers and shared with the Office of STEM Engagement’s social media tools with approximately 1.7M followers. For more information visit: https://myemail.constantcontact.com/NASA-EXPRESS—-Your-STEM-Connection-for-Sept–9–2021.html?soid=1131598650811&aid=iXfzAJk6x_s

I’m a little puzzled by the reference to a Chandra sonification image, but I’m assuming that they also produce data visualizations. Anyway, as Hlavinka notes, Chandra is a NASA X-ray Observatory and they have a number of different projects/initiatives.

Getting back to data sonification, Chandra offers various audio files on its A Universe of Sound webpage,

Here’s a sampling of three data sonification posts (there are more),

Enjoy!

Quantum computing and more at SXSW (South by Southwest) 2018

It’s that time of year again. The entertainment conference South by Southwest (SXSW) is being held from March 9-18, 2018. The science portion of the conference can be found in the Intelligent Future sessions; from the description,

AI and new technologies embody the realm of possibilities where intelligence empowers and enables technology while sparking legitimate concerns about its uses. Highlighted Intelligent Future sessions include New Mobility and the Future of Our Cities, Mental Work: Moving Beyond Our Carbon Based Minds, Can We Create Consciousness in a Machine?, and more.

Intelligent Future Track sessions are held March 9-15 at the Fairmont.

Last year I focused on the conference sessions on robots, Hiroshi Ishiguro’s work, and artificial intelligence in a March 27, 2017 posting. This year I’m featuring one of the conference’s quantum computing sessions, from a March 9, 2018 University of Texas at Austin news release (also on EurekAlert),

Imagine a new kind of computer that can quickly solve problems that would stump even the world’s most powerful supercomputers. Quantum computers are fundamentally different. They can store information not only as ones and zeros, but in all the shades of gray in between. Several companies and government agencies are investing billions of dollars in the field of quantum information. But what will quantum computers be used for?
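The “shades of gray” have a precise meaning: a qubit’s state is a pair of complex amplitudes whose squared magnitudes give the probabilities of measuring 0 or 1. Here’s a minimal numpy sketch of that idea,

```python
# What "shades of gray" means formally: a qubit is a pair of complex
# amplitudes, and measurement turns them into outcome probabilities.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)      # the classical-like state |0>
ket1 = np.array([0, 1], dtype=complex)      # the classical-like state |1>

psi = (ket0 + ket1) / np.sqrt(2)            # an equal superposition
probs = np.abs(psi) ** 2                    # Born rule: |amplitude|^2
print(probs)                                # [0.5 0.5]

# Simulate 1,000 measurements; the amplitudes set the statistics.
rng = np.random.default_rng(1)
outcomes = rng.choice([0, 1], size=1000, p=probs)
print(outcomes.mean())                      # close to 0.5
```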

South by Southwest 2018 hosts a panel on March 10th [2018] called Quantum Computing: Science Fiction to Science Fact. Experts on quantum computing make up the panel, including Jerry Chow of IBM; Bo Ewald of D-Wave Systems; Andrew Fursman of 1QBit; and Antia Lamas-Linares of the Texas Advanced Computing Center at UT Austin.

Antia Lamas-Linares is a Research Associate in the High Performance Computing group at TACC. Her background is as an experimentalist with quantum computing systems, including work done with them at the Centre for Quantum Technologies in Singapore. She joins podcast host Jorge Salazar to talk about her South by Southwest panel and about some of her latest research on quantum information.

Lamas-Linares co-authored a study (doi: 10.1117/12.2290561) in the Proceedings of the SPIE, The International Society for Optical Engineering, which was published in February 2018. The study, “Secure Quantum Clock Synchronization,” proposed a protocol to verify and secure time synchronization of distant atomic clocks, such as those used for GPS signals in cell phone towers and other places. “It’s important work,” explained Lamas-Linares, “because people are worried about malicious parties messing with the channels of GPS. What James Troupe (Applied Research Laboratories, UT Austin) and I looked at was whether we can use techniques from quantum cryptography and quantum information to make something that is inherently unspoofable.”

Antia Lamas-Linares: The most important thing is that quantum technologies is a really exciting field. And it’s exciting in a fundamental sense. We don’t quite know what we’re going to get out of it. We know a few things, and that’s good enough to drive research. But the things we don’t know are much broader than the things we know, and it’s going to be really interesting. Keep your eyes open for this.

Quantum Computing: Science Fiction to Science Fact, March 10, 2018 | 11:00AM – 12:00PM, Fairmont Manchester EFG, SXSW 2018, Austin, TX.

If you look up the session, you will find,

Quantum Computing: Science Fiction to Science Fact

Speakers

Bo Ewald

D-Wave Systems

Antia Lamas-Linares

Texas Advanced Computing Center at University of Texas

Startups and established players have sold 2000 Qubit systems, made freely available cloud access to quantum computer processors, and created large scale open source initiatives, all taking quantum computing from science fiction to science fact. Government labs and others like IBM, Microsoft, Google are developing software for quantum computers. What problems will be solved with this quantum leap in computing power that cannot be solved today with the world’s most powerful supercomputers?

[Programming descriptions are generated by participants and do not necessarily reflect the opinions of SXSW.]

Primary Entry: Platinum Badge, Interactive Badge

Secondary Entry: Music Badge, Film Badge

Format: Panel

Event Type: Session

Track: Intelligent Future

Level: Intermediate

I wonder what ‘level’ means? I was not able to find an answer (quickly).

It was a bit surprising to find someone from D-Wave Systems (a Vancouver-based quantum computing enterprise) at an entertainment conference. Still, it shouldn’t have been. Two other examples immediately come to mind: the TED (technology, entertainment, and design) conferences have been melding technology, if not science, with creative activities of all kinds for many years (TED 2018: The Age of Amazement, April 10-14, 2018 in Vancouver [Canada]), and Beakerhead (2018 dates: Sept. 19-23) has been melding art, science, and engineering in a festival held in Calgary (Canada) since 2013. One comment about TED: it was held for several years in California (1984, 1990-2013) and moved to Vancouver in 2014.

For anyone wanting to browse the 2018 SXSW Intelligent Future sessions online, go here, or, to hear Antia Lamas-Linares talk about quantum computing, there’s the interview with Jorge Salazar (mentioned in the news release),

Ishiguro’s robots and Swiss scientist question artificial intelligence at SXSW (South by Southwest) 2017

It seems unexpected to stumble across presentations on robots and on artificial intelligence at an entertainment conference such as South by Southwest (SXSW). Here’s why I thought so, from the SXSW Wikipedia entry (Note: Links have been removed),

South by Southwest (abbreviated as SXSW) is an annual conglomerate of film, interactive media, and music festivals and conferences that take place in mid-March in Austin, Texas, United States. It began in 1987, and has continued to grow in both scope and size every year. In 2011, the conference lasted for 10 days with SXSW Interactive lasting for 5 days, Music for 6 days, and Film running concurrently for 9 days.

Lifelike robots

The 2017 SXSW Interactive featured separate presentations by Japanese roboticist, Hiroshi Ishiguro (mentioned here a few times), and EPFL (École Polytechnique Fédérale de Lausanne; Switzerland) artificial intelligence expert, Marcel Salathé.

Ishiguro’s work is the subject of Harry McCracken’s March 14, 2017 article for Fast Company (Note: Links have been removed),

I’m sitting in the Japan Factory pavilion at SXSW in Austin, Texas, talking to two other attendees about whether human beings are more valuable than robots. I say that I believe human life to be uniquely precious, whereupon one of the others rebuts me by stating that humans allow cars to exist even though they kill humans.

It’s a reasonable point. But my fellow conventioneer has a bias: It’s a robot itself, with an ivory-colored, mask-like face and visible innards. So is the third participant in the conversation, a much more human automaton modeled on a Japanese woman and wearing a black-and-white blouse and a blue scarf.

We’re chatting as part of a demo of technologies developed by the robotics lab of Hiroshi Ishiguro, based at Osaka University, and Japanese telecommunications company NTT. Ishiguro has gained fame in the field by creating increasingly humanlike robots—that is, androids—with the ultimate goal of eliminating the uncanny valley that exists between people and robotic people.

I also caught up with Ishiguro himself at the conference—his second SXSW—to talk about his work. He’s a champion of the notion that people will respond best to robots who simulate humanity, thereby creating “a feeling of presence,” as he describes it. That gives him and his researchers a challenge that encompasses everything from technology to psychology. “Our approach is quite interdisciplinary,” he says, which is what prompted him to bring his work to SXSW.

An SXSW attendee talks about robots with two robots.

If you have the time, do read McCracken’s piece in its entirety.

You can find out more about the ‘uncanny valley’ in my March 10, 2011 posting about Ishiguro’s work if you scroll down about 70% of the way to find the ‘uncanny valley’ diagram and Masahiro Mori’s description of the concept he developed.

You can read more about Ishiguro and his colleague, Ryuichiro Higashinaka, on their SXSW biography page.

Artificial intelligence (AI)

In a March 15, 2017 EPFL press release by Hilary Sanctuary, scientist Marcel Salathé poses the question: Is Reliable Artificial Intelligence Possible?,

In the quest for reliable artificial intelligence, EPFL scientist Marcel Salathé argues that AI technology should be openly available. He will be discussing the topic at this year’s edition of South by South West on March 14th in Austin, Texas.

Will artificial intelligence (AI) change the nature of work? For EPFL theoretical biologist Marcel Salathé, the answer is invariably yes. To him, a more fundamental question that needs to be addressed is who owns that artificial intelligence?

“We have to hold AI accountable, and the only way to do this is to verify it for biases and make sure there is no deliberate misinformation,” says Salathé. “This is not possible if the AI is privatized.”

AI is both the algorithm and the data

So what exactly is AI? It is generally regarded as “intelligence exhibited by machines”. Today, it is highly task specific, specially designed to beat humans at strategic games like Chess and Go, or diagnose skin disease on par with doctors’ skills.

On a practical level, AI is implemented through what scientists call “machine learning”, which means using a computer to run specifically designed software that can be “trained”, i.e. process data with the help of algorithms and correctly identify certain features from that data set. Like human cognition, AI learns by trial and error. Unlike humans, however, AI can process and recall large quantities of data, giving it a tremendous advantage over us.
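For readers who want to see what “training” looks like in practice, here’s a minimal example using scikit-learn’s bundled digits dataset; it’s purely illustrative and has nothing to do with Salathé’s systems,

```python
# Minimal "train on data, then identify features in new data" example:
# fit a small classifier on 8x8 grayscale digit images and check how
# well it labels images it has never seen.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()                      # 1,797 labeled digit images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=2000)     # "training" = fitting weights
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```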

Crucial to AI learning, therefore, is the underlying data. For Salathé, AI is defined by both the algorithm and the data, and as such, both should be publicly available.

Deep learning algorithms can be perturbed

Last year, Salathé created an algorithm to recognize plant diseases. With more than 50,000 photos of healthy and diseased plants in the database, the algorithm uses artificial intelligence to diagnose plant diseases with the help of your smartphone. As for human disease, a recent study by a Stanford Group on cancer showed that AI can be trained to recognize skin cancer slightly better than a group of doctors. The consequences are far-reaching: AI may one day diagnose our diseases instead of doctors. If so, will we really be able to trust its diagnosis?

These diagnostic tools use data sets of images to train and learn. But visual data sets can be perturbed in ways that prevent deep learning algorithms from correctly classifying images. Deep neural networks are highly vulnerable to visual perturbations that are practically impossible to detect with the naked eye, yet cause the AI to misclassify images.
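The best-known recipe for such perturbations is the fast gradient sign method (FGSM) of Goodfellow and colleagues: nudge every pixel a tiny step in whichever direction most increases the classifier’s error. A toy numpy sketch on a linear model (not any specific medical system),

```python
# Toy fast gradient sign method (FGSM) on a linear classifier:
# a small, uniform nudge to every pixel flips a confident prediction.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=784)                    # weights of a "trained" model
x = 0.04 * np.sign(w)                       # an input the model is sure about

def predict(v):
    """Probability the model assigns to class 1 (logistic score)."""
    return 1 / (1 + np.exp(-w @ v))

eps = 0.05                                  # per-pixel perturbation budget
x_adv = x - eps * np.sign(w)                # step each pixel against the score

print(f"clean: {predict(x):.3f}   perturbed: {predict(x_adv):.3f}")
# clean: ~1.000   perturbed: ~0.002
```

Because the attack spreads the change across thousands of pixels, each pixel barely moves while the classifier’s confidence collapses, which is exactly the vulnerability the press release describes.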

In future implementations of AI-assisted medical diagnostic tools, these perturbations pose a serious threat. More generally, the perturbations are real and may already be affecting the filtered information that reaches us every day. These vulnerabilities underscore the importance of certifying AI technology and monitoring its reliability.

h/t phys.org March 15, 2017 news item

As I noted earlier, these are not the kind of presentations you’d expect at an ‘entertainment’ festival.

‘Genius’ gamers develop mind-controlled skateboard

Chaotic Moon Labs, developer of the mind-controlled skateboard ‘Board of Imagination’, is a mobile games company whose people continually inform you that they are geniuses/smarter than you are/etc. Clearly not a shy group, nor believers in the ‘underpromise and overdeliver’ philosophy of business. They have recently announced (from a Feb. 26, 2012 news item by Nancy Owano on physorg.com) their latest project,

The Board of Imagination is a skateboard that carries the same Samsung tablet with Windows 8 and the same 800 watt electric motor as the earlier skateboard [Board of Awesomeness], but now sports a headset. With it, the board will read the rider’s mind and will move anywhere the rider imagines.

The skateboard can translate brain waves into action: the user visualizes a point off in the distance and thinks about the speed at which to travel to get there. The skateboard does the rest.
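Chaotic Moon hasn’t published its code, so here is a purely hypothetical Python sketch of the kind of control loop described: read a ‘focus’ score from the headset, smooth the noisy signal, and map it to motor throttle (all names and thresholds are invented for illustration),

```python
# Hypothetical brain-to-throttle control loop; the headset and motor
# interfaces below are invented stand-ins, not a real SDK.
import random
import time

def read_focus() -> float:
    """Stand-in for an EEG SDK call returning a 0.0-1.0 'focus' score."""
    return random.random()                  # simulated headset output

def set_throttle(level: float) -> None:
    """Stand-in for the motor controller interface."""
    print(f"throttle: {level:.2f}")

ALPHA = 0.2                                 # smoothing factor for noisy EEG
THRESHOLD = 0.6                             # ignore weak or ambiguous signals
MAX_THROTTLE = 0.5                          # cap the motor's output

smoothed = 0.0
for _ in range(50):                         # a few seconds of the control loop
    smoothed = ALPHA * read_focus() + (1 - ALPHA) * smoothed
    if smoothed > THRESHOLD:                # rider is concentrating: go
        gain = (smoothed - THRESHOLD) / (1 - THRESHOLD)
        set_throttle(gain * MAX_THROTTLE)
    else:                                   # rider relaxed or signal lost: stop
        set_throttle(0.0)
    time.sleep(0.1)                         # ~10 Hz update rate
```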

This reminds me of a mind control project with toy racing cars from B-Reel, a European advertising company (mentioned in my Oct. 6, 2011 posting), although this time it’s a much larger device. Here’s the YouTube-posted video produced by Chaotic Moon Labs,

I wonder if this Board of Imagination is going to be shown at the upcoming SXSW (South by Southwest) shows, which run from March 9-18, 2012 in Austin, Texas, where this company (Chaotic Moon; the lab is their R&D [research and development] group) is located, according to Owano’s article.

An EPOC headset from Emotiv is being used as the mind-reading device, which somehow translates your brain waves into commands that your skateboard obeys. Emotiv and its sister company, Emotiv Lifesciences, by the way, were founded by Tan Le, who gave a talk about her company and her work at TEDxWomen. The video is here; I’ve not had time to watch it yet. So if you get there before I do, please let me know what you think.