Tag Archives: University of Plymouth (UK)

It’s about sound: (1) Earable computing? (2) The end of the cacophony in hospitals?

I have two items, both concerning sound but in very different ways.

Phones in your ears

Researchers at the University of Illinois are working on smartphones you can wear in your ears, much as you would an earbud. The work is in its very earliest stages as they try to establish a new field of research. There is a proposed timeline,

Caption: Earable computing timeline, according to SyNRG. Credit: Romit Roy Choudhury, The Grainger College of Engineering

Here’s more from a December 2, 2020 University of Illinois Grainger College of Engineering news release (also on EurekAlert but published on December 15, 2020),

CSL’s [Coordinated Science Laboratory] Systems and Networking Research Group (SyNRG) is defining a new sub-area of mobile technology that they call “earable computing.” The team believes that earphones will be the next significant milestone in wearable devices, and that new hardware, software, and apps will all run on this platform.

“The leap from today’s earphones to ‘earables’ would mimic the transformation that we had seen from basic phones to smartphones,” said Romit Roy Choudhury, professor in electrical and computer engineering (ECE). “Today’s smartphones are hardly a calling device anymore, much like how tomorrow’s earables will hardly be a smartphone accessory.”

Instead, the group believes tomorrow’s earphones will continuously sense human behavior, run acoustic augmented reality, have Alexa and Siri whisper just-in-time information, track user motion and health, and offer seamless security, among many other capabilities.

The research questions that underlie earable computing draw from a wide range of fields, including sensing, signal processing, embedded systems, communications, and machine learning. The SyNRG team is on the forefront of developing new algorithms while also experimenting with them on real earphone platforms with live users.

Computer science PhD student Zhijian Yang and other members of the SyNRG group, including his fellow students Yu-Lin Wei and Liz Li, are leading the way. They have published a series of papers in this area, starting with one on the topic of hollow noise cancellation that was published at ACM SIGCOMM 2018. Recently, the group had three papers published at the 26th Annual International Conference on Mobile Computing and Networking (ACM MobiCom) on three different aspects of earables research: facial motion sensing, acoustic augmented reality, and voice localization for earphones.

In Ear-AR: Indoor Acoustic Augmented Reality on Earphones, the group looks at how smart earphone sensors can track human movement, and, depending on the user’s location, play 3D sounds in the ear.

“If you want to find a store in a mall,” says Zhijian, “the earphone could estimate the relative location of the store and play a 3D voice that simply says ‘follow me.’ In your ears, the sound would appear to come from the direction in which you should walk, as if it’s a voice escort.”
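The news release doesn’t spell out how the 3D voice would be generated, but the core trick, steering a sound so it seems to come from a particular direction, can be sketched with simple interaural time and level differences. Everything in the following Python snippet (the function names, head radius, gains, and made-up positions) is my own illustrative assumption, not the Ear-AR implementation,

```python
# Minimal sketch (not the Ear-AR implementation): render a "follow me" cue so
# it appears to come from the direction of a target, using a crude interaural
# time/level difference model. All names and numbers are illustrative.
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # m, an average head radius
SAMPLE_RATE = 44100

def relative_azimuth(user_pos, user_heading_deg, target_pos):
    """Angle of the target relative to where the user faces (degrees, + = right)."""
    dx, dy = target_pos[0] - user_pos[0], target_pos[1] - user_pos[1]
    bearing = np.degrees(np.arctan2(dy, dx))          # heading measured like atan2
    return (bearing - user_heading_deg + 180) % 360 - 180

def spatialize(mono, azimuth_deg):
    """Return a stereo signal with simple ITD/ILD cues for the given azimuth."""
    az = np.radians(azimuth_deg)
    # Woodworth-style interaural time difference, converted to whole samples.
    itd = HEAD_RADIUS / SPEED_OF_SOUND * (az + np.sin(az))
    delay = int(round(abs(itd) * SAMPLE_RATE))
    # Simple level difference: attenuate the far ear by up to 6 dB.
    near_gain, far_gain = 1.0, 10 ** (-6 * abs(np.sin(az)) / 20)
    delayed = np.concatenate([np.zeros(delay), mono])[: len(mono)]
    if azimuth_deg >= 0:          # target to the right
        left, right = far_gain * delayed, near_gain * mono
    else:                         # target to the left
        left, right = near_gain * mono, far_gain * delayed
    return np.stack([left, right], axis=1)

# Example: a 0.5 s tone standing in for the "follow me" voice prompt.
t = np.linspace(0, 0.5, int(0.5 * SAMPLE_RATE), endpoint=False)
prompt = 0.3 * np.sin(2 * np.pi * 440 * t)
az = relative_azimuth(user_pos=(0, 0), user_heading_deg=90, target_pos=(5, 5))
stereo = spatialize(prompt, az)   # play through the earphones
```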

The second paper, EarSense: Earphones as a Teeth Activity Sensor, looks at how earphones could sense facial and in-mouth activities such as teeth movements and taps, enabling a hands-free modality of communication to smartphones. Moreover, various medical conditions manifest in teeth chatter, and the proposed technology would make it possible to identify them by wearing earphones during the day. In the future, the team is planning to look into analyzing facial muscle movements and emotions with earphone sensors.
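The release doesn’t describe EarSense’s actual sensing pipeline either; as a rough illustration of the idea, here’s a toy sketch that flags tooth-tap-like bursts in a vibration signal, assuming the earphone exposes some kind of motion or vibration stream. The sampling rate, threshold, and synthetic data are all made up,

```python
# Toy sketch (not the EarSense pipeline): flag tooth-tap-like transients in a
# vibration signal from an in-ear motion sensor. Sensor stream, sampling rate,
# and threshold are illustrative assumptions.
import numpy as np

SAMPLE_RATE = 1000  # Hz, assumed sensor rate

def detect_taps(signal, window=0.02, threshold=4.0, refractory=0.15):
    """Return times (s) of short, high-energy bursts relative to the baseline."""
    win = int(window * SAMPLE_RATE)
    energy = np.convolve(signal ** 2, np.ones(win) / win, mode="same")
    baseline = np.median(energy)
    taps, last = [], -np.inf
    for i, e in enumerate(energy):
        t = i / SAMPLE_RATE
        if e > threshold * baseline and t - last > refractory:
            taps.append(t)
            last = t
    return taps

# Synthetic example: quiet background with two tap-like bursts.
rng = np.random.default_rng(0)
sig = 0.01 * rng.standard_normal(3 * SAMPLE_RATE)
for start in (1.0, 2.2):
    i = int(start * SAMPLE_RATE)
    sig[i:i + 30] += 0.5 * np.hanning(30)
print(detect_taps(sig))  # approximately [1.0, 2.2]
```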

The third publication, Voice Localization Using Nearby Wall Reflections, investigates the use of algorithms to detect the direction of a sound. This means that if Alice and Bob are having a conversation, Bob’s earphones would be able to tune into the direction Alice’s voice is coming from.
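The paper’s wall-reflection technique goes well beyond what can be shown here, but the classic two-microphone baseline such systems build on, estimating a time difference of arrival by cross-correlation, is easy to sketch. The microphone spacing and sampling rate below are illustrative assumptions,

```python
# Minimal sketch of the classic two-microphone baseline behind voice
# direction-of-arrival estimation (the MobiCom paper's wall-reflection method
# is more involved). Mic spacing and sampling rate are assumed values.
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
MIC_SPACING = 0.2        # m between the left and right earphone mics (assumed)
SAMPLE_RATE = 48000

def direction_of_arrival(left, right):
    """Estimate source angle (degrees, positive = toward the left ear)."""
    corr = np.correlate(right, left, mode="full")
    lag = np.argmax(corr) - (len(left) - 1)   # positive: left mic hears it first
    tdoa = lag / SAMPLE_RATE
    # Far-field model: tdoa = spacing * sin(angle) / speed_of_sound
    return np.degrees(np.arcsin(np.clip(tdoa * SPEED_OF_SOUND / MIC_SPACING, -1, 1)))

# Synthetic check: a voice-like chirp arriving from about 30 degrees to the left.
t = np.linspace(0, 0.1, int(0.1 * SAMPLE_RATE), endpoint=False)
voice = np.sin(2 * np.pi * (200 + 800 * t) * t)
delay = int(round(MIC_SPACING * np.sin(np.radians(30)) / SPEED_OF_SOUND * SAMPLE_RATE))
left = np.concatenate([voice, np.zeros(delay)])    # reaches the left mic first
right = np.concatenate([np.zeros(delay), voice])
print(direction_of_arrival(left, right))           # roughly 30
```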

“We’ve been working on mobile sensing and computing for 10 years,” said Wei. “We have a lot of experience to define this emerging landscape of earable computing.”

Haitham Hassanieh, assistant professor in ECE, is also involved in this research. The team has been funded by the NSF [US National Science Foundation] and the NIH [National Institutes of Health], as well as by companies like Nokia and Google. See more at the group’s Earable Computing website.

Noise hurts hospital caregivers and patients

A December 11, 2020 Canadian Broadcasting Corporation (CBC) article features one of the corporation’s Day 6 Radio programmes. This one was about proposed sound design in hospitals as imagined by musician Yoko Sen and, on a converging track, by Judy Edworthy, a professor of applied psychology,

As an ambient electronic musician, Yoko Sen spends much of her time building intricate, soothing soundscapes. 

But when she was hospitalized in 2012, she found herself immersed in a very different sound environment.

Already panicked by her health condition, she couldn’t tune out the harsh tones of the medical machinery in her hospital room.

Instead, she zeroed in on two machines — a patient monitor and a bed fall alarm. Their piercing tones had blended together to create a diminished fifth, a musical interval so offensive that it was banned in medieval churches.
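For anyone curious about the arithmetic, a diminished fifth (or tritone) is an interval of six semitones, which in equal temperament works out to a frequency ratio of about 1.414. The article doesn’t give the actual alarm pitches, so the frequencies in this little check are hypothetical,

```python
# The two frequencies below are made-up examples just to show the arithmetic
# behind a "diminished fifth" (tritone): six semitones, a ratio of 2**(6/12).
import math

def semitones_between(f1_hz, f2_hz):
    """Interval between two pitches in equal-tempered semitones."""
    return 12 * math.log2(f2_hz / f1_hz)

monitor_tone = 523.25   # hypothetical patient-monitor pitch (C5)
bed_alarm_tone = 739.99 # hypothetical bed-alarm pitch (F#5)
interval = semitones_between(monitor_tone, bed_alarm_tone)
print(round(interval, 2), "semitones")   # ~6.0, i.e. a tritone
```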

Sen went on to start Sen Sound, a Washington, D.C.-based social enterprise dedicated to improving the sound of hospitals. 

‘Alarms are ignored, missed’

The volume of noise in today’s hospitals isn’t just unpleasant. It can also put patients’ health at risk.

According to Judy Edworthy, a professor of applied psychology at the University of Plymouth, the sheer number of alarms going off each day can spark a sort of auditory burnout among doctors and nurses.

“Alarms are ignored, missed, or generally just not paid attention to,” says Edworthy.

In a hospital environment that’s also inundated with announcements from overhead speakers, ringing phones, trolleys, and all manner of other insidious background sound, it can be difficult for staff to accurately locate and differentiate between the alarms.

Worse yet, in many hospitals, many of the alarms ringing out across the ward are false. Studies have shown that as many as 99 per cent of clinical alarms are inaccurate.

The resulting problem has become so widespread that a term has been coined to describe it: alarm fatigue.

Raising the alarm

Sen’s company, launched in 2016, has partnered with hospitals and design incubators and even collaborates directly with medical device companies seeking to redesign their alarms.

Over the years, Sen has interviewed countless patients and hospital staff who share her frustration with noise. 

But when she first sat down with the engineers responsible for the devices’ design, she found that they tended to treat the sound of their devices as an “afterthought.” 

“When people first started to develop medical devices … people thought it was a good idea to have one or two sounds to demonstrate or to indicate when, let’s say for example, the patient’s temperature … exceeded some kind of range,” she [Edworthy] said.

“There wasn’t really any design put into this; it was just a sound that people thought would get your attention by being very loud and aversive and so on.”

Edworthy, who has spent decades studying medical alarm design, took things one step further this summer. In July, the International Organization for Standardization (ISO) approved a new set of alarm designs, created by Edworthy, that mimic the natural hospital environment.

The standards, which are accepted by Health Canada, include an electronic heartbeat sound for alarms related to cardiac issues; and a rattling pillbox for drug administration. 

Her [Sen’s] team continues to work with companies to improve the sound of existing medical devices. But she has also begun to think more deeply about the long-term future of hospital sound — especially as it relates to the end-of-life experience.

“A study shows that hearing can be the last sense to go when we die,” says Sen. 

“It’s really beyond upsetting to think that many people end up dying in acute care hospitals and there are all these medical devices.”

As part of her interviews with patients, Sen has asked what sounds they would most like to hear at the end of their lives — and she discovered a common theme in their responses.

“I asked this question in many different countries, but they are all sounds that symbolize life,” said Sen. 

“Sounds of nature, sound of water, voices of loved ones. It’s all the sounds of life that people say they want to hear.”

As the pandemic continues to affect hospitals around the world, those efforts have taken on a new resonance — and Sen hopes the current crisis might serve as an opportunity to help usher in a more healing soundscape.

“My own health crisis almost gave me a new pathway in life,” she said.

There’s an embedded audio file from the Day 6 radio programme featuring this material on sound design in hospitals, and there’s an embedded video of Sen delivering a talk about her work, in the December 11, 2020 Canadian Broadcasting Corporation (CBC) article.

Sen and Edworthy give hope for a less noisy future and better care in hospitals for the living and the dying. Here’s a link to Sen Sound.

Live music by teleportation? Catch up. It’s already happened.

Dr. Alexis Kirke first graced this blog about four years ago, in a July 8, 2016 posting titled, Cornwall (UK) connects with University of Southern California for performance by a quantum computer (D-Wave) and mezzo soprano Juliette Pochin.

Kirke now returns with a study showing how teleportation helped to create a live performance piece, from a July 2, 2020 news item on ScienceDaily,

Teleportation is most commonly the stuff of science fiction and, for many, would conjure up the immortal phrase “Beam me up, Scotty.”

However, a new study has described how its status in science fact could actually be employed as another, and perhaps unlikely, form of entertainment — live music.

Dr Alexis Kirke, Senior Research Fellow in the Interdisciplinary Centre for Computer Music Research at the University of Plymouth (UK), has for the first time shown that a human musician can communicate directly with a quantum computer via teleportation.

The result is a high-tech jamming session, through which a blend of live human and computer-generated sounds comes together to create a unique performance piece.

A July 2, 2020 Plymouth University press release (also on EurekAlert), which originated the news item, offers more detail about this latest work along with some information about the 2016 performance and how it all provides insight into how quantum computing might function in the future,

Speaking about the study, published in the current issue of the Journal of New Music Research, Dr Kirke said: “The world is racing to build the first practical and powerful quantum computers, and whoever succeeds first will have a scientific and military advantage because of the extreme computing power of these machines. This research shows for the first time that this much-vaunted advantage can also be helpful in the world of making and performing music. No other work has shown this previously in the arts, and it demonstrates that quantum power is something everyone can appreciate and enjoy.”

Quantum teleportation is the ability to instantaneously transmit quantum information over vast distances, with scientists having previously used it to send information from Earth to an orbiting satellite over 870 miles away.
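For readers who want a concrete picture of what “transmitting quantum information” involves, here’s a toy statevector simulation of the textbook one-qubit teleportation protocol. It is not the satellite experiment or Dr Kirke’s setup, just the general recipe: share a Bell pair, make a Bell measurement, send two classical bits, apply corrections. That classical step is also why, as Kirke notes further down, teleportation can’t move information faster than light,

```python
# Toy statevector simulation of textbook one-qubit quantum teleportation
# (a sketch of the general protocol, not the satellite experiment or Dr
# Kirke's setup). Note the two classical bits Alice must send to Bob.
import numpy as np

rng = np.random.default_rng(1)
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def on_qubit(gate, qubit):
    """Lift a single-qubit gate onto qubit 0, 1 or 2 of the 3-qubit register."""
    mats = [I2, I2, I2]
    mats[qubit] = gate
    return np.kron(np.kron(mats[0], mats[1]), mats[2])

def cnot(control, target):
    """CNOT on the 3-qubit register (qubit 0 is the most significant bit)."""
    U = np.zeros((8, 8), dtype=complex)
    for basis in range(8):
        bits = [(basis >> (2 - q)) & 1 for q in range(3)]
        if bits[control]:
            bits[target] ^= 1
        U[(bits[0] << 2) | (bits[1] << 1) | bits[2], basis] = 1
    return U

# Alice's message qubit a|0> + b|1> (random), plus a Bell pair shared between
# Alice (qubit 1) and Bob (qubit 2).
a, b = rng.normal(size=2) + 1j * rng.normal(size=2)
a, b = np.array([a, b]) / np.sqrt(abs(a) ** 2 + abs(b) ** 2)
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
state = np.kron(np.array([a, b]), bell)

# Alice's Bell measurement: CNOT(0 -> 1), H on qubit 0, measure qubits 0 and 1.
state = on_qubit(H, 0) @ cnot(0, 1) @ state
probs = np.abs(state) ** 2
outcome = rng.choice(8, p=probs / probs.sum())
m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1

# Collapse onto the measured branch; what remains on qubit 2 is Bob's state.
keep = np.array([((i >> 2) & 1) == m0 and ((i >> 1) & 1) == m1 for i in range(8)])
branch = state * keep
branch /= np.linalg.norm(branch)
bob = branch.reshape(2, 2, 2)[m0, m1, :]

# Bob applies Pauli corrections chosen by the two classical bits he receives.
if m1:
    bob = X @ bob
if m0:
    bob = Z @ bob

print(np.allclose(bob, [a, b]))   # True: Bob now holds Alice's original qubit
```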

In the current study, Dr Kirke describes how he used a system called MIq (Multi-Agent Interactive qgMuse), in which an IBM quantum computer executes a methodology called Grover’s Algorithm.

Discovered by Lov Grover at Bell Labs in 1996, it was the second main quantum algorithm (after Shor’s algorithm) and gave a quadratic speed advantage over traditional computing for searching unstructured data.

In this instance, it allows the dynamic solving of musical logical rules which, for example, could prevent dissonance or keep to ¾ instead of common time.

It is significantly faster than any classical computer algorithm, and Dr Kirke said that speed was essential because there is actually no way to transmit quantum information other than through teleportation.
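The press release doesn’t describe how MIq actually encodes musical rules for Grover’s algorithm, but the amplitude-amplification idea at its heart can be simulated classically on a toy problem. In the sketch below (the note set, consonance rule, and function names are my own illustrative choices), an “oracle” marks accompaniment notes that form a perfect consonance with a melody note, and a single Grover iteration concentrates nearly all of the measurement probability on those notes. On a real quantum computer, this kind of unstructured search needs only about the square root of the number of oracle queries a classical search would,

```python
# Toy classical simulation of Grover-style amplitude amplification (not the
# MIq system itself; the rule, note set, and names are illustrative). The
# "oracle" marks accompaniment notes consonant with the melody note, and the
# Grover iteration boosts their measurement probability.
import numpy as np

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
PERFECT_CONSONANCES = {0, 5, 7}   # unison, perfect fourth, perfect fifth (semitones)

def oracle_flags(melody_note):
    """True for candidate notes forming a perfect consonance with the melody note."""
    m = NOTES.index(melody_note)
    return np.array([(i - m) % 12 in PERFECT_CONSONANCES for i in range(12)])

def grover_amplitudes(melody_note, iterations=1):
    """Amplitudes over the 12 candidate notes after the given Grover iterations."""
    marked = oracle_flags(melody_note)
    amps = np.ones(12) / np.sqrt(12)            # start in a uniform superposition
    for _ in range(iterations):
        amps = np.where(marked, -amps, amps)    # oracle: flip marked amplitudes
        amps = 2 * amps.mean() - amps           # diffusion: inversion about the mean
    return amps

# With 3 of 12 candidates marked, a single iteration pushes essentially all of
# the probability onto the rule-satisfying notes (C, D and G against a G melody).
probs = grover_amplitudes("G") ** 2
for note, p in zip(NOTES, probs):
    if p > 0.01:
        print(f"{note}: {p:.2f}")
```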

The result was that when the theme from Game of Thrones was played on the piano, the computer – a 14-qubit machine housed at IBM in Melbourne – rapidly generated accompanying music that was transmitted back in response.

Dr Kirke, who in 2016 staged the first ever duet between a live singer and a quantum supercomputer, said: “At the moment there are limits to how complex a real-time computer jamming system can be. The number of musical rules that a human improviser knows intuitively would simply take a computer too long to solve to real-time music. Shortcuts have been invented to speed up this process in rule-based AI music, but using the quantum computer speed-up has not been tried before. So while teleportation cannot move information faster than the speed of light, if remote collaborators want to connect up their quantum computers – which they are using to increase the speed of their musical AIs – it is 100% necessary. Quantum information simply cannot be transmitted using normal digital transmission systems.”

Caption: Dr Alexis Kirke (right) and soprano Juliette Pochin during the first duet between a live singer and a quantum supercomputer. Credit: University of Plymouth

Here’s a link to and a citation for the latest research,

Testing a hybrid hardware quantum multi-agent system architecture that utilizes the quantum speed advantage for interactive computer music by Alexis Kirke. Journal of New Music Research, Volume 49, Issue 3 (2020), pages 209–230. DOI: https://doi.org/10.1080/09298215.2020.1749672 Published online: 13 April 2020

This paper appears to be open access.