Tag Archives: Australia

Metacrime: the line between the virtual and reality

An August 15, 2024 Griffith University (Australia) press release (also on EurekAlert) presents research on a relatively new type of crime, Note: A link has been removed,

If you thought your kids were away from harm playing multi-player games through VR headsets while in their own bedrooms, you may want to sit down to read this.

Griffith University’s Dr Ausma Bernot teamed up with researchers from Monash University, Charles Sturt University and University of Technology Sydney to investigate what has been termed as ‘metacrime’ – attacks, crimes or inappropriate activities that occur within virtual reality environments.

The ‘metaverse’ refers to the virtual world, where users of VR headsets can choose an avatar to represent themselves as they interact with other users’ avatars or move through other 3D digital spaces.

While the metaverse can be used for anything from meetings (where it will feel as though you are in the same room as avatars of other people instead of just seeing them on a screen) to wandering through national parks around the world without leaving your living room, gaming is by far its most popular use.   

Dr Bernot said the technology had evolved incredibly quickly.

“Using this technology is super fun and it’s really immersive,” she said.

“You can really lose yourself in those environments.

“Unfortunately, while those new environments are very exciting, they also have the potential to enable new crimes.

“While the headsets that enable us to have these experiences aren’t a commonly owned item yet, they’re growing in popularity and we’ve seen reports of sexual harassment or assault against both adults and kids.”

In a December 2023 report, the Australian eSafety Commissioner estimated around 680,000 adults in Australia are engaged in the metaverse.

This followed a survey conducted in November and December 2022 by researchers from the UK’s Center for Countering Digital Hate, who recorded 11 hours and 30 minutes of user interactions on Meta’s Oculus headset in the popular app VRChat.

The researchers found most users had been faced with at least one negative experience in the virtual environment, including being called offensive names, receiving repeated unwanted messages or contact, being provoked to respond to something or to start an argument, being challenged about cultural identity or being sent unwanted inappropriate content.

Eleven per cent had been exposed to a sexually graphic virtual space and nine per cent had been touched (virtually) in a way they didn’t like.

Of these respondents, 49 per cent said the experience had a moderate to extreme impact on their mental or emotional wellbeing.

With the two largest user groups being minors and men, Dr Bernot said it was important for parents to monitor their children’s activity or consider limiting their access to multi-player games.

“Minors are more vulnerable to grooming and other abuse,” she said.

“They may not know how to deal with these situations, and while there are some features like a ‘safety bubble’ within some games, or of course the simple ability to just take the headset off, once immersed in these environments it does feel very real.

“It’s somewhere in between a physical attack and for example, a social media harassment message – you’ll still feel that distress and it can take a significant toll on a user’s wellbeing.

“It is a real and palpable risk.”

Monash University’s You Zhou said there had already been many reports of virtual rape, including one in the United Kingdom where police launched an investigation into the case of a 16-year-old girl whose avatar was attacked, causing psychological and emotional trauma similar to that of an attack in the physical world.

“Before the emergence of the metaverse we could not have imagined how rape could be virtual,” Mr Zhou said.

“When immersed in this world of virtual reality, and particularly when using higher quality VR headsets, users will not necessarily stop to consider whether the experience is reality or virtuality.

“While there may not be physical contact, victims – mostly young girls – strongly claim the feeling of victimisation was real.

“Without physical signs on a body, and unless the interaction was recorded, it can be almost impossible to show evidence of these experiences.”

With use of the metaverse expected to grow exponentially in coming years, the research team’s findings highlight a need for metaverse companies to instil clear regulatory frameworks for their virtual environments to make them safe for everyone to inhabit.

Here’s a link to and a citation for the paper,

Metacrime and Cybercrime: Exploring the Convergence and Divergence in Digital Criminality by You Zhou, Milind Tiwari, Ausma Bernot & Kai Lin. Asian Journal of Criminology 19, 419–439 (2024) DOI: https://doi.org/10.1007/s11417-024-09436-y Published online: 09 August 2024 Issue Date: September 2024

This paper is open access.

Soundscapes comprised of underground acoustics can help amplify soil health

For anyone who doesn’t like cartoons, this looks a lot cuter than the information it conveys,

An August 16, 2024 news item on ScienceDaily announces the work,

Barely audible to human ears, healthy soils produce a cacophony of sounds in many forms—a bit like an underground rave concert of bubble pops and clicks.

Special recordings made by Flinders University ecologists in Australia show that this chaotic mixture of soundscapes can be a measure of the diversity of tiny living animals in the soil, which create sounds as they move and interact with their environment.

An August 16, 2024 Flinders University press release (also on EurekAlert), which originated the news item, describes a newish (more about newish later) field of research ‘eco-acoustics’ and technical details about the researchers’ work, Note: A link has been removed,

With 75% of the world’s soils degraded, the teeming community of species that live underground faces a dire future without restoration, says microbial ecologist Dr Jake Robinson, from the Frontiers of Restoration Ecology Lab in the College of Science and Engineering at Flinders University.

This new field of research aims to investigate the vast, teeming hidden ecosystems where almost 60% of the Earth’s species live, he says.

“Restoring and monitoring soil biodiversity has never been more important.

“Although still in its early stages, ‘eco-acoustics’ is emerging as a promising tool to detect and monitor soil biodiversity and has now been used in Australian bushland and other ecosystems in the UK.

“The acoustic complexity and diversity are significantly higher in revegetated and remnant plots than in cleared plots, both in-situ and in sound attenuation chambers.

“The acoustic complexity and diversity are also significantly associated with soil invertebrate abundance and richness.”

The latest study, including Flinders University expert Associate Professor Martin Breed and Professor Xin Sun from the Chinese Academy of Sciences, compared results from acoustic monitoring of remnant vegetation to degraded plots and land that was revegetated 15 years ago. 

The passive acoustic monitoring used various tools and indices to measure soil biodiversity over five days in the Mount Bold region in the Adelaide Hills in South Australia. A below-ground sampling device and sound attenuation chamber were used to record soil invertebrate communities, which were also manually counted.   

“It’s clear the acoustic complexity and diversity of our samples are associated with soil invertebrate abundance – from earthworms and beetles to ants and spiders – and it seems to be a clear reflection of soil health,” says Dr Robinson.

“All living organisms produce sounds, and our preliminary results suggest different soil organisms make different sound profiles depending on their activity, shape, appendages and size.

“This technology holds promise in addressing the global need for more effective soil biodiversity monitoring methods to protect our planet’s most diverse ecosystems.”
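For anyone wondering what an ‘acoustic complexity’ measurement actually involves, here’s a minimal sketch of how one widely used eco-acoustic metric, the acoustic complexity index (ACI), can be computed from a recording. To be clear, this is my own illustration using standard Python scientific libraries, not the Flinders team’s analysis pipeline, and the ‘soil_clip.wav’ filename is just a placeholder,

```python
# Illustrative sketch: computing an Acoustic Complexity Index (ACI) for a
# mono audio clip. This is a generic eco-acoustic metric, not the Flinders
# team's actual analysis pipeline.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

def acoustic_complexity_index(samples, sample_rate, nperseg=512):
    """Return a single ACI value for one audio clip.

    ACI sums, per frequency bin, the absolute intensity differences between
    adjacent time frames, normalised by total intensity in that bin. Busier,
    more 'active' soundscapes tend to score higher.
    """
    freqs, times, sxx = spectrogram(samples.astype(float), fs=sample_rate,
                                    nperseg=nperseg)
    diffs = np.abs(np.diff(sxx, axis=1)).sum(axis=1)   # per-bin variation over time
    totals = sxx.sum(axis=1) + 1e-12                   # avoid divide-by-zero
    return float((diffs / totals).sum())

# Hypothetical usage: 'soil_clip.wav' stands in for one ~3-minute recording.
if __name__ == "__main__":
    rate, data = wavfile.read("soil_clip.wav")
    if data.ndim > 1:                                  # mix down if stereo
        data = data.mean(axis=1)
    print("ACI:", acoustic_complexity_index(data, rate))
```

In a real study, an index like this would be computed for many clips and compared alongside other indices and manual invertebrate counts, which is roughly what the paper below describes.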

This is a copy of the research paper’s graphical abstract,

Caption: Acoustic monitoring was carried out on soil in remnant vegetation as well as degraded plots and land that was revegetated 15 years ago. Credit: Flinders University

Here’s a link to and a citation for the paper,

Sounds of the underground reflect soil biodiversity dynamics across a grassy woodland restoration chronosequence by Jake M. Robinson, Alex Taylor, Nicole Fickling, Xin Sun, Martin F. Breed. Journal of Applied Ecology Volume 61, Issue 9 September 2024 Pages 2047-2060 DOI: https://doi.org/10.1111/1365-2664.14738 First published online: 15 August 2024

This paper is open access.

‘Newish’ eco-acoustics

Like a lot of newish scientific terms, ‘eco-acoustics’ appears to be evolving. A search for the term led me to the Acoustic ecology entry on Wikipedia, Note: Links have been removed,

Acoustic ecology, sometimes called ecoacoustics or soundscape studies, is a discipline studying the relationship, mediated through sound, between human beings and their environment.[1] Acoustic ecology studies started in the late 1960s with R. Murray Schafer a musician, composer and former professor of communication studies at Simon Fraser University (Vancouver, British Columbia, Canada) with the help of his team there[2] as part of the World Soundscape Project. The original WSP team included Barry Truax and Hildegard Westerkamp, Bruce Davies and Peter Huse, among others. The first study produced by the WSP was titled The Vancouver Soundscape. This innovative study raised the interest of researchers and artists worldwide, creating enormous growth in the field of acoustic ecology. In 1993, the members of the by now large and active international acoustic ecology community formed the World Forum for Acoustic Ecology.[3]

Soundscapes are composed of the anthrophony, geophony and biophony of a particular environment. They are specific to location and change over time.[12] Acoustic ecology aims to study the relationship between these things, i.e. the relationship between humans, animals and nature, within these soundscapes. These relationships are delicate and subject to disruption by natural or man-made means.[9]

The acoustic niche hypothesis, as proposed by acoustic ecologist Bernie Krause in 1993,[23] refers to the process in which organisms partition the acoustic domain, finding their own niche in frequency and/or time in order to communicate without competition from other species. The theory draws from the ideas of niche differentiation and can be used to predict differences between young and mature ecosystems. Similar to how interspecific competition can place limits on the number of coexisting species that can utilize a given availability of habitats or resources, the available acoustic space in an environment is a limited resource that is partitioned among those species competing to utilize it.[24]

In mature ecosystems, species will sing at unique bandwidths and specific times, displaying a lack of interspecies competition in the acoustic environment. Conversely, in young ecosystems, one is more likely to encounter multiple species using similar frequency bandwidths, which can result in interference between their respective calls, or a complete lack of activity in uncontested bandwidths. Biological invasions can also result in interference in the acoustic niche, with non-native species altering the dynamics of the native community by producing signals that mask or degrade native signals. This can cause a variety of ecological impacts, such as decreased reproduction, aggressive interactions, and altered predator-prey dynamics.[25] The degree of partitioning in an environment can be used to indicate ecosystem health and biodiversity.

Earlier bioacoustic research at Flinders University has been mentioned in a June 14, 2023 posting “The sound of dirt.” Finally, whether you spell it eco-acoustics or ecoacoustics or call it acoustic ecology, it is a fascinating way of understanding the natural and not-so-natural world we live in.

Regenerate damaged skin, cartilage, and bone with help from silkworms?

A July 24, 2024 news item on phys.org highlights research into regenerating bone and skin, Note: A link has been removed,

Researchers are exploring new nature-based solutions to stimulate skin and bone repair.

In the cities of Trento and Rovereto in northern Italy and Bangkok in Thailand, scientists are busy rearing silkworms in nurseries. They’re hoping that the caterpillars’ silk can regenerate human tissue. For such a delicate medical procedure, only thoroughbreds will do.

“By changing the silkworm, you can change the chemistry,” said Professor Antonella Motta, a researcher in bioengineering at the University of Trento in Italy. That could, in turn, affect clinical outcomes. “This means the quality control should be very strict.”

Silk has been used in surgical sutures for hundreds of years and is now emerging as a promising nature-based option for triggering human tissue to self-regenerate. Researchers are also studying crab, shrimp and mussel shells and squid skin and bone for methods of restoring skin, bone and cartilage. This is particularly relevant as populations age.

A July 23, 2024 article by Gareth Willmer for Horizon Magazine, the EU (European Union) research & innovation magazine, which originated the news item, provides more details,

‘Tissue engineering is a new strategy to solve problems caused by pathologies or trauma to the organs, as an alternative to transplants or artificial device implantations,’ said Motta, noting that these interventions can often fail or expire. ‘The idea is to use the natural ability of our bodies to rebuild the tissue.’

The research forms part of the five-year EU-funded SHIFT [Shaping Innovative Designs for Sustainable Tissue Engineering Products] project that Motta coordinates, which includes universities in Europe, as well as partners in Asia and Australia. Running until 2026, the research team aim to scale up methods for regenerating skin, bone and cartilage using bio-based polymers and to get them ready for clinical trials. The goal is to make them capable of repairing larger wounds and tissue damage.

The research builds on work carried out under the earlier REMIX [Regenerative Medicine Innovation Crossing – Research and Innovation Staff Exchange in Regenerative Medicine] project, also funded by the EU, which made important advances in understanding the different ways in which these biomaterials could be used. 

Building a scaffold

Silk, for instance, can be used to form a “scaffold” in damaged tissue that then activates cells to form new tissue and blood vessels. The process could be used to treat conditions such as diabetic ulcers and lower back pain caused by spinal disc degeneration. The SHIFT team have been exploring minimally invasive procedures for treatment, such as hydrogels that can be applied directly to the skin, or injected into bone or cartilage.

The approaches using both silkworms and some of the marine organisms have great potential, said Motta. 

‘We have three or four systems with different materials that are really promising,’ she said. By the end of SHIFT, the goal is to have two or three prototypes that can be developed together with start-up and spin-off companies created in collaboration with the project. 

One of the SHIFT team’s guiding principles has been exploring how best to harness the concept of a circular economy. For example, they are looking into how waste products from the textile and food industries can be reused in these treatments.

Yet with complicated interactions at a microscale, and the need to prevent the body from rejecting foreign materials, such tissue engineering is a big challenge. 

‘The complexity is high because the nature of biology is not easy,’ said Motta. ‘We cannot change the language of the cells, but instead have to learn to speak the same language as them.’

But she firmly believes the nature-based rather than synthetic approach is the way to go and thinks treatments harnessing SHIFT’s methods could become available in the early 2030s. 

‘I believe in this approach,’ said Motta. ‘Bone designed by nature is the best bone we can have.’

Skin care

Another EU-funded project known as SkinTERM [Skin Tissue Engineering and Regenerative Medicine: From skin repair to regeneration], which runs for almost five years until mid-2025, is also looking at novel ways to get tissue to self-regenerate, focusing on skin. To treat burns and other surface wounds today, a thin layer of skin is sometimes grafted from another part of the body. This can cause the appearance of disfiguring scars and the patient’s mobility may be impacted when the tissue contracts as it heals. Current skin-grafting methods can also be painful.

The SkinTERM team are therefore investigating how inducing the healing process in the networks of cells surrounding a wound might enable skin to repair itself. 

‘We could do much better if we move towards regeneration,’ said Dr Willeke Daamen, who coordinates SkinTERM as a researcher in soft tissue regeneration at Radboud University in Nijmegen, the Netherlands. ‘The ultimate goal would be to get the same situation before and after being wounded.’

Researchers are studying a particular mammal – the spiny mouse – which has a remarkable ability to heal without scarring. It is able to self-repair damage to other tissues like the heart and spinal cord too. This is also true of early foetal skin.

The team are examining these systems to learn more about how they work and the processes occurring in the area around cells, known as the extracellular matrix. They hope to identify factors that might have a role in the regenerative process, and test how it might be induced in humans. 

Kick-start

‘We’ve been trying to learn from those systems on how to kick-start such processes,’ said Daamen. ‘We’ve made progress in what kinds of compounds seem at least in part to be responsible for a regenerative response.’

Many lines of research are being carried out among a new generation of multidisciplinary scientists being trained in this area, and a lot has already been achieved, said Daamen.

They have managed to create scaffolds using different components related to skin regeneration, such as the proteins collagen and elastin. They have also collected a vast amount of data on genes and proteins with potential roles in regeneration. Their role will be further tested by using them on scar-prone cells cultured on collagen scaffolds.

‘The mechanisms are complex,’ said Dr Bouke Boekema, a senior researcher at the Association of Dutch Burn Centres in Beverwijk, the Netherlands, and vice-coordinator of SkinTERM. 

‘If you find a mechanism, the idea is that maybe you can tune it so that you can stimulate it. But there’s not necessarily one magic bullet.’

By the end of the project next year, Boekema hopes the research could result in some medical biomaterial options to test for clinical use. ‘It would be nice if several prototypes were available for testing to see if they improve outcomes in patients.’

Research in this article was funded by the Marie Skłodowska-Curie Actions (MSCA). The views of the interviewees don’t necessarily reflect those of the European Commission.

Interesting. Over these last few months, I’ve been stumbling across more than my usual number of regenerative medicine stories.

Highlights from Simon Fraser University’s (SFU) June 2024 Metacreation Lab newsletter

The latest newsletter from the Metacreation Lab for Creative AI (at Simon Fraser University [SFU]), features a ‘first’. From the June 2024 Metacreation Lab newsletter (received via email),

“Longing + Forgetting” at the 2024 Currents New Media Festival in Santa Fe

We are thrilled to announce that Longing + Forgetting has been invited to the esteemed Currents New Media Festival in Santa Fe, New Mexico. Longing + Forgetting is a generative audio-video installation that explores the relationship between humans and machines. This media art project, created by Canadian artists Philippe Pasquier and Thecla Schiphorst alongside Australian artist Matt Gingold, has garnered international acclaim since its inception. Initially presented in Canada in 2013, the piece has journeyed through multiple international festivals, captivating audiences with its exploration of human expression through movement.

Philippe Pasquier will be on-site for the festival, overseeing the site-specific installation at El Museo Cultural de Santa Fe. This marks the North American premiere of the redeveloped version of “Longing + Forgetting,” featuring a new soundtrack by Pasquier based solely on the close-mic recording of dancers.

Currents New Media Festival runs June 14–23, 2024 and brings together the work of established and emerging new media artists from around the world across various disciplines, with an expected 9,000 visitors during the festival’s run.

Discover “Longing + Forgetting” at Bunjil Place in Melbourne

We are excited to announce that “Longing + Forgetting” is being featured at Bunjil Place in Melbourne, Australia. As part of the Art After Dark Program curated by Angela Barnett, this outdoor screening will run from June 1 to June 28, illuminating the night from 5 pm to 7 pm.

Presenting “Unveiling New Artistic Dimensions in Calligraphic Arabic Script with GANs” at SIGGRAPH 2024

We are pleased to share that our paper, “Unveiling New Artistic Dimensions in Calligraphic Arabic Script with Generative Adversarial Networks,” will be presented at SIGGRAPH 2024, the premier conference on computer graphics and interactive techniques. The event will take place from July 28 to August 1, 2024, in Denver, Colorado.

This paper delves into the artistic potential of Generative Adversarial Networks (GANs) to create and innovate within the realm of calligraphic Arabic script, particularly the nastaliq style. By developing two custom datasets and leveraging the StyleGAN2-ada architecture, we have generated high-quality, stylistically coherent calligraphic samples. Our work bridges the gap between traditional calligraphy and modern technology and offers a new mode of creative expression for this artform.

For those unfamiliar with the acronym, SIGGRAPH stands for Special Interest Group on Computer Graphics and Interactive Techniques. It’s a special interest group (SIG) of the ACM (Association for Computing Machinery), and its annual conference is huge.

If memory serves, this is the first time I’ve seen the Metacreation Lab make a request for volunteers, from the June 2024 Metacreation Lab newsletter,

Are you interested in music-making and AI technology?

The Metacreation Lab for Creative AI at Simon Fraser University (SFU) is conducting a research study in partnership with Steinberg Media Technologies GmbH. We are testing and evaluating MMM-Cubase v2, a creative AI system for assisting with music composition. The system is based on our best music transformer, the multitrack music machine (MMM), which can generate, re-generate or complete new musical content based on existing content.

There are no prerequisites for this study beyond a basic knowledge of DAWs (digital audio workstations) and MIDI, so everyone is welcome even if you do not consider yourself a composer but are interested in trying the system. The entire study should take you around 3 hours, and you must be 19+ years old. Basic interest in and familiarity with digital music composition will help, but no experience with making music is required.

We seek to better evaluate the potential for adoption of such systems for novice/beginner as well as for seasoned composers. More specifically, you will be asked to install and use the system to compose a short 4-track musical composition and to fill out a survey questionnaire at the end.

Participation in this study is rewarded with one free Steinberg software license of your choice among Cubase Element, Dorico Element or Wavelab Element.

For any question or further inquiry, please contact researcher Renaud Bougueng Tchemeube directly at rbouguen@sfu.ca.

You can find the Metacreation Lab for Creative AI website here.

Portable and non-invasive (?) mind-reading AI (artificial intelligence) turns thoughts into text and some thoughts about the near future

First, here’s some of the latest research. If by ‘non-invasive’ you mean that electrodes are not being implanted in your brain, then this December 12, 2023 University of Technology Sydney (UTS) press release (also on EurekAlert) highlights non-invasive mind-reading AI via a brain-computer interface (BCI), Note: Links have been removed,

In a world-first, researchers from the GrapheneX-UTS Human-centric Artificial Intelligence Centre at the University of Technology Sydney (UTS) have developed a portable, non-invasive system that can decode silent thoughts and turn them into text. 

The technology could aid communication for people who are unable to speak due to illness or injury, including stroke or paralysis. It could also enable seamless communication between humans and machines, such as the operation of a bionic arm or robot.

The study has been selected as the spotlight paper at the NeurIPS conference, a top-tier annual meeting that showcases world-leading research on artificial intelligence and machine learning, held in New Orleans on 12 December 2023.

The research was led by Distinguished Professor CT Lin, Director of the GrapheneX-UTS HAI Centre, together with first author Yiqun Duan and fellow PhD candidate Jinzhou Zhou from the UTS Faculty of Engineering and IT.

In the study participants silently read passages of text while wearing a cap that recorded electrical brain activity through their scalp using an electroencephalogram (EEG). A demonstration of the technology can be seen in this video [See UTS press release].

The EEG wave is segmented into distinct units that capture specific characteristics and patterns from the human brain. This is done by an AI model called DeWave developed by the researchers. DeWave translates EEG signals into words and sentences by learning from large quantities of EEG data. 

“This research represents a pioneering effort in translating raw EEG waves directly into language, marking a significant breakthrough in the field,” said Distinguished Professor Lin.

“It is the first to incorporate discrete encoding techniques in the brain-to-text translation process, introducing an innovative approach to neural decoding. The integration with large language models is also opening new frontiers in neuroscience and AI,” he said.

Previous technology to translate brain signals to language has either required surgery to implant electrodes in the brain, such as Elon Musk’s Neuralink [emphasis mine], or scanning in an MRI machine, which is large, expensive, and difficult to use in daily life.

These methods also struggle to transform brain signals into word-level segments without additional aids such as eye-tracking, which restricts the practical application of these systems. The new technology can be used either with or without eye-tracking.

The UTS research was carried out with 29 participants. This means it is likely to be more robust and adaptable than previous decoding technology that has only been tested on one or two individuals, because EEG waves differ between individuals. 

The use of EEG signals received through a cap, rather than from electrodes implanted in the brain, means that the signal is noisier. In terms of EEG translation, however, the study reported state-of-the-art performance, surpassing previous benchmarks.

“The model is more adept at matching verbs than nouns. However, when it comes to nouns, we saw a tendency towards synonymous pairs rather than precise translations, such as ‘the man’ instead of ‘the author’,” said Duan. [emphases mine; synonymous, eh? what about ‘woman’ or ‘child’ instead of the ‘man’?]

“We think this is because when the brain processes these words, semantically similar words might produce similar brain wave patterns. Despite the challenges, our model yields meaningful results, aligning keywords and forming similar sentence structures,” he said.

The translation accuracy score is currently around 40% on BLEU-1. The BLEU score is a number between zero and one that measures the similarity of the machine-translated text to a set of high-quality reference translations. The researchers hope to see this improve to a level that is comparable to traditional language translation or speech recognition programs, which is closer to 90%.

The research follows on from previous brain-computer interface technology developed by UTS in association with the Australian Defence Force [ADF] that uses brainwaves to command a quadruped robot, which is demonstrated in this ADF video [See my June 13, 2023 posting, “Mind-controlled robots based on graphene: an Australian research story” for the story and embedded video].
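A quick aside on the BLEU-1 score mentioned in the press release: BLEU-1 boils down to clipped unigram precision (how many decoded words also appear in the reference, with repeats capped) multiplied by a brevity penalty. Here’s a minimal sketch of the calculation; it’s my own illustration, not the UTS team’s evaluation code, and the toy sentences echo the ‘the man’/‘the author’ example above,

```python
# Minimal sketch of BLEU-1 (clipped unigram precision with a brevity penalty),
# the metric quoted in the UTS press release. Illustrative only; this is not
# the DeWave authors' evaluation code.
import math
from collections import Counter

def bleu1(candidate: str, reference: str) -> float:
    cand = candidate.lower().split()
    ref = reference.lower().split()
    if not cand:
        return 0.0
    cand_counts, ref_counts = Counter(cand), Counter(ref)
    # Clipped matches: each reference word is only credited as many times as
    # it actually occurs in the reference.
    clipped = sum(min(cnt, ref_counts[w]) for w, cnt in cand_counts.items())
    precision = clipped / len(cand)
    # Brevity penalty discourages trivially short outputs.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

# Toy example in the spirit of the 'synonymous pairs' issue described above.
print(bleu1("the man wrote the book", "the author wrote the book"))  # 0.8
```

Roughly speaking, then, the 40% figure reported above means a bit under half of the decoded words line up with the reference text.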

About one month after the research announcement regarding the University of Technology Sydney’s ‘non-invasive’ brain-computer interface (BCI), I stumbled across an in-depth piece about the field of ‘non-invasive’ mind-reading research.

Neurotechnology and neurorights

Fletcher Reveley’s January 18, 2024 article on salon.com (originally published January 3, 2024 on Undark) shows how quickly the field is developing and raises concerns, Note: Links have been removed,

One afternoon in May 2020, Jerry Tang, a Ph.D. student in computer science at the University of Texas at Austin, sat staring at a cryptic string of words scrawled across his computer screen:

“I am not finished yet to start my career at twenty without having gotten my license I never have to pull out and run back to my parents to take me home.”

The sentence was jumbled and agrammatical. But to Tang, it represented a remarkable feat: A computer pulling a thought, however disjointed, from a person’s mind.

For weeks, ever since the pandemic had shuttered his university and forced his lab work online, Tang had been at home tweaking a semantic decoder — a brain-computer interface, or BCI, that generates text from brain scans. Prior to the university’s closure, study participants had been providing data to train the decoder for months, listening to hours of storytelling podcasts while a functional magnetic resonance imaging (fMRI) machine logged their brain responses. Then, the participants had listened to a new story — one that had not been used to train the algorithm — and those fMRI scans were fed into the decoder, which used GPT1, a predecessor to the ubiquitous AI chatbot ChatGPT, to spit out a text prediction of what it thought the participant had heard. For this snippet, Tang compared it to the original story:

“Although I’m twenty-three years old I don’t have my driver’s license yet and I just jumped out right when I needed to and she says well why don’t you come back to my house and I’ll give you a ride.”

The decoder was not only capturing the gist of the original, but also producing exact matches of specific words — twenty, license. When Tang shared the results with his adviser, a UT Austin neuroscientist named Alexander Huth who had been working towards building such a decoder for nearly a decade, Huth was floored. “Holy shit,” Huth recalled saying. “This is actually working.” By the fall of 2021, the scientists were testing the device with no external stimuli at all — participants simply imagined a story and the decoder spat out a recognizable, albeit somewhat hazy, description of it. “What both of those experiments kind of point to,” said Huth, “is the fact that what we’re able to read out here was really like the thoughts, like the idea.”

The scientists brimmed with excitement over the potentially life-altering medical applications of such a device — restoring communication to people with locked-in syndrome, for instance, whose near full-body paralysis made talking impossible. But just as the potential benefits of the decoder snapped into focus, so too did the thorny ethical questions posed by its use. Huth himself had been one of the three primary test subjects in the experiments, and the privacy implications of the device now seemed visceral: “Oh my god,” he recalled thinking. “We can look inside my brain.”

Huth’s reaction mirrored a longstanding concern in neuroscience and beyond: that machines might someday read people’s minds. And as BCI technology advances at a dizzying clip, that possibility and others like it — that computers of the future could alter human identities, for example, or hinder free will — have begun to seem less remote. “The loss of mental privacy, this is a fight we have to fight today,” said Rafael Yuste, a Columbia University neuroscientist. “That could be irreversible. If we lose our mental privacy, what else is there to lose? That’s it, we lose the essence of who we are.”

Spurred by these concerns, Yuste and several colleagues have launched an international movement advocating for “neurorights” — a set of five principles Yuste argues should be enshrined in law as a bulwark against potential misuse and abuse of neurotechnology. But he may be running out of time.

Reveley’s January 18, 2024 article provides fascinating context and is well worth reading if you have the time.

For my purposes, I’m focusing on ethics, Note: Links have been removed,

… as these and other advances propelled the field forward, and as his own research revealed the discomfiting vulnerability of the brain to external manipulation, Yuste found himself increasingly concerned by the scarce attention being paid to the ethics of these technologies. Even Obama’s multi-billion-dollar BRAIN Initiative, a government program designed to advance brain research, which Yuste had helped launch in 2013 and supported heartily, seemed to mostly ignore the ethical and societal consequences of the research it funded. “There was zero effort on the ethical side,” Yuste recalled.

Yuste was appointed to the rotating advisory group of the BRAIN Initiative in 2015, where he began to voice his concerns. That fall, he joined an informal working group to consider the issue. “We started to meet, and it became very evident to me that the situation was a complete disaster,” Yuste said. “There was no guidelines, no work done.” Yuste said he tried to get the group to generate a set of ethical guidelines for novel BCI technologies, but the effort soon became bogged down in bureaucracy. Frustrated, he stepped down from the committee and, together with a University of Washington bioethicist named Sara Goering, decided to independently pursue the issue. “Our aim here is not to contribute to or feed fear for doomsday scenarios,” the pair wrote in a 2016 article in Cell, “but to ensure that we are reflective and intentional as we prepare ourselves for the neurotechnological future.”

In the fall of 2017, Yuste and Goering called a meeting at the Morningside Campus of Columbia, inviting nearly 30 experts from all over the world in such fields as neurotechnology, artificial intelligence, medical ethics, and the law. By then, several other countries had launched their own versions of the BRAIN Initiative, and representatives from Australia, Canada [emphasis mine], China, Europe, Israel, South Korea, and Japan joined the Morningside gathering, along with veteran neuroethicists and prominent researchers. “We holed ourselves up for three days to study the ethical and societal consequences of neurotechnology,” Yuste said. “And we came to the conclusion that this is a human rights issue. These methods are going to be so powerful, that enable to access and manipulate mental activity, and they have to be regulated from the angle of human rights. That’s when we coined the term ‘neurorights.’”

The Morningside group, as it became known, identified four principal ethical priorities, which were later expanded by Yuste into five clearly defined neurorights: The right to mental privacy, which would ensure that brain data would be kept private and its use, sale, and commercial transfer would be strictly regulated; the right to personal identity, which would set boundaries on technologies that could disrupt one’s sense of self; the right to fair access to mental augmentation, which would ensure equality of access to mental enhancement neurotechnologies; the right of protection from bias in the development of neurotechnology algorithms; and the right to free will, which would protect an individual’s agency from manipulation by external neurotechnologies. The group published their findings in an often-cited paper in Nature.

But while Yuste and the others were focused on the ethical implications of these emerging technologies, the technologies themselves continued to barrel ahead at a feverish speed. In 2014, the first kick of the World Cup was made by a paraplegic man using a mind-controlled robotic exoskeleton. In 2016, a man fist bumped Obama using a robotic arm that allowed him to “feel” the gesture. The following year, scientists showed that electrical stimulation of the hippocampus could improve memory, paving the way for cognitive augmentation technologies. The military, long interested in BCI technologies, built a system that allowed operators to pilot three drones simultaneously, partially with their minds. Meanwhile, a confusing maelstrom of science, science-fiction, hype, innovation, and speculation swept the private sector. By 2020, over $33 billion had been invested in hundreds of neurotech companies — about seven times what the NIH [US National Institutes of Health] had envisioned for the 12-year span of the BRAIN Initiative itself.

Now back to Tang and Huth (from Reveley’s January 18, 2024 article), Note: Links have been removed,

Central to the ethical questions Huth and Tang grappled with was the fact that their decoder, unlike other language decoders developed around the same time, was non-invasive — it didn’t require its users to undergo surgery. Because of that, their technology was free from the strict regulatory oversight that governs the medical domain. (Yuste, for his part, said he believes non-invasive BCIs pose a far greater ethical challenge than invasive systems: “The non-invasive, the commercial, that’s where the battle is going to get fought.”) Huth and Tang’s decoder faced other hurdles to widespread use — namely that fMRI machines are enormous, expensive, and stationary. But perhaps, the researchers thought, there was a way to overcome that hurdle too.

The information measured by fMRI machines — blood oxygenation levels, which indicate where blood is flowing in the brain — can also be measured with another technology, functional Near-Infrared Spectroscopy, or fNIRS. Although lower resolution than fMRI, several expensive, research-grade, wearable fNIRS headsets do approach the resolution required to work with Huth and Tang’s decoder. In fact, the scientists were able to test whether their decoder would work with such devices by simply blurring their fMRI data to simulate the resolution of research-grade fNIRS. The decoded result “doesn’t get that much worse,” Huth said.

And while such research-grade devices are currently cost-prohibitive for the average consumer, more rudimentary fNIRS headsets have already hit the market. Although these devices provide far lower resolution than would be required for Huth and Tang’s decoder to work effectively, the technology is continually improving, and Huth believes it is likely that an affordable, wearable fNIRS device will someday provide high enough resolution to be used with the decoder. In fact, he is currently teaming up with scientists at Washington University to research the development of such a device.

Even comparatively primitive BCI headsets can raise pointed ethical questions when released to the public. Devices that rely on electroencephalography, or EEG, a commonplace method of measuring brain activity by detecting electrical signals, have now become widely available — and in some cases have raised alarm. In 2019, a school in Jinhua, China, drew criticism after trialing EEG headbands that monitored the concentration levels of its pupils. (The students were encouraged to compete to see who concentrated most effectively, and reports were sent to their parents.) Similarly, in 2018 the South China Morning Post reported that dozens of factories and businesses had begun using “brain surveillance devices” to monitor workers’ emotions, in the hopes of increasing productivity and improving safety. The devices “caused some discomfort and resistance in the beginning,” Jin Jia, then a brain scientist at Ningbo University, told the reporter. “After a while, they got used to the device.”

But the primary problem with even low-resolution devices is that scientists are only just beginning to understand how information is actually encoded in brain data. In the future, powerful new decoding algorithms could discover that even raw, low-resolution EEG data contains a wealth of information about a person’s mental state at the time of collection. Consequently, nobody can definitively know what they are giving away when they allow companies to collect information from their brains.

Huth and Tang concluded that brain data, therefore, should be closely guarded, especially in the realm of consumer products. In an article on Medium from last April, Tang wrote that “decoding technology is continually improving, and the information that could be decoded from a brain scan a year from now may be very different from what can be decoded today. It is crucial that companies are transparent about what they intend to do with brain data and take measures to ensure that brain data is carefully protected.” (Yuste said the Neurorights Foundation recently surveyed the user agreements of 30 neurotech companies and found that all of them claim ownership of users’ brain data — and most assert the right to sell that data to third parties. [emphases mine]) Despite these concerns, however, Huth and Tang maintained that the potential benefits of these technologies outweighed their risks, provided the proper guardrails [emphasis mine] were put in place.
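The fMRI-blurring test Reveley describes a few paragraphs back is conceptually simple: spatially smooth the high-resolution scan until it approximates what a lower-resolution device like fNIRS would capture, then see whether the decoder still works. Here’s a minimal sketch of that general idea; it is not Huth and Tang’s actual code, and the kernel width and voxel size are placeholders I’ve picked for illustration,

```python
# Illustrative sketch of the general idea described in Reveley's article:
# smoothing an fMRI volume to mimic a lower-resolution sensor such as fNIRS.
# This is not Huth and Tang's code; the kernel width and voxel size below are
# arbitrary placeholders, not values reported in the research.
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_lower_resolution(volume: np.ndarray, sigma_mm: float,
                              voxel_size_mm: float) -> np.ndarray:
    """Blur a 3D brain volume with a Gaussian kernel expressed in millimetres."""
    sigma_voxels = sigma_mm / voxel_size_mm
    return gaussian_filter(volume, sigma=sigma_voxels)

# Hypothetical usage on a random stand-in volume (64 x 64 x 40 voxels, 2 mm each).
volume = np.random.rand(64, 64, 40)
blurred = simulate_lower_resolution(volume, sigma_mm=8.0, voxel_size_mm=2.0)
print(volume.shape, blurred.shape)
```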

It would seem the first guardrails are being set up in South America (from Reveley’s January 18, 2024 article), Note: Links have been removed,

On a hot summer night in 2019, Yuste sat in the courtyard of an adobe hotel in the north of Chile with his close friend, the prominent Chilean doctor and then-senator Guido Girardi, observing the vast, luminous skies of the Atacama Desert and discussing, as they often did, the world of tomorrow. Girardi, who every year organizes the Congreso Futuro, Latin America’s preeminent science and technology event, had long been intrigued by the accelerating advance of technology and its paradigm-shifting impact on society — “living in the world at the speed of light,” as he called it. Yuste had been a frequent speaker at the conference, and the two men shared a conviction that scientists were birthing technologies powerful enough to disrupt the very notion of what it meant to be human.

Around midnight, as Yuste finished his pisco sour, Girardi made an intriguing proposal: What if they worked together to pass an amendment to Chile’s constitution, one that would enshrine protections for mental privacy as an inviolable right of every Chilean? It was an ambitious idea, but Girardi had experience moving bold pieces of legislation through the senate; years earlier he had spearheaded Chile’s famous Food Labeling and Advertising Law, which required companies to affix health warning labels on junk food. (The law has since inspired dozens of countries to pursue similar legislation.) With BCI, here was another chance to be a trailblazer. “I said to Rafael, ‘Well, why don’t we create the first neuro data protection law?’” Girardi recalled. Yuste readily agreed.

… Girardi led the political push, promoting a piece of legislation that would amend Chile’s constitution to protect mental privacy. The effort found surprising purchase across the political spectrum, a remarkable feat in a country famous for its political polarization. In 2021, Chile’s congress unanimously passed the constitutional amendment, which Piñera [Sebastián Piñera] swiftly signed into law. (A second piece of legislation, which would establish a regulatory framework for neurotechnology, is currently under consideration by Chile’s congress.) “There was no divide between the left or right,” recalled Girardi. “This was maybe the only law in Chile that was approved by unanimous vote.” Chile, then, had become the first country in the world to enshrine “neurorights” in its legal code.

Even before the passage of the Chilean constitutional amendment, Yuste had begun meeting regularly with Jared Genser, an international human rights lawyer who had represented such high-profile clients as Desmond Tutu, Liu Xiaobo, and Aung San Suu Kyi. (The New York Times Magazine once referred to Genser as “the extractor” for his work with political prisoners.) Yuste was seeking guidance on how to develop an international legal framework to protect neurorights, and Genser, though he had just a cursory knowledge of neurotechnology, was immediately captivated by the topic. “It’s fair to say he blew my mind in the first hour of discussion,” recalled Genser. Soon thereafter, Yuste, Genser, and a private-sector entrepreneur named Jamie Daves launched the Neurorights Foundation, a nonprofit whose first goal, according to its website, is “to protect the human rights of all people from the potential misuse or abuse of neurotechnology.”

To accomplish this, the organization has sought to engage all levels of society, from the United Nations and regional governing bodies like the Organization of American States, down to national governments, the tech industry, scientists, and the public at large. Such a wide-ranging approach, said Genser, “is perhaps insanity on our part, or grandiosity. But nonetheless, you know, it’s definitely the Wild West as it comes to talking about these issues globally, because so few people know about where things are, where they’re heading, and what is necessary.”

This general lack of knowledge about neurotech, in all strata of society, has largely placed Yuste in the role of global educator — he has met several times with U.N. Secretary-General António Guterres, for example, to discuss the potential dangers of emerging neurotech. And these efforts are starting to yield results. Guterres’s 2021 report, “Our Common Agenda,” which sets forth goals for future international cooperation, urges “updating or clarifying our application of human rights frameworks and standards to address frontier issues,” such as “neuro-technology.” Genser attributes the inclusion of this language in the report to Yuste’s advocacy efforts.

But updating international human rights law is difficult, and even within the Neurorights Foundation there are differences of opinion regarding the most effective approach. For Yuste, the ideal solution would be the creation of a new international agency, akin to the International Atomic Energy Agency — but for neurorights. “My dream would be to have an international convention about neurotechnology, just like we had one about atomic energy and about certain things, with its own treaty,” he said. “And maybe an agency that would essentially supervise the world’s efforts in neurotechnology.”

Genser, however, believes that a new treaty is unnecessary, and that neurorights can be codified most effectively by extending interpretation of existing international human rights law to include them. The International Covenant on Civil and Political Rights, for example, already ensures the general right to privacy, and an updated interpretation of the law could conceivably clarify that that clause extends to mental privacy as well.

There is no need for immediate panic (from Reveley’s January 18, 2024 article),

… while Yuste and the others continue to grapple with the complexities of international and national law, Huth and Tang have found that, for their decoder at least, the greatest privacy guardrails come not from external institutions but rather from something much closer to home — the human mind itself. Following the initial success of their decoder, as the pair read widely about the ethical implications of such a technology, they began to think of ways to assess the boundaries of the decoder’s capabilities. “We wanted to test a couple kind of principles of mental privacy,” said Huth. Simply put, they wanted to know if the decoder could be resisted.

In late 2021, the scientists began to run new experiments. First, they were curious if an algorithm trained on one person could be used on another. They found that it could not — the decoder’s efficacy depended on many hours of individualized training. Next, they tested whether the decoder could be thrown off simply by refusing to cooperate with it. Instead of focusing on the story that was playing through their headphones while inside the fMRI machine, participants were asked to complete other mental tasks, such as naming random animals, or telling a different story in their head. “Both of those rendered it completely unusable,” Huth said. “We didn’t decode the story they were listening to, and we couldn’t decode anything about what they were thinking either.”

Given how quickly this field of research is progressing, it seems like a good idea to increase efforts to establish neurorights (from Reveley’s January 18, 2024 article),

For Yuste, however, technologies like Huth and Tang’s decoder may only mark the beginning of a mind-boggling new chapter in human history, one in which the line between human brains and computers will be radically redrawn — or erased completely. A future is conceivable, he said, where humans and computers fuse permanently, leading to the emergence of technologically augmented cyborgs. “When this tsunami hits us I would say it’s not likely it’s for sure that humans will end up transforming themselves — ourselves — into maybe a hybrid species,” Yuste said. He is now focused on preparing for this future.

In the last several years, Yuste has traveled to multiple countries, meeting with a wide assortment of politicians, supreme court justices, U.N. committee members, and heads of state. And his advocacy is beginning to yield results. In August, Mexico began considering a constitutional reform that would establish the right to mental privacy. Brazil is currently considering a similar proposal, while Spain, Argentina, and Uruguay have also expressed interest, as has the European Union. In September [2023], neurorights were officially incorporated into Mexico’s digital rights charter, while in Chile, a landmark Supreme Court ruling found that Emotiv Inc, a company that makes a wearable EEG headset, violated Chile’s newly minted mental privacy law. That suit was brought by Yuste’s friend and collaborator, Guido Girardi.

“This is something that we should take seriously,” he [Huth] said. “Because even if it’s rudimentary right now, where is that going to be in five years? What was possible five years ago? What’s possible now? Where’s it gonna be in five years? Where’s it gonna be in 10 years? I think the range of reasonable possibilities includes things that are — I don’t want to say like scary enough — but like dystopian enough that I think it’s certainly a time for us to think about this.”

You can find The Neurorights Foundation here and/or read Reveley’s January 18, 2024 article on salon.com or as originally published January 3, 2024 on Undark. Finally, thank you for the article, Fletcher Reveley!

The sound of dirt

So you don’t get your hopes up, this acoustic story doesn’t offer any accompanying audio/acoustic files, i.e., I couldn’t find the sound of dirt.

In any event, there’s still an interesting story in an April 10, 2023 news item on phys.org,

U.K. and Australian ecologists have used audio technology to record different types of sounds in the soils of a degraded and restored forest to indicate the health of ecosystems.

Non-invasive acoustic monitoring has great potential for scientists to gather long-term information on species and their abundance, says Flinders University [Australia] researcher Dr. Jake Robinson, who conducted the study while at the University of Sheffield in England.

An April 8, 2023 Flinders University press release, which originated the news item, delves into the researcher’s work, Note: Links have been removed,

“Eco-acoustics can measure the health of landscapes affected by farming, mining and deforestation but can also monitor their recovery following revegetation,” he says.

“From earthworms and plant roots to shifting soils and other underground activity, these subtle sounds were stronger and more diverse in healthy soils – once background noise was blocked out.”   

The subterranean study used special microphones to collect almost 200 sound samples, each about three minutes long, from soil samples collected in restored and cleared forests in South Yorkshire, England. 

“Like underwater and above-ground acoustic monitoring, below-ground biodiversity monitoring using eco-acoustics has great potential,” says Flinders University co-author, Associate Professor Martin Breed. 

Since joining Flinders University, Dr Robinson has released his first book, entitled Invisible Friends (DOI: 10.53061/NZYJ2969) [emphasis mine], which covers his core research into ‘how microbes in the environment shape our lives and the world around us’. 

Now a researcher in restoration genomics at the College of Science and Engineering at Flinders University, the new book examines the powerful role invisible microbes play in ecology, immunology, psychology, forensics and even architecture.  

“Instead of considering microbes the bane of our life, as we have done during the global pandemic, we should appreciate the many benefits they bring in keeping plants, animals, and ourselves, alive.”

In another new article, Dr Robinson and colleagues call for a return to ‘nature play’ for children [emphasis mine] to expose their developing immune systems to a diverse array of microbes at a young age for better long-term health outcomes. 

“Early childhood settings should optimise both outdoor and indoor environments for enhanced exposure to diverse microbiomes for social, cognitive and physiological health,” the researchers say.  

“It’s important to remember that healthy soils feed the air with these diverse microbes,” Dr Robinson adds.  

It seems Robinson has gone on a publicity blitz, academic style, for his book. There’s a May 22, 2023 essay on The Conversation by Robinson; Carlos Abrahams (Senior Lecturer in Environmental Biology and Director of Bioacoustics, Nottingham Trent University); and Martin Breed (Associate Professor in Biology, Flinders University), Note: A link has been removed,

Nurturing a forest ecosystem back to life after it’s been logged is not always easy.

It can take a lot of hard work and careful monitoring to ensure biodiversity thrives again. But monitoring biodiversity can be costly, intrusive and resource-intensive. That’s where ecological acoustic survey methods, or “ecoacoustics”, come into play.

Indeed, the planet sings. Think of birds calling, bats echolocating, tree leaves fluttering in the breeze, frogs croaking and bush crickets stridulating. We live in a euphonious theatre of life.

Even the creatures in the soil beneath our feet emit unique vibrations as they navigate through the earth to commute, hunt, feed and mate.

Robinson has published three papers within five months of each other, in addition to the book, which seems like heavy output to me.

First, here’s a link to and a citation for the education paper,

Optimising Early Childhood Educational Settings for Health Using Nature-Based Solutions: The Microbiome Aspect by Jake M. Robinson and Alexia Barrable. Educ. Sci. 2023, 13 (2), 211 DOI: https://doi.org/10.3390/educsci13020211 Published: 16 February 2023

This is an open access paper.

For these two links and citations, the articles are very closely linked; the first appears to be a preprint of the second,

The sound of restored soil: Measuring soil biodiversity in a forest restoration chronosequence with ecoacoustics by Jake M. Robinson, Martin F. Breed, Carlos Abrahams. doi: https://doi.org/10.1101/2023.01.23.525240 Posted January 23, 2023

The sound of restored soil: using ecoacoustics to measure soil biodiversity in a temperate forest restoration context by Jake M. Robinson, Martin F. Breed, Carlos Abrahams. Restoration Ecology, Online Version of Record before inclusion in an issue e13934 DOI: https://doi.org/10.1111/rec.13934 First published: 22 May 2023

Both links lead to open access papers.

Finally, there’s the book,

Invisible Friends; How Microbes Shape Our Lives and the World Around Us by Jake Robinson. Pelagic Publishing, 2022. ISBN 9781784274337 DOI: 10.53061/NZYJ2969

This you have to pay for.

For those who would like to hear something from nature, I have a May 27, 2022 posting, The sound of the mushroom. Enjoy!

Mind-controlled robots based on graphene: an Australian research story

As they keep saying these days, ‘it’s not science fiction anymore’.

The video of the demonstration is so fascinating I almost forgot what it’s like to make a video, where it can take hours of footage to get a few minutes (this one is a little over 3 mins.) and all the failures are edited out. Plus, I haven’t found any information about training either the human users or the robotic dogs/quadrupeds. Does it take minutes? hours? days? more? Can you work with any old robotic dog/quadruped or does it have to be the one you’ve ‘gotten to know’? Etc. Bottom line: I don’t know if I can take what I see in the video at face value.

A March 20, 2023 news item on Nanowerk announces the work from Australia,

The advanced brain-computer interface [BCI] was developed by Distinguished Professor Chin-Teng Lin and Professor Francesca Iacopi, from the UTS [University of Technology Sydney; Australia] Faculty of Engineering and IT, in collaboration with the Australian Army and Defence Innovation Hub.

As well as defence applications, the technology has significant potential in fields such as advanced manufacturing, aerospace and healthcare – for example, allowing people with a disability to control a wheelchair or operate prosthetics.

“The hands-free, voice-free technology works outside laboratory settings, anytime, anywhere. It makes interfaces such as consoles, keyboards, touchscreens and hand-gesture recognition redundant,” said Professor Iacopi.

A March 20, 2023 University of Technology Sydney (UTS) press release, also on EurekAlert but published March 19, 2023, which originated the news item, describes the interface in more detail,

“By using cutting edge graphene material, combined with silicon, we were able to overcome issues of corrosion, durability and skin contact resistance, to develop the wearable dry sensors,” she said.

A new study outlining the technology has just been published in the peer-reviewed journal ACS Applied Nano Materials. It shows that the graphene sensors developed at UTS are very conductive, easy to use and robust.

The hexagon patterned sensors are positioned over the back of the scalp, to detect brainwaves from the visual cortex. The sensors are resilient to harsh conditions so they can be used in extreme operating environments.

The user wears a head-mounted augmented reality lens which displays white flickering squares. By concentrating on a particular square, the brainwaves of the operator are picked up by the biosensor, and a decoder translates the signal into commands.

The technology was recently demonstrated by the Australian Army, where soldiers operated a Ghost Robotics quadruped robot using the brain-machine interface [BMI]. The device allowed hands-free command of the robotic dog with up to 94% accuracy.

“Our technology can issue at least nine commands in two seconds. This means we have nine different kinds of commands and the operator can select one from those nine within that time period,” Professor Lin said.

“We have also explored how to minimise noise from the body and environment to get a clearer signal from an operator’s brain,” he said.

The researchers believe the technology will be of interest to the scientific community, industry and government, and hope to continue making advances in brain-computer interface systems.
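The press release doesn’t spell out how the decoder works, but flickering squares viewed through a lens, with sensors over the visual cortex, is characteristic of a frequency-tagged (SSVEP-style) interface: each square flickers at its own rate and the decoder looks for which rate dominates the recorded brainwaves. Here is a minimal sketch of that idea in Python; the sampling rate, flicker frequencies, channel assumptions and decision rule are my own illustrative choices, not details of the UTS system.

# Illustrative sketch only: a toy frequency-tagged (SSVEP-style) decoder,
# not the decoder described in the UTS/Australian Army work.
import numpy as np

FS = 256                                            # assumed sampling rate (Hz)
FLICKER_HZ = [8, 9, 10, 11, 12, 13, 14, 15, 16]     # nine squares -> nine commands

def _references(freq, n_samples):
    """Sine/cosine references at the flicker frequency and its second harmonic."""
    t = np.arange(n_samples) / FS
    return np.column_stack([
        np.sin(2 * np.pi * freq * t), np.cos(2 * np.pi * freq * t),
        np.sin(4 * np.pi * freq * t), np.cos(4 * np.pi * freq * t),
    ])

def decode_command(eeg_window):
    """Return the index (0-8) of the flicker frequency that best explains the signal.

    eeg_window: 1-D array of samples from a sensor over the visual cortex,
    e.g. two seconds of data at FS Hz.
    """
    scores = []
    for freq in FLICKER_HZ:
        refs = _references(freq, eeg_window.size)
        # Least-squares fit of the signal onto the references; the fraction of
        # variance explained is the score for this candidate frequency.
        coefs, _, _, _ = np.linalg.lstsq(refs, eeg_window, rcond=None)
        fitted = refs @ coefs
        scores.append(np.var(fitted) / (np.var(eeg_window) + 1e-12))
    return int(np.argmax(scores))

# Synthetic example: a two-second window dominated by a 10 Hz response
t = np.arange(2 * FS) / FS
fake_eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
print(decode_command(fake_eeg))                     # usually prints 2 (the 10 Hz square)

As a rough back-of-the-envelope figure, nine possible commands selected once every two seconds works out to log2(9)/2, or about 1.6 bits per second before errors; the 94 per cent accuracy quoted above would lower the effective rate somewhat.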

Here’s a link to and a citation for the paper,

Noninvasive Sensors for Brain–Machine Interfaces Based on Micropatterned Epitaxial Graphene by Shaikh Nayeem Faisal, Tien-Thong Nguyen Do, Tasauf Torzo, Daniel Leong, Aiswarya Pradeepkumar, Chin-Teng Lin, and Francesca Iacopi. ACS Appl. Nano Mater. 2023, 6, 7, 5440–5447 DOI: https://doi.org/10.1021/acsanm.2c05546 Publication Date: March 16, 2023 Copyright © 2023 The Authors. Published by American Chemical Society

This paper is open access.

Comments

For anyone who’s bothered by this, the terminology is fluid. Sometimes you’ll see brain-computer interface (BCI), sometimes human-computer interface or brain-machine interface (BMI) and, as I’ve now found in the video (though I notice the Australians are not hyphenating it), brain-robotic interface (BRI).

You can find Ghost Robotics here, the makers of the robotic ‘dog’.

There seems to be a movement to replace the word ‘soldiers’ with ‘warfighters’ and, according to this video, ‘military practitioners’. I wonder how medical doctors and other practitioners feel about the use of ‘practitioners’ in a military context.

Virtual panel discussion: Canadian Strategies for Responsible Neurotechnology Innovation on May 16, 2023

The Canadian Science Policy Centre (CSPC) sent a May 11, 2023 notice (via email) about an upcoming event but first, congratulations (Bravo!) are in order,

The Science Meets Parliament [SMP] Program 2023 is now complete and was a huge success. 43 Delegates from across Canada met with 62 Parliamentarians from across the political spectrum on the Hill on May 1-2, 2023.

The SMP Program is championed by CSPC and Canada’s Chief Science Advisor, Dr. Mona Nemer [through the Office of the Chief Science Advisor {OCSA}].

This Program would not have been possible without the generous support of our sponsors: The Royal Military College of Canada, The Stem Cell Network, and the University of British Columbia.

There are 443 seats in Canada’s Parliament (338 in the House of Commons and 105 in the Senate), and 2023 is the third time the SMP programme has been offered. (It was previously held in 2018 and 2022 according to the SMP program page.)

The Canadian programme is relatively new compared to Australia, where they’ve had a Science Meets Parliament programme since 1999 (according to a March 20, 2017 essay by Ken Baldwin, Director of the Energy Change Institute at Australian National University, for The Conversation). The Scottish have had a Science and the Parliament programme since 2000 (according to this 2022 event notice on the Royal Society of Chemistry’s website).

By comparison to the other two, the Canadian programme is a toddler. (We tend not to recognize walking for the major achievement it is.) So, bravo to the CSPC and OCSA on getting 62 Parliamentarians to make time in their schedules to meet a scientist.

Responsible neurotechnology innovation?

From the Canadian Strategies for Responsible Neurotechnology Innovation event page on the CSPC website,

Advances in neurotechnology are redefining the possibilities of improving neurologic health and mental wellbeing, but related ethical, legal, and societal concerns such as privacy of brain data, manipulation of personal autonomy and agency, and non-medical and dual uses are increasingly pressing concerns [emphasis mine]. In this regard, neurotechnology presents challenges not only to Canada’s federal and provincial health care systems, but to existing laws and regulations that govern responsible innovation. In December 2019, just before the pandemic, the OECD [Organisation for Economic Cooperation and Development] Council adopted a Recommendation on Responsible Innovation in Neurotechnology. It is now urging that member states develop right-fit implementation strategies.

What should these strategies look like for Canada? We will propose and discuss opportunities that balance and leverage different professional and governance approaches towards the goal of achieving responsible innovation for the current state of the art, science, engineering, and policy, and in anticipation of the rapid and vast capabilities expected for neurotechnology in the future by and for this country.

Link to the full OECD Recommendation on Responsible Innovation in Neurotechnology

Date: May 16 [2023]

Time: 12:00 pm – 1:30 pm EDT

Event Category: Virtual Session [on Zoom]

Registration Page: https://us02web.zoom.us/webinar/register/WN_-g8d1qubRhumPSCQi6WUtA

The panelists are:

Dr. Graeme Moffat
Neurotechnology entrepreneur & Senior Fellow, Munk School of Global Affairs & Public Policy [University of Toronto]

Dr. Graeme Moffat is a co-founder and scientist with System2 Neurotechnology. He previously was Chief Scientist and VP of Regulatory Affairs at Interaxon, Chief Scientist with ScienceScape (later Chan-Zuckerberg Meta), and a research engineer at Neurelec (a division of Oticon Medical). He served as Managing Editor of Frontiers in Neuroscience, the largest open access scholarly journal series in the field of neuroscience. Dr. Moffat is a Senior Fellow at the Munk School of Global Affairs and Public Policy and an advisor to the OECD’s neurotechnology policy initiative.

Professor Jennifer Chandler
Professor of Law at the Centre for Health Law, Policy and Ethics, University of Ottawa

Jennifer Chandler is Professor of Law at the Centre for Health Law, Policy and Ethics, University of Ottawa. She leads the “Neuroethics Law and Society” Research Pillar for the Brain Mind Research Institute and sits on its Scientific Advisory Council. Her research focuses on the ethical, legal and policy issues in brain sciences and the law. She teaches mental health law and neuroethics, tort law, and medico-legal issues. She is a member of the advisory board for CIHR’s Institute for Neurosciences, Mental Health and Addiction (IMNA) and serves on international editorial boards in the field of law, ethics and neuroscience, including Neuroethics, the Springer Book Series Advances in Neuroethics, and the Palgrave-MacMillan Book Series Law, Neuroscience and Human Behavior. She has published widely in legal, bioethical and health sciences journals and is the co-editor of the book Law and Mind: Mental Health Law and Policy in Canada (2016). Dr. Chandler brings a unique perspective to this panel as her research focuses on the ethical, legal and policy issues at the intersection of the brain sciences and the law. She is active in Canadian neuroscience research funding policy, and regularly contributes to Canadian governmental policy on contentious matters of biomedicine.

Ian Burkhart
Neurotech Advocate and Founder of BCI [brain-computer interface] Pioneers Coalition

Ian is a C5 tetraplegic [also known as quadriplegic] from a diving accident in 2010. He participated in a ground-breaking clinical trial using a brain-computer interface to control muscle stimulation. He is the founder of the BCI Pioneers Coalition, which works to establish ethics, guidelines and best practices for future patients, clinicians, and commercial entities engaging with BCI research. Ian serves as Vice President of the North American Spinal Cord Injury Consortium and chairs their project review committee. He has also worked with Unite2Fight Paralysis to advocate for $9 million of SCI research in his home state of Ohio. Ian has been a Reeve peer mentor since 2015 and helps lead two local SCI networking groups. As the president of the Ian Burkhart Foundation, he raises funds for accessible equipment for the independence of others with SCI. Ian is also a full-time consultant working with multiple medical device companies.

Andrew Atkinson
Manager, Emerging Science Policy, Health Canada

Andrew Atkinson is the Manager of the Emerging Sciences Policy Unit under the Strategic Policy Branch of Health Canada. He oversees coordination of science policy issues across the various regulatory and research programs under the mandate of Health Canada. Prior to Health Canada, he was a manager under Environment Canada’s CEPA new chemicals program, where he oversaw chemical and nanomaterial risk assessments, and the development of risk assessment methodologies. In parallel to domestic work, he has been actively engaged in ISO [International Organization for Standardization] and OECD nanotechnology efforts.

Andrew is currently a member of the Canadian delegation to the OECD Working Party on Biotechnology, Nanotechnology and Converging Technologies (BNCT). BNCT aims to contribute original policy analysis on emerging science and technologies, such as gene editing and neurotechnology, including messaging to the global community, convening key stakeholders in the field, and making ground-breaking proposals to policy makers.

Professor Judy Illes
Professor, Division of Neurology, Department of Medicine, Faculty of Medicine, UBC [University of British Columbia]

Dr. Illes is Professor of Neurology and Distinguished Scholar in Neuroethics at the University of British Columbia. She is the Director of Neuroethics Canada, and among her many leadership positions in Canada, she is Vice Chair of the Canadian Institutes of Health Research (CIHR) Advisory Board of the Institute on Neuroscience, Mental Health and Addiction (INMHA), and chair of the International Brain Initiative (www.internationalbraininitiative.org; www.canadianbrain.ca), Director at Large of the Canadian Academy of Health Sciences, and a member of the Board of Directors of the Council of Canadian Academies.

Dr. Illes is a world-renowned expert whose research, teaching and outreach are devoted to ethical, legal, social and policy challenges at the intersection of the brain sciences and biomedical ethics. She has made ground-breaking contributions to neuroethical thinking for neuroscience discovery and clinical translation across the life span, including in entrepreneurship and in the commercialization of health care. Dr. Illes has a unique and comprehensive overview of the field of neurotechnology and the relevant sectors in Canada.

One concern I don’t see mentioned, either in the panel description or in the OECD recommendation, is bankruptcy (in other words, what happens if the company that made your neural implant goes bankrupt?). My April 5, 2022 posting “Going blind when your neural implant company flirts with bankruptcy (long read)” explored that topic and, while many of the excerpted materials present a US perspective, it’s easy to see how it could also apply in Canada and elsewhere.

For those of us on the West Coast, this session starts at 9 am. Enjoy!

*June 20, 2023: This sentence changed (We tend not to recognize that walking for the major achievement it is.) to We tend not to recognize walking for the major achievement it is.

Nanotechnology-enabled pain relief for tooth sensitivity

A November 23, 2021 news item on phys.org announces research from Australia that may lead to pain relief for anyone with sensitive teeth,

In an Australian first, researchers from the University of Queensland have used nanotechnology to develop effective ways to manage tooth sensitivity.

Dr. Chun Xu from UQ’s [University of Queensland] School of Dentistry said the approach might provide more effective long-term pain relief for people with sensitive teeth, compared to current options.

A November 23, 2021 University of Queensland press release, which originated the news item, describes the condition leading to tooth sensitivity and how the proposed solution works (Note: Links have been removed),

“Dentin tubules are located in the dentin, one of the layers below the enamel surface of your teeth,” Dr Xu said.

“When tooth enamel has been worn down and the dentin is exposed, eating or drinking something cold or hot can cause a sudden sharp flash of pain.

“The nanomaterials used in this preclinical study can rapidly block the exposed dentin tubules and prevent the unpleasant pain.

“Our approach acts faster and lasts longer than current treatment options.

“The materials could be developed into a paste, so people who have sensitive teeth could simply apply this paste to the tooth and massage for one to three minutes.

“The next step is clinical trials.”

Tooth sensitivity affects up to 74 per cent of the population, at times severely impacting quality of life and requiring expensive treatment.

“If clinical trials are successful, people will benefit from this new method that can be used at home, without the need to go to a dentist, in the near future,” Dr Xu said.

“We hope this study encourages more research using nanotechnology to address dental problems.”

The team also included researchers from UQ’s Australian Institute for Bioengineering and Nanotechnology (AIBN).

Here’s a link to and a citation for the paper,

Calcium-Doped Silica Nanoparticles Mixed with Phosphate-Doped Silica Nanoparticles for Rapid and Stable Occlusion of Dentin Tubules by Yuxue Cao, Chun Xu, Patricia P. Wright, Jingyu Liu, Yueqi Kong, Yue Wang, Xiaodan Huang, Hao Song, Jianye Fu, Fang Gao, Yang Liu, Laurence J. Walsh, and Chang Lei. ACS Appl. Nano Mater. 2021, 4, 9, 8761–8769 DOI: https://doi.org/10.1021/acsanm.1c01365 Publication Date: August 25, 2021 Copyright © 2021 American Chemical Society

This paper is behind a paywall.