
Dance experience visible in brain activity of audience members watching dance

Caption: Iron Skulls Co dancers Adrian Vega (left) and Diego Garrido performed the dance duet Un último recuerdo for the spectators participating in the study. Photo Credit: Juanmi Ponce

An October 2, 2024 University of Helsinki press release (also on EurekAlert but published October 15, 2024) describes research exploring the differences in brain activity between audience members with extensive dance or music experience and audiences with little experience of either,

University of Helsinki researchers measured the brain activity of people watching a live dance performance in a real-world setting. They invited spectators with extensive experience of either dance or music as well as novices with no particular background in either of these areas.

The spectators’ brain activity was measured using EEG while they watched the live dance duet Un último recuerdo, a piece created by the Spanish Iron Skulls Co that combines contemporary dance and breakdance.

Experienced dancers respond more strongly than novices

The results showed that dance experience is detectable in spectators’ brain activity during a dance performance. The experienced dancers watching the performance displayed stronger synchronisation than the novices at the low theta frequency.

Experience of dance affects brain functions associated with the visualisation of movement in the mind, the simultaneous integration of several sensory stimuli (listening to music and watching dance) and social interaction.

When musicians watched the live dance performance, they had stronger synchrony in the delta band, which is even lower than theta. This may be associated with the musicians’ trained ability to observe rhythmic bodily movements.
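(For technically minded readers: here's a minimal Python sketch of what looking at 'delta' and 'theta' activity can involve in practice: band-pass filtering EEG into those frequency ranges and correlating two spectators' band-limited signals as a crude stand-in for synchrony. This is not the researchers' analysis pipeline; the sampling rate, band edges, filter settings, and toy data are my assumptions.)

```python
# Minimal sketch (not the study's pipeline): isolate delta- and theta-band
# activity from EEG with SciPy, then correlate two spectators' band-limited
# signals as a crude stand-in for "synchrony". All parameters are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250.0  # assumed EEG sampling rate, Hz


def bandpass(signal, low_hz, high_hz, fs=FS, order=4):
    """Zero-phase Butterworth band-pass filter."""
    nyq = fs / 2.0
    b, a = butter(order, [low_hz / nyq, high_hz / nyq], btype="band")
    return filtfilt(b, a, signal)


def band_synchrony(eeg_a, eeg_b, low_hz, high_hz):
    """Pearson correlation of two band-limited EEG traces."""
    return np.corrcoef(bandpass(eeg_a, low_hz, high_hz),
                       bandpass(eeg_b, low_hz, high_hz))[0, 1]


# Toy data standing in for two spectators watching the same performance:
# a shared 5 Hz (theta-range) component buried in independent noise.
rng = np.random.default_rng(0)
t = np.arange(0, 60 * FS) / FS
shared = np.sin(2 * np.pi * 5 * t)
eeg_a = shared + rng.normal(size=t.size)
eeg_b = shared + rng.normal(size=t.size)

print("theta (4-8 Hz) synchrony:", band_synchrony(eeg_a, eeg_b, 4.0, 8.0))
print("delta (1-4 Hz) synchrony:", band_synchrony(eeg_a, eeg_b, 1.0, 4.0))
```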

Watching dance in a real-world environment is unique for our brain

The effect of watching a dance performance on brain activity has previously been studied by having subjects watch a video recording on their own in a brain research laboratory.

The present study was conducted in a real-world performance environment and shows that watching a live dance performance in a full venue activates the brain more extensively than the above setting.

“As our interaction increasingly moves to online platforms and the virtual world, it’s important to know that real-world interaction is unique – for our body and brain,” says Hanna Poikonen, the lead author of the study.

The results also emphasise the effect of a background in creative movement on the spectator experience.

“If we have practised our bodily skills, we may better understand the body language of others, which makes social interaction smoother,” Poikonen notes.

Here’s a link to and a citation for the paper,

Cortical oscillations are modified by expertise in dance and music: Evidence from live dance audience by Hanna Poikonen, Mari Tervaniemi, Laurel Trainor. European Journal of Neuroscience (EJN) Volume 60, Issue 8 October 2024 Pages 6000-6014 DOI: https://doi.org/10.1111/ejn.16525 First published online: 15 September 2024

This paper is open access.

Way back in time (see my March 6, 2012 posting), I featured some research into how experienced ballet watchers (not dancers or musicians) experienced a ballet performance.

Not quite so far back in time, I mentioned Laurel Trainor (third author listed on the paper) in a November 29, 2019 posting that featured (amongst other items) the Large Interactive Virtual Environment Laboratory (LIVELab) located in McMaster University’s (Ontario, Canada) Institute for Music & the Mind (MIMM).

Recording brain activity with flexible tentacle electrodes

A September 4, 2024 news item on ScienceDaily announced some research in Switzerland that improves on the electrodes used in brain implants, e.g., those from Elon Musk’s company, Neuralink,

Neurostimulators, also known as brain pacemakers, send electrical impulses to specific areas of the brain via special electrodes. It is estimated that some 200,000 people worldwide are now benefiting from this technology, including those who suffer from Parkinson’s disease or from pathological muscle spasms. According to Mehmet Fatih Yanik, Professor of Neurotechnology at ETH Zurich, further research will greatly expand the potential applications: instead of using them exclusively to stimulate the brain, the electrodes can also be used to precisely record brain activity and analyse it for anomalies associated with neurological or psychiatric disorders. In a second step, it would be conceivable in future to treat these anomalies and disorders using electrical impulses.

A September 4, 2024 ETH Zurich press release (also on EurekAlert), which originated the news item, provides more technical detail about the work,

To this end, Yanik and his team have now developed a new type of electrode that enables more detailed and more precise recordings of brain activity over an extended period of time. These electrodes are made of bundles of extremely fine and flexible fibres of electrically conductive gold encapsulated in a polymer. Thanks to a process developed by the ETH Zurich researchers, these bundles can be inserted into the brain very slowly, which is why they do not cause any detectable damage to brain tissue.

This sets the new electrodes apart from rival technologies. Of these, perhaps the best known in the public sphere is the one from Neuralink, an Elon Musk company [emphasis mine]. In all such systems, including Neuralink’s, the electrodes are considerably wider. “The wider the probe, even if it is flexible, the greater the risk of damage to brain tissue,” Yanik explains. “Our electrodes are so fine that they can be threaded past the long processes that extend from the nerve cells in the brain. They are only around as thick as the nerve-cell processes themselves.”

The research team tested the new electrodes on the brains of rats using four bundles, each made up of 64 fibres. In principle, as Yanik explains, up to several hundred electrode fibres could be used to investigate the activity of an even greater number of brain cells. In the study, the electrodes were connected to a small recording device attached to the head of each rat, thereby enabling them to move freely.

No influence on brain activity

In the experiments, the research team was able to confirm that the probes are biocompatible and that they do not influence brain function. Because the electrodes are very close to the nerve cells, the signal quality is very good compared to other methods.

At the same time, the probes are suitable for long-term monitoring activities, with researchers recording signals from the same cells in the brains of animals for the entire duration of a ten-month experiment. Examinations showed that no brain-tissue damage occurred during this time. A further advantage is that the bundles can branch out in different directions, meaning that they can reach multiple brain areas.

Human testing to begin soon

In the study, the researcher used the new electrodes to track and analyse nerve-cell activity in various areas of the brains of rats over a period of several months. They were able to determine that nerve cells in different regions were “co-activated”. Scientists believe that this large-scale, synchronous interaction of brain cells plays a key role in the processing of complex information and memory formation. “The technology is of high interest for basic research that investigates these functions and their impairments in neurological and psychiatric disorders,” Yanik explains.

The group has teamed up with fellow researchers at University College London in order to test diagnostic use of the new electrodes in the human brain. Specifically, the project involves epilepsy sufferers who do not respond to drug therapy. In such cases, neurosurgeons may remove a small part of the brain where the seizures originate. The idea is to use the group’s method to precisely localise the affected area of the brain prior to tissue removal.

Brain-machine interfaces

There are also plans to use the new electrodes to stimulate brain cells in humans. “This could aid the development of more effective therapies for people with neurological and psychiatric disorders,” says Yanik. In disorders such as depression, schizophrenia or OCD, there are often impairments in specific regions of the brain, which lead to problems in evaluating information and making decisions. Using the new electrodes, it might be possible to detect the pathological signals generated by the neural networks in the brain in advance, and then stimulate the brain in a way that would alleviate such disorders. Yanik also thinks that this technology may give rise to brain-machine interfaces for people with brain injuries. In such cases, the electrodes might be used to read their intentions and thereby, for example, to control prosthetics or a voice-output system.

A bundle of extremely fine electrode fibres in the brain (microscope image). (Image: Yasar TB et al. Nature Communications 2024, modified) Courtesy: ETH Zurich

Here’s a link to and a citation for the paper,

Months-long tracking of neuronal ensembles spanning multiple brain areas with Ultra-Flexible Tentacle Electrodes by Tansel Baran Yasar, Peter Gombkoto, Alexei L. Vyssotski, Angeliki D. Vavladeli, Christopher M. Lewis, Bifeng Wu, Linus Meienberg, Valter Lundegardh, Fritjof Helmchen, Wolfger von der Behrens & Mehmet Fatih Yanik. Nature Communications volume 15, Article number: 4822 (2024) DOI https://doi.org/10.1038/s41467-024-49226-9 Published online: 06 June 2024

This paper is open access.

Brain-machine interface on a chip

Caption: An entire brain-machine interface on a chip: Converting brain activity to text on one extremely small integrated system. Credit: © 2024 EPFL / Lundi13 – CC-BY-SA 4.0

News about an entire brain-machine interface (BMI) on a chip comes from an August 26, 2024 École Polytechnique Fédérale de Lausanne (EPFL) press release (also on EurekAlert), Note: Links have been removed,

Brain-machine interfaces (BMIs) have emerged as a promising solution for restoring communication and control to individuals with severe motor impairments. Traditionally, these systems have been bulky, power-intensive, and limited in their practical applications. Researchers at EPFL have developed the first high-performance, Miniaturized Brain-Machine Interface (MiBMI), offering an extremely small, low-power, highly accurate, and versatile solution. Published in the latest issue of the IEEE Journal of Solid-State Circuits and presented at the International Solid-State Circuits Conference, the MiBMI not only enhances the efficiency and scalability of brain-machine interfaces but also paves the way for practical, fully implantable devices. This technology holds the potential to significantly improve the quality of life for patients with conditions such as amyotrophic lateral sclerosis (ALS) and spinal cord injuries.

The MiBMI’s small size and low power are key features, making the system suitable for implantable applications. Its minimal invasiveness ensures safety and practicality for use in clinical and real-life settings. It is also a fully integrated system, meaning that the recording and processing are done on two extremely small chips with a total area of 8 mm². This is the latest in a new class of low-power BMI devices developed at Mahsa Shoaran’s Integrated Neurotechnologies Laboratory (INL) at EPFL’s IEM and Neuro X institutes.

“MiBMI allows us to convert intricate neural activity into readable text with high accuracy and low power consumption. This advancement brings us closer to practical, implantable solutions that can significantly enhance communication abilities for individuals with severe motor impairments,” says Shoaran.

Brain-to-text conversion involves decoding neural signals generated when a person imagines writing letters or words. In this process, electrodes implanted in the brain record neural activity associated with the motor actions of handwriting. The MiBMI chipset then processes these signals in real-time, translating the brain’s intended hand movements into corresponding digital text. This technology allows individuals, especially those with locked-in syndrome and other severe motor impairments, to communicate by simply thinking about writing, with the interface converting their thoughts into readable text on a screen.

“While the chip has not yet been integrated into a working BMI, it has processed data from previous live recordings, such as those from the Shenoy lab at Stanford [Stanford University in California, US], converting handwriting activity into text with an impressive 91% accuracy,” says lead author Mohammed Ali Shaeri. The chip can currently decode up to 31 different characters, an achievement unmatched by any other integrated systems. “We are confident that we can decode up to 100 characters, but a handwriting dataset with more characters is not yet available,” adds Shaeri.

Current BMIs record the data from electrodes implanted in the brain and then send these signals to a separate computer to do the decoding. The MiBMI chips record the data but also process the information in real time—integrating a 192-channel neural recording system with a 512-channel neural decoder. This neurotechnological breakthrough is a feat of extreme miniaturization that combines expertise in integrated circuits, neural engineering, and artificial intelligence. This innovation is particularly exciting in the emerging era of neurotech startups in the BMI domain, where integration and miniaturization are key focuses. EPFL’s MiBMI offers promising insights and potential for the future of the field.

To be able to process the massive amount of information picked up by the electrodes on the miniaturized BMI, the researchers had to take a completely different approach to data analysis. They discovered that the brain activity for each letter, when the patient imagines writing it by hand, contains very specific markers, which the researchers have named distinctive neural codes (DNCs). Instead of processing thousands of bytes of data for each letter, the microchip only needs to process the DNCs, which are around a hundred bytes. This makes the system fast and accurate, with low power consumption. This breakthrough also allows for faster training times, making learning how to use the BMI easier and more accessible.
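(An aside for readers who like code: here's a rough, hypothetical sketch of the general idea of classifying letters from short, compact codes rather than from the full high-dimensional recording. It is emphatically not the published MiBMI algorithm; the window size, the random projection, the code length, and the toy data are all assumptions made for illustration.)

```python
# Hypothetical sketch of classifying letters from compact neural codes instead
# of full recordings; not the MiBMI algorithm. Dimensions and data are assumed.
import numpy as np

rng = np.random.default_rng(1)

N_CHANNELS, N_SAMPLES = 192, 300          # one raw window: ~57,600 values
CODE_DIM = 100                            # target length of a compact code
LETTERS = list("abcdefghijklmnopqrstuvwxyz")

# Fixed projection that compresses a flattened window into a short code.
projection = rng.normal(size=(CODE_DIM, N_CHANNELS * N_SAMPLES))


def to_code(window):
    """Compress one neural window (channels x samples) into a short code."""
    return projection @ window.ravel()


# Stand-ins for real per-letter neural patterns and their stored code templates.
true_patterns = {ch: rng.normal(size=(N_CHANNELS, N_SAMPLES)) for ch in LETTERS}
templates = {ch: to_code(p) for ch, p in true_patterns.items()}


def decode(window):
    """Return the letter whose stored code best matches this window's code."""
    code = to_code(window)

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    return max(LETTERS, key=lambda ch: cosine(code, templates[ch]))


# Decode a noisy observation of the pattern for 'g'.
noisy = true_patterns["g"] + 0.5 * rng.normal(size=(N_CHANNELS, N_SAMPLES))
print(decode(noisy))  # should print 'g'
```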

Collaborations with other teams at EPFL’s Neuro-X and IEM Institutes, such as with the laboratories of Grégoire Courtine, Silvestro Micera, Stéphanie Lacour, and David Atienza promise to create the next generation of integrated BMI systems. Shoaran, Shaeri and their team are exploring various applications for the MiBMI system beyond handwriting recognition. “We are collaborating with other research groups to test the system in different contexts, such as speech decoding and movement control. Our goal is to develop a versatile BMI that can be tailored to various neurological disorders, providing a broader range of solutions for patients,” says Shoaran.

Here’s a link to and a citation for the paper,

A 2.46-mm² Miniaturized Brain-Machine Interface (MiBMI) Enabling 31-Class Brain-to-Text Decoding by MohammadAli Shaeri, Uisub Shin, Amitabh Yadav, Riccardo Caramellino, Gregor Rainer, Mahsa Shoaran. IEEE Journal of Solid-State Circuits Volume: 59 Issue: 11 pp. 3566-3579, Nov. 2024, DOI: 10.1109/JSSC.2024.3443254

This paper is behind a paywall.

Fungus-controlled robots

Where robots are concerned, mushrooms and other fungi aren’t usually considered part of the equipment, but it turns out they can be, according to a September 4, 2024 news item on ScienceDaily,

Building a robot takes time, technical skill, the right materials — and sometimes, a little fungus.

In creating a pair of new robots, Cornell University researchers cultivated an unlikely component, one found on the forest floor: fungal mycelia.

By harnessing mycelia’s innate electrical signals, the researchers discovered a new way of controlling “biohybrid” robots that can potentially react to their environment better than their purely synthetic counterparts.

An August 28, 2024 Cornell University news release (also on EurekAlert but published August 29, 2024) by David Nutt, which originated the news item, describes this (I’m tempted to call it revolutionary) new technique, Note: Links have been removed.

“This paper is the first of many that will use the fungal kingdom to provide environmental sensing and command signals to robots to improve their levels of autonomy,” Shepherd [Rob Shepherd, professor of mechanical and aerospace engineering at Cornell University] said. “By growing mycelium into the electronics of a robot, we were able to allow the biohybrid machine to sense and respond to the environment. In this case we used light as the input, but in the future it will be chemical. The potential for future robots could be to sense soil chemistry in row crops and decide when to add more fertilizer, for example, perhaps mitigating downstream effects of agriculture like harmful algal blooms.”

In designing the robots of tomorrow, engineers have taken many of their cues from the animal kingdom, with machines that mimic the way living creatures move, sense their environment and even regulate their internal temperature through perspiration. Some robots have incorporated living material, such as cells from muscle tissue, but those complex biological systems are difficult to keep healthy and functional. It’s not always easy, after all, to keep a robot alive.

Mycelia are the underground vegetative part of mushrooms, and they have a number of advantages. They can grow in harsh conditions. They also have the ability to sense chemical and biological signals and respond to multiple inputs.

“If you think about a synthetic system – let’s say, any passive sensor – we just use it for one purpose. But living systems respond to touch, they respond to light, they respond to heat, they respond to even some unknowns, like signals,” Mishra [Anand Mishra, a research associate in the Organic Robotics Lab at Cornell University] said. “That’s why we think, OK, if you wanted to build future robots, how can they work in an unexpected environment? We can leverage these living systems, and any unknown input comes in, the robot will respond to that.”

However, finding a way to integrate mushrooms and robots requires more than just tech savvy and a green thumb.

“You have to have a background in mechanical engineering, electronics, some mycology, some neurobiology, some kind of signal processing,” Mishra said. “All these fields come together to build this kind of system.”

Mishra collaborated with a range of interdisciplinary researchers. He consulted with Bruce Johnson, senior research associate in neurobiology and behavior, and learned how to record the electrical signals that are carried in the neuron-like ionic channels in the mycelia membrane. Kathie Hodge, associate professor of plant pathology and plant-microbe biology in the School of Integrative Plant Science in the College of Agriculture and Life Sciences, taught Mishra how to grow clean mycelia cultures, because contamination turns out to be quite a challenge when you are sticking electrodes in fungus.

The system Mishra developed consists of an electrical interface that blocks out vibration and electromagnetic interference and accurately records and processes the mycelia’s electrophysiological activity in real time, and a controller inspired by central pattern generators – a kind of neural circuit. Essentially, the system reads the raw electrical signal, processes it and identifies the mycelia’s rhythmic spikes, then converts that information into a digital control signal, which is sent to the robot’s actuators.
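(For the curious, here's a hedged Python sketch of that general signal chain: detect spikes in a raw electrophysiological trace by thresholding, estimate the spike rate, and map it to a simple actuator command. It is not the Cornell implementation; the sampling rate, thresholding rule, command mapping, and toy data are assumptions.)

```python
# Hedged sketch of a spike-to-command chain; not the Cornell system.
# Sampling rate, threshold rule, and the command mapping are assumptions.
import numpy as np

FS = 1000.0  # assumed sampling rate, Hz


def detect_spikes(signal, k=4.0):
    """Indices where the signal exceeds k robust standard deviations."""
    med = np.median(signal)
    mad = np.median(np.abs(signal - med))
    threshold = med + k * 1.4826 * mad
    above = signal > threshold
    # Keep only rising crossings so each spike is counted once.
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1


def spikes_to_command(signal):
    """Map spike rate (Hz) to a gait command for the robot's actuators."""
    rate = len(detect_spikes(signal)) / (len(signal) / FS)
    if rate < 1.0:
        return "idle"
    if rate < 5.0:
        return "walk_slow"
    return "walk_fast"


# Toy trace: baseline noise plus injected spike-like events.
rng = np.random.default_rng(2)
trace = rng.normal(0.0, 1.0, int(10 * FS))
for idx in rng.integers(0, trace.size, 40):  # ~4 events per second
    trace[idx] += 12.0

print(spikes_to_command(trace))  # expect "walk_slow"
```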

Two biohybrid robots were built: a soft robot shaped like a spider and a wheeled bot.

The robots completed three experiments. In the first, the robots walked and rolled, respectively, as a response to the natural continuous spikes in the mycelia’s signal. Then the researchers stimulated the robots with ultraviolet light, which caused them to change their gaits, demonstrating mycelia’s ability to react to their environment. In the third scenario, the researchers were able to override the mycelia’s native signal entirely.

The implications go far beyond the fields of robotics and fungi.

“This kind of project is not just about controlling a robot,” Mishra said. “It is also about creating a true connection with the living system. Because once you hear the signal, you also understand what’s going on. Maybe that signal is coming from some kind of stresses. So you’re seeing the physical response, because those signals we can’t visualize, but the robot is making a visualization.”

Co-authors include Johnson, Hodge, Jaeseok Kim with the University of Florence, Italy, and undergraduate research assistant Hannah Baghdadi.

The research was supported by the National Science Foundation (NSF) CROPPS Science and Technology Center; the U.S. Department of Agriculture’s National Institute of Food and Agriculture; and the NSF Signal in Soil program.

Here’s a link to and a citation for the paper,

Sensorimotor control of robots mediated by electrophysiological measurements of fungal mycelia by Anand Kumar Mishra, Jaeseok Kim, Hannah Baghdadi, Bruce R. Johnson, Kathie T. Hodge, and Robert F. Shepherd. Science Robotics 28 Aug 2024 Vol 9, Issue 93 DOI: 10.1126/scirobotics.adk8019

This paper is behind a paywall.

Digi, Nano, Bio, Neuro – why should we care more about converging technologies?

Personality in focus: the convergence of biology and computer technology could make extremely sensitive data available. (Image: by-studio / AdobeStock) [downloaded from https://ethz.ch/en/news-and-events/eth-news/news/2024/05/digi-nano-bio-neuro-or-why-we-should-care-more-about-converging-technologies.html]

I gave a guest lecture some years ago where I mentioned that I thought the real issue with big data and AI (artificial intelligence) lay in combining them (or convergence). These days, it seems I was insufficiently imaginative as researchers from ETH Zurich have taken the notion much further.

From a May 7, 2024 ETH Zurich press release (also on EurekAlert),

In my research, I [Dirk Helbing, Professor of Computational Social Science at the Department of Humanities, Social and Political Sciences and associated with the Department of Computer Science at ETH Zurich.] deal with the consequences of digitalisation for people, society and democracy. In this context, it is also important to keep an eye on their convergence in computer and life sciences – i.e. what becomes possible when digital technologies grow increasingly together with biotechnology, neurotechnology and nanotechnology.

Converging technologies are seen as a breeding ground for far-reaching innovations. However, they are blurring the boundaries between the physical, biological and digital worlds. Conventional regulations are becoming ineffective as a result.

In a joint study I conducted with my co-author Marcello Ienca, we have recently examined the risks and societal challenges of technological convergence – and concluded that the effects for individuals and society are far-reaching.

We would like to draw attention to the challenges and risks of converging technologies and explain why we consider it necessary to accompany technological developments internationally with strict regulations.

For several years now, everyone has been able to observe, within the context of digitalisation, the consequences of leaving technological change to market forces alone without effective regulation.

Misinformation and manipulation on the web

The Digital Manifesto was published in 2015 – almost ten years ago.1 Nine European experts, including one from ETH Zurich, issued an urgent warning against scoring, i.e. the evaluation of people, and big nudging,2 a subtle form of digital manipulation. The latter is based on personality profiles created using cookies and other surveillance data. A little later, the Cambridge Analytica scandal alerted the world to how the data analysis company had been using personalised ads (microtargeting) in an attempt to manipulate voting behaviour in democratic elections.

This has brought democracies around the world under considerable pressure. Propaganda, fake news and hate speech are polarising and sowing doubt, while privacy is on the decline. We are in the midst of an international information war for control of our minds, in which advertising companies, tech corporations, secret services and the military are fighting to exert an influence on our mindset and behaviour. The European Union has adopted the AI Act in an attempt to curb these dangers.

However, digital technologies have developed at a breathtaking pace, and new possibilities for manipulation are already emerging. The merging of digital and nanotechnology with modern biotechnology and neurotechnology makes revolutionary applications possible that had been hardly imaginable before.

Microrobots for precision medicine

In personalised medicine, for example, the advancing miniaturisation of electronics is making it increasingly possible to connect living organisms and humans with networked sensors and computing power. The WEF [World Economic Forum] proclaimed the “Internet of Bodies” as early as 2020.3, 4

One example that combines conventional medication with a monitoring function is digital pills. These could control medication and record a patient’s physiological data (see this blog post).

Experts expect sensor technology to reach the nanoscale. Magnetic nanoparticles or nanoelectronic components, i.e. tiny particles invisible to the naked eye with a diameter up to 100 nanometres, would make it possible to transport active substances, interact with cells and record vast amounts of data on bodily functions. If introduced into the body, it is hoped that diseases could be detected at an early stage and treated in a personalised manner. This is often referred to as high-precision medicine.

Nano-electrodes record brain function

Miniaturised electrodes that can simultaneously measure and manipulate the activity of thousands of neurons, coupled with ever-improving AI tools for the analysis of brain signals, are approaches that are now leading to much-discussed advances in the brain-computer interface. Brain activity mapping is also on the agenda. Thanks to nano-neurotechnology, we could soon envisage smartphones and other AI applications being controlled directly by thoughts.

“Long before precision medicine and neurotechnology work reliably, these technologies will be able to be used against people.” Dirk Helbing

Large-scale projects to map the human brain are also likely to benefit from this.5 In future, brain activity mapping will not only be able to read our thoughts and feelings but also make it possible to influence them remotely – the latter would probably be a lot more effective than previous manipulation methods like big nudging.

However, conventional electrodes are not suitable for permanent connection between cells and electronics – this requires durable and biocompatible interfaces. This has given rise to the suggestion of transmitting signals optogenetically, i.e. to control genes in special cells with light pulses.6 This would make the implementation of amazing circuits possible (see this ETH News article [November 11, 2014 press release] “Controlling genes with thoughts”).

The downside of convergence

Admittedly, the applications mentioned above may sound futuristic, with most of them still visions or in their early stages of development. However, a lot of research is being conducted worldwide and at full speed. The military is also interested in using converging technologies for its own purposes.7, 8

The downside of convergence is the considerable risks involved, such as state or private players gaining access to highly sensitive data and misusing it to monitor and influence people. The more connected our bodies become, the more vulnerable we will be to cybercrime and hacking. It cannot be ruled out that military applications exist already.5 One thing is clear, however: long before precision medicine and neurotechnology work reliably, these technologies will be able to be used against people.

“We need to regain control of our personal data. To do this, we need genuine informational self-determination.” Dirk Helbing

The problem is that existing regulations are specific and insufficient to keep technological convergence in check. But how are we to retain control over our lives if it becomes increasingly possible to influence our thoughts, feelings and decisions by digital means?

Converging global regulation is needed

In our recent paper we conclude that any regulation of converging technologies would have to be based on converging international regulations. Accordingly, we outline a new global regulatory framework and propose ten governance principles to close the looming regulatory gap.9

The framework emphasises the need for safeguards to protect bodily and mental functions from unauthorised interference and to ensure personal integrity and privacy by, for example, establishing neurorights.

To minimise risks and prevent abuse, future regulations should be inclusive, transparent and trustworthy. The principle of participatory governance is key, which would have to involve all the relevant groups and ensure that the concerns of affected minorities are also taken into account in decision-making processes.

Finally, we need to regain control of our personal data. To accomplish this, we need genuine informational self-determination. This would also have to apply to the digital twins of our body and personality, because they can be used to hack our health and our way of thinking – for good or for bad.10

With our contribution, we would like to initiate public debate about converging technologies. Despite its major relevance, we believe that too little attention is being paid to this topic. Continuous discourse on benefits, risks and sensible rules can help to steer technological convergence in such a way that it serves people instead of harming them.

Dirk Helbing wrote this article together with Marcello Ienca, who previously worked at ETH Zurich and EPFL and is now Assistant Professor of Ethics of AI and Neuroscience at the Technical University of Munich.

References

1 Digital-Manifest: Digitale Demokratie statt Datendiktatur (2015) Spektrum der Wissenschaft

2 Sie sind das Ziel! (2024) Schweizer Monat

3 The Internet of Bodies Is Here: Tackling new challenges of technology governance (2020) World Economic Forum

4 Tracking how our bodies work could change our lives (2020) World Economic Forum

5 Nanotools for Neuroscience and Brain Activity Mapping (2013) ACS Nano

6 Innovationspotenziale der Mensch-Maschine-Interaktion (2016) Deutsche Akademie der Technikwissenschaften

7 Human Augmentation – The Dawn of a New Paradigm. A strategic implications project (2021) UK Ministry of Defence

8 Behavioural change as the core of warfighting (2017) Militaire Spectator

9 Helbing D, Ienca M: Why converging technologies need converging international regulation (2024) Ethics and Information Technology

10 Who is Messing with Your Digital Twin? Body, Mind, and Soul for Sale? Dirk Helbing TEDx Talk (2023)

Here’s a second link to and citation for the paper,

Why converging technologies need converging international regulation by Dirk Helbing & Marcello Ienca. Ethics and Information Technology Volume 26, article number 15, (2024) DOI: 10.1007/s10676-024-09756-8 Published: 28 February 2024

This paper is open access.

Portable and non-invasive (?) mind-reading AI (artificial intelligence) turns thoughts into text and some thoughts about the near future

First, here’s some of the latest research. If by ‘non-invasive’ you mean that electrodes are not being implanted in your brain, then this December 12, 2023 University of Technology Sydney (UTS) press release (also on EurekAlert) highlights non-invasive mind-reading AI via a brain-computer interface (BCI), Note: Links have been removed,

In a world-first, researchers from the GrapheneX-UTS Human-centric Artificial Intelligence Centre at the University of Technology Sydney (UTS) have developed a portable, non-invasive system that can decode silent thoughts and turn them into text. 

The technology could aid communication for people who are unable to speak due to illness or injury, including stroke or paralysis. It could also enable seamless communication between humans and machines, such as the operation of a bionic arm or robot.

The study has been selected as the spotlight paper at the NeurIPS conference, a top-tier annual meeting that showcases world-leading research on artificial intelligence and machine learning, held in New Orleans on 12 December 2023.

The research was led by Distinguished Professor CT Lin, Director of the GrapheneX-UTS HAI Centre, together with first author Yiqun Duan and fellow PhD candidate Jinzhou Zhou from the UTS Faculty of Engineering and IT.

In the study, participants silently read passages of text while wearing a cap that recorded electrical brain activity through their scalp using an electroencephalogram (EEG). A demonstration of the technology can be seen in this video [See UTS press release].

The EEG wave is segmented into distinct units that capture specific characteristics and patterns from the human brain. This is done by an AI model called DeWave developed by the researchers. DeWave translates EEG signals into words and sentences by learning from large quantities of EEG data. 

“This research represents a pioneering effort in translating raw EEG waves directly into language, marking a significant breakthrough in the field,” said Distinguished Professor Lin.

“It is the first to incorporate discrete encoding techniques in the brain-to-text translation process, introducing an innovative approach to neural decoding. The integration with large language models is also opening new frontiers in neuroscience and AI,” he said.
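(For readers wondering what ‘discrete encoding’ might look like in practice, here's a generic vector-quantization sketch, not UTS's actual DeWave model: each EEG-segment embedding is mapped to the index of its nearest entry in a codebook, producing discrete tokens that a downstream language model could turn into words. The codebook size, embedding dimension, and toy data are assumptions.)

```python
# Generic vector-quantization sketch of "discrete encoding" for EEG segments;
# not the DeWave model. Codebook size, embedding dimension, and data are assumed.
import numpy as np

rng = np.random.default_rng(3)

CODEBOOK_SIZE, EMBED_DIM = 512, 64
codebook = rng.normal(size=(CODEBOOK_SIZE, EMBED_DIM))  # stand-in for a learned codebook


def quantize(segment_embeddings):
    """Map each EEG-segment embedding to the index of its nearest codeword."""
    # Pairwise distances: shape (n_segments, CODEBOOK_SIZE)
    dists = np.linalg.norm(segment_embeddings[:, None, :] - codebook[None, :, :], axis=-1)
    return dists.argmin(axis=1)


# Ten EEG segments, already embedded by some upstream encoder (assumed here).
segments = rng.normal(size=(10, EMBED_DIM))
tokens = quantize(segments)
print(tokens)  # a discrete token sequence a language model could decode into text
```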

Previous technology to translate brain signals to language has either required surgery to implant electrodes in the brain, such as Elon Musk’s Neuralink [emphasis mine], or scanning in an MRI machine, which is large, expensive, and difficult to use in daily life.

These methods also struggle to transform brain signals into word level segments without additional aids such as eye-tracking, which restrict the practical application of these systems. The new technology is able to be used either with or without eye-tracking.

The UTS research was carried out with 29 participants. This means it is likely to be more robust and adaptable than previous decoding technology that has only been tested on one or two individuals, because EEG waves differ between individuals. 

The use of EEG signals received through a cap, rather than from electrodes implanted in the brain, means that the signal is noisier. In terms of EEG translation, however, the study reported state-of-the-art performance, surpassing previous benchmarks.

“The model is more adept at matching verbs than nouns. However, when it comes to nouns, we saw a tendency towards synonymous pairs rather than precise translations, such as ‘the man’ instead of ‘the author’,” said Duan. [emphases mine; synonymous, eh? what about ‘woman’ or ‘child’ instead of the ‘man’?]

“We think this is because when the brain processes these words, semantically similar words might produce similar brain wave patterns. Despite the challenges, our model yields meaningful results, aligning keywords and forming similar sentence structures,” he said.

The translation accuracy score is currently around 40% on BLEU-1. The BLEU score is a number between zero and one that measures the similarity of the machine-translated text to a set of high-quality reference translations. The researchers hope to see this improve to a level that is comparable to traditional language translation or speech recognition programs, which is closer to 90%.
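(For anyone curious how a BLEU-1 figure is computed, here's a minimal sketch of clipped unigram precision with a brevity penalty. It's a simplified illustration rather than the exact scorer the UTS team used, and the example sentences are made up.)

```python
# Minimal BLEU-1 sketch: clipped unigram precision with a brevity penalty.
# Simplified for illustration; the example sentences are invented.
from collections import Counter
import math


def bleu1(candidate, reference):
    cand, ref = candidate.split(), reference.split()
    cand_counts, ref_counts = Counter(cand), Counter(ref)
    # A candidate word only counts as often as it appears in the reference.
    matches = sum(min(count, ref_counts[word]) for word, count in cand_counts.items())
    precision = matches / len(cand)
    # Brevity penalty discourages very short candidates.
    brevity = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return brevity * precision


print(bleu1("the man opened the door", "the author opened the door"))  # 0.8
```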

The research follows on from previous brain-computer interface technology developed by UTS in association with the Australian Defence Force [ADF] that uses brainwaves to command a quadruped robot, which is demonstrated in this ADF video [See my June 13, 2023 posting, “Mind-controlled robots based on graphene: an Australian research story” for the story and embedded video].

About one month after the research announcement regarding the University of Technology Sydney’s ‘non-invasive’ brain-computer interface (BCI), I stumbled across an in-depth piece about the field of ‘non-invasive’ mind-reading research.

Neurotechnology and neurorights

Fletcher Reveley’s January 18, 2024 article on salon.com (originally published January 3, 2024 on Undark) shows how quickly the field is developing and raises concerns, Note: Links have been removed,

One afternoon in May 2020, Jerry Tang, a Ph.D. student in computer science at the University of Texas at Austin, sat staring at a cryptic string of words scrawled across his computer screen:

“I am not finished yet to start my career at twenty without having gotten my license I never have to pull out and run back to my parents to take me home.”

The sentence was jumbled and agrammatical. But to Tang, it represented a remarkable feat: A computer pulling a thought, however disjointed, from a person’s mind.

For weeks, ever since the pandemic had shuttered his university and forced his lab work online, Tang had been at home tweaking a semantic decoder — a brain-computer interface, or BCI, that generates text from brain scans. Prior to the university’s closure, study participants had been providing data to train the decoder for months, listening to hours of storytelling podcasts while a functional magnetic resonance imaging (fMRI) machine logged their brain responses. Then, the participants had listened to a new story — one that had not been used to train the algorithm — and those fMRI scans were fed into the decoder, which used GPT1, a predecessor to the ubiquitous AI chatbot ChatGPT, to spit out a text prediction of what it thought the participant had heard. For this snippet, Tang compared it to the original story:

“Although I’m twenty-three years old I don’t have my driver’s license yet and I just jumped out right when I needed to and she says well why don’t you come back to my house and I’ll give you a ride.”

The decoder was not only capturing the gist of the original, but also producing exact matches of specific words — twenty, license. When Tang shared the results with his adviser, a UT Austin neuroscientist named Alexander Huth who had been working towards building such a decoder for nearly a decade, Huth was floored. “Holy shit,” Huth recalled saying. “This is actually working.” By the fall of 2021, the scientists were testing the device with no external stimuli at all — participants simply imagined a story and the decoder spat out a recognizable, albeit somewhat hazy, description of it. “What both of those experiments kind of point to,” said Huth, “is the fact that what we’re able to read out here was really like the thoughts, like the idea.”
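(For the technically inclined, here's a heavily simplified, hypothetical sketch of the general strategy behind decoders like this: a language model proposes candidate words, an ‘encoding model’ predicts the brain response each candidate would evoke, and the candidate whose prediction best matches the measured scan is kept. Every component below is a stand-in, not the Tang and Huth code.)

```python
# Heavily simplified, hypothetical sketch of language-model-guided decoding;
# not the Tang/Huth implementation. All components below are stand-ins.
import numpy as np

rng = np.random.default_rng(5)
VOXELS = 200  # assumed number of voxels in the measured response


def language_model_propose(prefix):
    """Stand-in for GPT-style next-word proposals."""
    return ["license", "dog", "twenty"]


def encoding_model(text):
    """Stand-in for a model predicting the fMRI response evoked by hearing `text`."""
    local = np.random.default_rng(abs(hash(text)) % (2**32))
    return local.normal(size=VOXELS)


def decode_step(prefix, measured_response):
    """Pick the candidate whose predicted response best matches the scan."""
    scores = {
        word: np.corrcoef(encoding_model(prefix + " " + word), measured_response)[0, 1]
        for word in language_model_propose(prefix)
    }
    return max(scores, key=scores.get)


# Simulate a scan evoked by "... license" and decode the word back.
measured = encoding_model("I don't have my license") + 0.3 * rng.normal(size=VOXELS)
print(decode_step("I don't have my", measured))  # expect "license"
```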

The scientists brimmed with excitement over the potentially life-altering medical applications of such a device — restoring communication to people with locked-in syndrome, for instance, whose near full-body paralysis made talking impossible. But just as the potential benefits of the decoder snapped into focus, so too did the thorny ethical questions posed by its use. Huth himself had been one of the three primary test subjects in the experiments, and the privacy implications of the device now seemed visceral: “Oh my god,” he recalled thinking. “We can look inside my brain.”

Huth’s reaction mirrored a longstanding concern in neuroscience and beyond: that machines might someday read people’s minds. And as BCI technology advances at a dizzying clip, that possibility and others like it — that computers of the future could alter human identities, for example, or hinder free will — have begun to seem less remote. “The loss of mental privacy, this is a fight we have to fight today,” said Rafael Yuste, a Columbia University neuroscientist. “That could be irreversible. If we lose our mental privacy, what else is there to lose? That’s it, we lose the essence of who we are.”

Spurred by these concerns, Yuste and several colleagues have launched an international movement advocating for “neurorights” — a set of five principles Yuste argues should be enshrined in law as a bulwark against potential misuse and abuse of neurotechnology. But he may be running out of time.

Reveley’s January 18, 2024 article provides fascinating context and is well worth reading if you have the time.

For my purposes, I’m focusing on ethics, Note: Links have been removed,

… as these and other advances propelled the field forward, and as his own research revealed the discomfiting vulnerability of the brain to external manipulation, Yuste found himself increasingly concerned by the scarce attention being paid to the ethics of these technologies. Even Obama’s multi-billion-dollar BRAIN Initiative, a government program designed to advance brain research, which Yuste had helped launch in 2013 and supported heartily, seemed to mostly ignore the ethical and societal consequences of the research it funded. “There was zero effort on the ethical side,” Yuste recalled.

Yuste was appointed to the rotating advisory group of the BRAIN Initiative in 2015, where he began to voice his concerns. That fall, he joined an informal working group to consider the issue. “We started to meet, and it became very evident to me that the situation was a complete disaster,” Yuste said. “There was no guidelines, no work done.” Yuste said he tried to get the group to generate a set of ethical guidelines for novel BCI technologies, but the effort soon became bogged down in bureaucracy. Frustrated, he stepped down from the committee and, together with a University of Washington bioethicist named Sara Goering, decided to independently pursue the issue. “Our aim here is not to contribute to or feed fear for doomsday scenarios,” the pair wrote in a 2016 article in Cell, “but to ensure that we are reflective and intentional as we prepare ourselves for the neurotechnological future.”

In the fall of 2017, Yuste and Goering called a meeting at the Morningside Campus of Columbia, inviting nearly 30 experts from all over the world in such fields as neurotechnology, artificial intelligence, medical ethics, and the law. By then, several other countries had launched their own versions of the BRAIN Initiative, and representatives from Australia, Canada [emphasis mine], China, Europe, Israel, South Korea, and Japan joined the Morningside gathering, along with veteran neuroethicists and prominent researchers. “We holed ourselves up for three days to study the ethical and societal consequences of neurotechnology,” Yuste said. “And we came to the conclusion that this is a human rights issue. These methods are going to be so powerful, that enable to access and manipulate mental activity, and they have to be regulated from the angle of human rights. That’s when we coined the term ‘neurorights.’”

The Morningside group, as it became known, identified four principal ethical priorities, which were later expanded by Yuste into five clearly defined neurorights: The right to mental privacy, which would ensure that brain data would be kept private and its use, sale, and commercial transfer would be strictly regulated; the right to personal identity, which would set boundaries on technologies that could disrupt one’s sense of self; the right to fair access to mental augmentation, which would ensure equality of access to mental enhancement neurotechnologies; the right of protection from bias in the development of neurotechnology algorithms; and the right to free will, which would protect an individual’s agency from manipulation by external neurotechnologies. The group published their findings in an often-cited paper in Nature.

But while Yuste and the others were focused on the ethical implications of these emerging technologies, the technologies themselves continued to barrel ahead at a feverish speed. In 2014, the first kick of the World Cup was made by a paraplegic man using a mind-controlled robotic exoskeleton. In 2016, a man fist bumped Obama using a robotic arm that allowed him to “feel” the gesture. The following year, scientists showed that electrical stimulation of the hippocampus could improve memory, paving the way for cognitive augmentation technologies. The military, long interested in BCI technologies, built a system that allowed operators to pilot three drones simultaneously, partially with their minds. Meanwhile, a confusing maelstrom of science, science-fiction, hype, innovation, and speculation swept the private sector. By 2020, over $33 billion had been invested in hundreds of neurotech companies — about seven times what the NIH [US National Institutes of Health] had envisioned for the 12-year span of the BRAIN Initiative itself.

Now back to Tang and Huth (from Reveley’s January 18, 2024 article), Note: Links have been removed,

Central to the ethical questions Huth and Tang grappled with was the fact that their decoder, unlike other language decoders developed around the same time, was non-invasive — it didn’t require its users to undergo surgery. Because of that, their technology was free from the strict regulatory oversight that governs the medical domain. (Yuste, for his part, said he believes non-invasive BCIs pose a far greater ethical challenge than invasive systems: “The non-invasive, the commercial, that’s where the battle is going to get fought.”) Huth and Tang’s decoder faced other hurdles to widespread use — namely that fMRI machines are enormous, expensive, and stationary. But perhaps, the researchers thought, there was a way to overcome that hurdle too.

The information measured by fMRI machines — blood oxygenation levels, which indicate where blood is flowing in the brain — can also be measured with another technology, functional Near-Infrared Spectroscopy, or fNIRS. Although lower resolution than fMRI, several expensive, research-grade, wearable fNIRS headsets do approach the resolution required to work with Huth and Tang’s decoder. In fact, the scientists were able to test whether their decoder would work with such devices by simply blurring their fMRI data to simulate the resolution of research-grade fNIRS. The decoded result “doesn’t get that much worse,” Huth said.
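(As a side note, ‘blurring’ fMRI data to mimic a lower-resolution sensor is typically just spatial smoothing; here's a minimal, hypothetical sketch. The kernel width, voxel size, and volume shape are my assumptions, not the study's parameters.)

```python
# Minimal, hypothetical sketch of spatially smoothing an fMRI volume to mimic
# a lower-resolution sensor such as fNIRS; all parameters are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(4)

# Toy fMRI volume: 64 x 64 x 40 voxels of blood-oxygenation-level signal.
volume = rng.normal(size=(64, 64, 40))

voxel_mm = 2.0           # assumed isotropic voxel size
target_fwhm_mm = 20.0    # assumed target smoothness
sigma_voxels = (target_fwhm_mm / voxel_mm) / 2.355  # FWHM = 2.355 * sigma

blurred = gaussian_filter(volume, sigma=sigma_voxels)
print(volume.std(), blurred.std())  # fine spatial detail is averaged away
```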

And while such research-grade devices are currently cost-prohibitive for the average consumer, more rudimentary fNIRS headsets have already hit the market. Although these devices provide far lower resolution than would be required for Huth and Tang’s decoder to work effectively, the technology is continually improving, and Huth believes it is likely that an affordable, wearable fNIRS device will someday provide high enough resolution to be used with the decoder. In fact, he is currently teaming up with scientists at Washington University to research the development of such a device.

Even comparatively primitive BCI headsets can raise pointed ethical questions when released to the public. Devices that rely on electroencephalography, or EEG, a commonplace method of measuring brain activity by detecting electrical signals, have now become widely available — and in some cases have raised alarm. In 2019, a school in Jinhua, China, drew criticism after trialing EEG headbands that monitored the concentration levels of its pupils. (The students were encouraged to compete to see who concentrated most effectively, and reports were sent to their parents.) Similarly, in 2018 the South China Morning Post reported that dozens of factories and businesses had begun using “brain surveillance devices” to monitor workers’ emotions, in the hopes of increasing productivity and improving safety. The devices “caused some discomfort and resistance in the beginning,” Jin Jia, then a brain scientist at Ningbo University, told the reporter. “After a while, they got used to the device.”

But the primary problem with even low-resolution devices is that scientists are only just beginning to understand how information is actually encoded in brain data. In the future, powerful new decoding algorithms could discover that even raw, low-resolution EEG data contains a wealth of information about a person’s mental state at the time of collection. Consequently, nobody can definitively know what they are giving away when they allow companies to collect information from their brains.

Huth and Tang concluded that brain data, therefore, should be closely guarded, especially in the realm of consumer products. In an article on Medium from last April, Tang wrote that “decoding technology is continually improving, and the information that could be decoded from a brain scan a year from now may be very different from what can be decoded today. It is crucial that companies are transparent about what they intend to do with brain data and take measures to ensure that brain data is carefully protected.” (Yuste said the Neurorights Foundation recently surveyed the user agreements of 30 neurotech companies and found that all of them claim ownership of users’ brain data — and most assert the right to sell that data to third parties. [emphases mine]) Despite these concerns, however, Huth and Tang maintained that the potential benefits of these technologies outweighed their risks, provided the proper guardrails [emphasis mine] were put in place.

It would seem the first guardrails are being set up in South America (from Reveley’s January 18, 2024 article), Note: Links have been removed,

On a hot summer night in 2019, Yuste sat in the courtyard of an adobe hotel in the north of Chile with his close friend, the prominent Chilean doctor and then-senator Guido Girardi, observing the vast, luminous skies of the Atacama Desert and discussing, as they often did, the world of tomorrow. Girardi, who every year organizes the Congreso Futuro, Latin America’s preeminent science and technology event, had long been intrigued by the accelerating advance of technology and its paradigm-shifting impact on society — “living in the world at the speed of light,” as he called it. Yuste had been a frequent speaker at the conference, and the two men shared a conviction that scientists were birthing technologies powerful enough to disrupt the very notion of what it meant to be human.

Around midnight, as Yuste finished his pisco sour, Girardi made an intriguing proposal: What if they worked together to pass an amendment to Chile’s constitution, one that would enshrine protections for mental privacy as an inviolable right of every Chilean? It was an ambitious idea, but Girardi had experience moving bold pieces of legislation through the senate; years earlier he had spearheaded Chile’s famous Food Labeling and Advertising Law, which required companies to affix health warning labels on junk food. (The law has since inspired dozens of countries to pursue similar legislation.) With BCI, here was another chance to be a trailblazer. “I said to Rafael, ‘Well, why don’t we create the first neuro data protection law?’” Girardi recalled. Yuste readily agreed.

… Girardi led the political push, promoting a piece of legislation that would amend Chile’s constitution to protect mental privacy. The effort found surprising purchase across the political spectrum, a remarkable feat in a country famous for its political polarization. In 2021, Chile’s congress unanimously passed the constitutional amendment, which Piñera [Sebastián Piñera] swiftly signed into law. (A second piece of legislation, which would establish a regulatory framework for neurotechnology, is currently under consideration by Chile’s congress.) “There was no divide between the left or right,” recalled Girardi. “This was maybe the only law in Chile that was approved by unanimous vote.” Chile, then, had become the first country in the world to enshrine “neurorights” in its legal code.

Even before the passage of the Chilean constitutional amendment, Yuste had begun meeting regularly with Jared Genser, an international human rights lawyer who had represented such high-profile clients as Desmond Tutu, Liu Xiaobo, and Aung San Suu Kyi. (The New York Times Magazine once referred to Genser as “the extractor” for his work with political prisoners.) Yuste was seeking guidance on how to develop an international legal framework to protect neurorights, and Genser, though he had just a cursory knowledge of neurotechnology, was immediately captivated by the topic. “It’s fair to say he blew my mind in the first hour of discussion,” recalled Genser. Soon thereafter, Yuste, Genser, and a private-sector entrepreneur named Jamie Daves launched the Neurorights Foundation, a nonprofit whose first goal, according to its website, is “to protect the human rights of all people from the potential misuse or abuse of neurotechnology.”

To accomplish this, the organization has sought to engage all levels of society, from the United Nations and regional governing bodies like the Organization of American States, down to national governments, the tech industry, scientists, and the public at large. Such a wide-ranging approach, said Genser, “is perhaps insanity on our part, or grandiosity. But nonetheless, you know, it’s definitely the Wild West as it comes to talking about these issues globally, because so few people know about where things are, where they’re heading, and what is necessary.”

This general lack of knowledge about neurotech, in all strata of society, has largely placed Yuste in the role of global educator — he has met several times with U.N. Secretary-General António Guterres, for example, to discuss the potential dangers of emerging neurotech. And these efforts are starting to yield results. Guterres’s 2021 report, “Our Common Agenda,” which sets forth goals for future international cooperation, urges “updating or clarifying our application of human rights frameworks and standards to address frontier issues,” such as “neuro-technology.” Genser attributes the inclusion of this language in the report to Yuste’s advocacy efforts.

But updating international human rights law is difficult, and even within the Neurorights Foundation there are differences of opinion regarding the most effective approach. For Yuste, the ideal solution would be the creation of a new international agency, akin to the International Atomic Energy Agency — but for neurorights. “My dream would be to have an international convention about neurotechnology, just like we had one about atomic energy and about certain things, with its own treaty,” he said. “And maybe an agency that would essentially supervise the world’s efforts in neurotechnology.”

Genser, however, believes that a new treaty is unnecessary, and that neurorights can be codified most effectively by extending interpretation of existing international human rights law to include them. The International Covenant of Civil and Political Rights, for example, already ensures the general right to privacy, and an updated interpretation of the law could conceivably clarify that that clause extends to mental privacy as well.

There is no need for immediate panic (from Reveley’s January 18, 2024 article),

… while Yuste and the others continue to grapple with the complexities of international and national law, Huth and Tang have found that, for their decoder at least, the greatest privacy guardrails come not from external institutions but rather from something much closer to home — the human mind itself. Following the initial success of their decoder, as the pair read widely about the ethical implications of such a technology, they began to think of ways to assess the boundaries of the decoder’s capabilities. “We wanted to test a couple kind of principles of mental privacy,” said Huth. Simply put, they wanted to know if the decoder could be resisted.

In late 2021, the scientists began to run new experiments. First, they were curious if an algorithm trained on one person could be used on another. They found that it could not — the decoder’s efficacy depended on many hours of individualized training. Next, they tested whether the decoder could be thrown off simply by refusing to cooperate with it. Instead of focusing on the story that was playing through their headphones while inside the fMRI machine, participants were asked to complete other mental tasks, such as naming random animals, or telling a different story in their head. “Both of those rendered it completely unusable,” Huth said. “We didn’t decode the story they were listening to, and we couldn’t decode anything about what they were thinking either.”

Given how quickly this field of research is progressing, it seems like a good idea to increase efforts to establish neurorights (from Reveley’s January 18, 2024 article),

For Yuste, however, technologies like Huth and Tang’s decoder may only mark the beginning of a mind-boggling new chapter in human history, one in which the line between human brains and computers will be radically redrawn — or erased completely. A future is conceivable, he said, where humans and computers fuse permanently, leading to the emergence of technologically augmented cyborgs. “When this tsunami hits us I would say it’s not likely it’s for sure that humans will end up transforming themselves — ourselves — into maybe a hybrid species,” Yuste said. He is now focused on preparing for this future.

In the last several years, Yuste has traveled to multiple countries, meeting with a wide assortment of politicians, supreme court justices, U.N. committee members, and heads of state. And his advocacy is beginning to yield results. In August, Mexico began considering a constitutional reform that would establish the right to mental privacy. Brazil is currently considering a similar proposal, while Spain, Argentina, and Uruguay have also expressed interest, as has the European Union. In September [2023], neurorights were officially incorporated into Mexico’s digital rights charter, while in Chile, a landmark Supreme Court ruling found that Emotiv Inc, a company that makes a wearable EEG headset, violated Chile’s newly minted mental privacy law. That suit was brought by Yuste’s friend and collaborator, Guido Girardi.

“This is something that we should take seriously,” he [Huth] said. “Because even if it’s rudimentary right now, where is that going to be in five years? What was possible five years ago? What’s possible now? Where’s it gonna be in five years? Where’s it gonna be in 10 years? I think the range of reasonable possibilities includes things that are — I don’t want to say like scary enough — but like dystopian enough that I think it’s certainly a time for us to think about this.”

You can find The Neurorights Foundation here and/or read Reveley’s January 18, 2024 article on salon.com or as originally published January 3, 2024 on Undark. Finally, thank you for the article, Fletcher Reveley!