
Copyright, artificial intelligence, and thoughts about cyborgs

I’ve been holding this one for a while and now it seems like a good follow-up to yesterday’s October 20, 2025 posting about “AI and the Art of Being Human,” which touches on co-writing, and to my October 13, 2025 posting with its mention of “Who’s Afraid of AI? Arts, Sciences, and the Futures of Intelligence,” a conference and arts festival at the University of Toronto (scroll down to the “Who’s Afraid of AI …” subhead).

With the advent of some of the latest advances in artificial intelligence (AI) and its use in creative content, the view on copyright (as a form of property) seems to be shifting. In putting this post together I’ve highlighted a blog posting that focuses on copyright and AI as it is commonly viewed. Following that piece is a look at N. Katherine Hayles’ concept of AI as one of a number of cognitive assemblages and the implications of that concept where AI and copyright are concerned.

Then, it gets more complicated. What happens when your neural implant has an AI component? It’s a question asked by members of McMillan LLP, a Canadian business law firm, in their investigation of copyright. (The implication of this type of cognitive assemblage is not explicitly considered in Hayles’ work.) Following on the idea of an AI-enhanced neural implant, cyborg bugs (they too can have neural implants) are considered.

Uncomplicated vision of AI and copyright future

Glyn Moody’s May 15, 2025 posting on techdirt.com provides a very brief overview of the last 100 years of copyright and goes on to highlight some of the latest AI comments from tech industry titans, Note: Links have been removed,

For the last hundred years or so, the prevailing dogma has been that copyright is an unalloyed good [emphasis mine], and that more of it is better. Whether that was ever true is one question, but it is certainly not the case since we entered the digital era, for reasons explained at length in Walled Culture the book (free digital versions available). Despite that fact, recent attempts to halt the constant expansion and strengthening of copyright have all foundered. Part of the problem is that there has never been a constituency with enough political clout to counter the huge power of the copyright industry and its lobbyists.

Until now. The latest iteration of artificial intelligence has captured the attention of politicians around the world [emphasis mine]. It seems that the latter can’t do enough to promote and support it, in the hope of deriving huge economic benefits, both directly, in the form of local AI companies worth trillions, and indirectly, through increased efficiency and improved services. That current favoured status has given AI leaders permission to start saying the unsayable: that copyright is an obstacle to progress [emphasis mine], and should be reined in, or at least muzzled, in order to allow AI to reach its full potential. …

In its own suggestions for the AI Action Plan, Google spells out what this means:

Balanced copyright rules, such as fair use and text-and-data mining exceptions, have been critical to enabling AI systems to learn from prior knowledge and publicly available data, unlocking scientific and social advances. These exceptions allow for the use of copyrighted, publicly available material for AI training without significantly impacting rightsholders and avoid often highly unpredictable, imbalanced, and lengthy negotiations with data holders during model development or scientific experimentation. Balanced copyright laws that ensure access to publicly available scientific papers, for example, are essential for accelerating AI in science, particularly for applications that sift through scientific literature for insights or new hypotheses.

… some of the biggest personalities in the tech world have gone even further, reported here by TechCrunch:

Jack Dorsey, co-founder of Twitter (now X) and Square (now Block), sparked a weekend’s worth of debate around intellectual property, patents, and copyright, with a characteristically terse post declaring, “delete all IP law.”

X’s current owner, Elon Musk, quickly replied, “I agree.”

It’s not clear what exactly brought these comments on, but they come at a time when AI companies, including OpenAI (which Musk co-founded, competes with, and is challenging in court), are facing numerous lawsuits alleging that they’ve violated copyright to train their models.

Unsurprisingly, that bold suggestion provoked howls of outrage from various players in the copyright world. That was to be expected. But the fact that big names like Musk and Dorsey were happy to cause such a storm is indicative of the changed atmosphere in the world of copyright and beyond. Indeed, there are signs that the other main intellectual monopolies – patents and trademarks – are also under pressure. Calling into question the old ways of doing things in these fields will also weaken the presumption that copyright must be preserved in its current state.

Yes, it is interesting to see tech moguls such as Jack Dorsey and Elon Musk take a more ‘enlightened’ approach to copyright. However, there may be a few twists and turns to this story as it continues to develop.

Copyright and cognitive assemblages

I need to set the stage with something from N. Katherine Hayles’ 2025 book “Bacteria to AI: Human Futures with Our Nonhuman Symbionts.” She suggests that we (humans) will be members of cognitive assemblages that include bacteria, plants, cells, AI, and more. She then decouples cognition from consciousness and claims that entities such as bacteria are capable of ‘nonconscious cognition’.

Hayles avoids the words ‘thinking’ and ‘thought,’ preferring ‘cognition,’ for which she provides this meaning,

… “cognition is a process that interprets information within contexts that connect it with meaning” (Hayles 2017, 22 [in “Unthought: The Power of the Cognitive Nonconscious,” University of Chicago Press]) Note: Hayles quotes herself on pp. 8-9 of 2025’s “Bacteria to AI …”

Hayles then develops the notion of a cognitive assemblage made up of conscious (e.g. human) and nonconscious (e.g. AI agent) cognitions. The part that most interests me is where Hayles examines copyright and cognitive assemblages,

… what happens to the whole idea of intellectual property when an AI has perused copyrighted works during its training and incorporated them into its general sense of how to produce a picture of X or a poem about Y. Already artists and stakeholders are confronting similar issues in the age of remixing and modifying existing content. How much of a picture, or a song, needs to be altered for it not to count as copyright infringement? As legal cases like this work their way through the courts, collective intelligence will no doubt continue to spread through the cultures of developed countries, as more and more people come to rely on ChatGPT and similar models for more and more tasks. Thus our cultures edge toward the realization that the very idea of intellectual property as something owned by an individual who has exclusive rights to it may need to be rethought [emphasis mine] and reconceptualized on a basis consistent with the reality of collective intelligence [emphasis mine] and the pervasiveness of cognitive assemblages in producing products of value in the contemporary era. [pp. 226-227 in Hayles’ 2025 book, “Bacteria to AI …”]

It certainly seems as if the notion of intellectual property as personal property is being seriously challenged (and not by academics alone) but this state of affairs may be temporary. In particular, the tech titans see a benefit to loosening the rules now but what happens if they see an advantage to tightening the rules?

Neurotechnology, AI, and copyright

Neuralink states clearly that AI is part of their (and presumably other companies’) products. From “Neuralink and AI: Bridging the Gap Between Humans and Machines,” Note: Links have been removed,

The intersection of artificial intelligence (AI) and human cognition is no longer a distant sci-fi dream—it’s rapidly becoming reality. At the forefront of this revolution is Neuralink, a neurotechnology company founded by Elon Musk in 2016, dedicated to creating brain-computer interfaces (BCIs) that seamlessly connect the human brain to machines. With AI advancing at an unprecedented pace, Neuralink aims to bridge the gap between humans and technology, offering transformative possibilities for healthcare, communication, and even human evolution. In this article, we’ll explore how Neuralink and AI are reshaping our future, the science behind this innovation, its potential applications, and the ethical questions it raises.

Robbie Grant, Yue Fei, and Adelaide Egan (plus articling students Aki Kamoshida and Sara Toufic) have given their April 17, 2025 article for McMillan LLP, a Canadian business law firm, a (I couldn’t resist the wordplay) ‘thought provoking’ title: “Who Owns a Thought? Navigating Legal Issues in Neurotech.” It’s a very interesting read. Note 1: Links have been removed; Note 2: I’ve included the numbers for the footnotes but not the footnotes themselves,

The ongoing expansion of Neurotechnology (or “neurotech”) for consumers is raising questions related to privacy and ownership of one’s thoughts, as well as what will happen when technology can go beyond merely influencing humans and enter the realm of control [emphasis mine].

Last year, a group of McGill students built a mind-controlled wheelchair in just 30 days.[1] Brain2Qwerty, Meta’s neuroscience project which translates brain activity into text, claims to allow for users to “type” with their minds.[2] Neuralink, a company founded by Elon Musk [emphasis mine], is beginning clinical trials in Canada testing a fully wireless, remotely controllable device to be inserted into a user’s brain [emphasis mine].[3] This comes several years after the company released a video of a monkey playing videogames with its mind using a similar implantable device.

The authors have included a good description of neurotech, from their April 17, 2025 article,

Neurotech refers to technology that records, analyzes or modifies the neurons in the human nervous system. Neurotech can be broken down into three subcategories:

    Neuroimaging: technology that monitors brain structure and function;

    Neuromodulation: technology that influences brain function; and

    Brain-Computer Interfaces or “BCIs”: technology that facilitates direct communication between the brain’s electrical activity and an external device, sometimes referred to as brain-machine interfaces.[5]

In the medical and research context, neurotech has been deployed for decades in one form or another. Neuroimaging techniques such as EEG, MRI and PET have been used to study and analyze brain activity.[6] Neuromodulation has also been used for the treatment of various diseases, such as for deep brain stimulation for Parkinson’s disease[7] as well as for cochlear implants.[8] However, the potential for applications of neurotech beyond medical devices is a newer development, accelerated by the arrival of less intrusive neurotech devices, and innovations in artificial intelligence.

My interests here are not the same as the authors’; the focus in this posting is solely on intellectual property. From their April 17, 2025 article,

3.  Intellectual Property

As neurotech continues to advance, it is possible that it will be able to make sense of complex, subconscious data such as dreams. This will present a host of novel IP challenges, which stem from the unique nature of the data being captured, the potential for the technology to generate new insights, and the fundamental questions about ownership and rights in a realm where personal thoughts become part of the technological process.

Ownership of Summarized Data: When neurotech is able to capture subconscious thoughts, [emphasis mine] it will likely process this data into summaries that reflect aspects of an individual’s mental state. The ownership of such summaries, however, can become contentious. On the one hand, it could be argued that the individual, as the originator of their thoughts, should own the summaries. On the other hand, one could argue that the summaries would not exist but for the processing done by the technology and hence the summaries should not be owned (or exclusively owned) by the individual. The challenge may be in determining whether the summary is a transformation of the data that makes it the product of the technology, or whether it remains simply a condensed version of the individual’s thoughts, in which case it makes sense for the individual to retain ownership.

Ownership of Creative Outputs: The situation becomes more complicated if the neurotech produces creative outputs based on the subconscious thoughts captured by the technology. For example, if the neurotech uses subconscious imagery or emotions to create art, music, or other works, who owns the rights to these works? Is the individual whose thoughts were analyzed the creator of the work, or does the technology, which has facilitated and interpreted those thoughts, hold some ownership? This issue is especially pertinent in a world where AI-generated creations are already challenging traditional ideas of IP ownership. For example, in many jurisdictions, ownership of copyrightable works is tied to the individual who conceived them.[27] Uncertainty can arise in cases where works are created with neurotech, where the individual whose thoughts are captured may not be aware of the process, or their thoughts may have been altered or combined with other information to produce the works. These uncertainties could have significant implications for IP ownership, compensation, and the extent to which individuals can control or profit from the thoughts embedded in their own subconscious minds.

The reference to capturing data from subconscious thought and how that might be used in creative outputs is fascinating. This sounds like a description of one of Hayles’ cognitive assemblages with the complicating factor of a technology that is owned by a company. (Will Elon Musk be quite so cavalier about copyright when he could potentially own your thoughts and, consequently, your creative output?)

If you have the time (it’s an 11 minute read according to the authors), the whole April 17, 2025 article is worth it as the authors cover more issues (confidentiality, Health Canada oversight, etc.) than I have included here.

I also stumbled across the issue of neurotech companies and ownership of brain data (not copyright, but you can see how this all begins to converge) in a February 29, 2024 posting, “Portable and non-invasive (?) mind-reading AI (artificial intelligence) turns thoughts into text and some thoughts about the near future,” where I featured this quote (scroll down about 70% of the way),

Huth [Alexander Huth, assistant professor of Neuroscience and Computer Science at the University of Texas at Austin] and Tang [Jerry Tang, PhD student in the Department of Computer Science at the University of Texas Austin] concluded that brain data, therefore, should be closely guarded, especially in the realm of consumer products. In an article on Medium from last April, Tang wrote that “decoding technology is continually improving, and the information that could be decoded from a brain scan a year from now may be very different from what can be decoded today. It is crucial that companies are transparent about what they intend to do with brain data and take measures to ensure that brain data is carefully protected.” (Yuste [Rafael Yuste, a Columbia University neuroscientist] said the Neurorights Foundation recently surveyed the user agreements of 30 neurotech companies and found that all of them claim ownership of users’ brain data — and most assert the right to sell that data to third parties. [emphases mine]) Despite these concerns, however, Huth and Tang maintained that the potential benefits of these technologies outweighed their risks, provided the proper guardrails [emphasis mine] were put in place.

While I’m still on neurotech, there’s another aspect to be considered, as noted in my April 5, 2022 posting “Going blind when your neural implant company flirts with bankruptcy (long read).” That long read is probably 15 mins. or more.

Ending on a neurotech device/implant note, here’s a November 20, 2024 University Health Network (UHN) news release burbling happily about their new clinical trial involving Neuralink,

UHN is proud to be selected as the first hospital in Canada to perform a pioneering neurosurgical procedure involving the Neuralink implantable device as part of the CAN-PRIME study, marking a significant milestone in the field of medical innovation.

This first procedure in Canada represents an exciting new research direction in neurosurgery and will involve the implantation of a wireless brain-computer interface (BCI) at UHN’s Toronto Western Hospital, the exclusive surgical site in Canada.

“We are incredibly proud to be at the forefront of this research advancement in neurosurgery,” says Dr. Kevin Smith, UHN’s President and CEO. “This progress is a testament to the dedication and expertise of our world-leading medical and research professionals, as well as our commitment to providing the most innovative and effective treatments for patients.

“As the first and exclusive surgical site in Canada to perform this procedure, we will be continuing to shape the future of neurological care and further defining our track record for doing what hasn’t been done.”

Neuralink has received Health Canada approval to begin recruiting for this clinical trial in Canada.

The goal of the CAN-PRIME Study (short for Canadian Precise Robotically Implanted Brain-Computer Interface), according to the study synopsis, is “to evaluate the safety of our implant (N1) and surgical robot (R1) and assess the initial functionality of our BCI for enabling people with quadriplegia to control external devices with their thoughts [emphasis mine].”

Patients with limited or no ability to use both hands due to cervical spinal cord injury or amyotrophic lateral sclerosis (ALS), may be eligible for the CAN-PRIME Study.

“This landmark surgery has the potential to transform and improve outcomes for patients who previously had limited options,” says Dr. Andres Lozano, the Alan and Susan Hudson Cornerstone Chair in Neurosurgery at UHN and lead of the CAN-PRIME study at UHN.

The procedure, which combines state-of-the-art technology and advanced surgical techniques, will be carried out by a multidisciplinary team of neurosurgeons, neuroscientists and medical experts at UHN.

“This is a perfect example of how scientific discovery, technological innovation, and clinical expertise come together to develop new approaches to continuously improve patient care,” says Dr. Brad Wouters, Executive Vice President of Science & Research at UHN. “As Canada’s No. 1 research hospital, we are proud to be leading this important trial in Canada that has the goal to improve the lives of individuals living with quadriplegia or ALS.”

The procedure has already generated significant attention within the medical community and further studies are planned to assess its long-term effectiveness and safety.

UHN is recognized for finding solutions beyond boundaries, achieving firsts and leading the development and implementation of the latest breakthroughs in health care to benefit patients across Canada, and around the world.

Not just human brains: cyborg bugs and other biohybrids

Brain-computer interfaces don’t have to passively accept instructions from humans; they could also be giving instructions to humans. I don’t have anything that makes the possibility explicit except by inference. For example, let’s look at cyborg bugs, from a May 13, 2025 article, “We can turn bugs into flying, crawling RoboCops. Does that mean we should?” by Carlyn Zwarenstein for salon.com, Note: Links have been removed,

Imagine a tiny fly-like drone with delicate translucent wings and multi-lensed eyes, scouting out enemies who won’t even notice it’s there. Or a substantial cockroach-like robot, off on a little trip to check out a nuclear accident, wearing a cute little backpack, fearless, regardless of what the Geiger counter says. These little engineered creatures might engage in search and rescue — surveillance, environmental or otherwise — inspecting dangerous areas you would not want to send a human being into, like a tunnel or building that could collapse at any moment, or a facility where there’s been a gas leak.

These robots are blazing new ethical terrain. That’s because they are not animals performing tasks for humans, nor are they robots that draw inspiration from nature. The drone that looks like a fly is both machine and bug. The Madagascar hissing cockroach robot doesn’t just perfectly mimic the attributes that allow cockroaches to withstand radiation and poisonous air: it is a real life animal, and it is also a mechanical creature controlled remotely. These are tiny cyborgs, though even tinier ones exist, involving microbes like bacteria or even a type of white blood cell. Like fictional police officer Alex Murphy who is remade into RoboCop, these real-life cyborgs act via algorithms rather than free will.

Even as the technology for the creation of biohybrids, of which cyborgs are just the most ethically fraught category, has advanced in leaps and bounds, separate research on animal consciousness has been revealing the basis for considering insects just as we might other animals. (If you look at a tree of life, you will see that insects are indeed animals and therefore share part of our evolutionary history: even our nervous systems are not completely alien to theirs). Do we have the right to turn insects into cyborgs that we can control to do our bidding, including our military bidding, if they feel pain or have preferences or anxieties?

… the boundaries that keep an insect — a hawkmoth or cockroach, in one such project — under human control can be invisibly and automatically generated from the very backpack it wears, with researchers nudging it with neurostimulation pulses to guide it back within the boundaries of its invisible fence if it tries to stray away.

As a society, you can’t really say we’ve spent significant time considering the ethics of taking a living creature and using it literally as a machine, although reporter Ariel Yu, reviewing some of the factors to take into account in a 2024 story inspired by the backpack-wearing roaches, framed the ethical dilemma not in terms of the use of an animal as a machine — you could say using an ox to pull a cart is doing that — but specifically the fact that we’re now able to take direct control of an animal’s nervous system. Though as a society we haven’t really talked this through either, within the field of bioengineering, researchers are giving it some attention.
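
The “invisible fence” Zwarenstein describes is, at bottom, a simple control loop: read the insect’s position, compare it to a boundary, and fire a neurostimulation pulse that steers the insect back when it strays. Here’s a toy Python sketch of the idea; every value and function in it is invented for illustration, not taken from any actual backpack firmware.

```python
# Toy sketch of an "invisible fence" control loop for a cyborg insect.
# All values and functions are invented for illustration.
import math

FENCE_RADIUS = 1.0  # metres from the release point; arbitrary choice

def steering_pulse(x: float, y: float) -> float | None:
    """Return a steering heading (radians) if the insect is outside
    the fence, or None if it is inside and should be left alone."""
    if math.hypot(x, y) <= FENCE_RADIUS:
        return None                # inside the boundary: no stimulation
    return math.atan2(-y, -x)      # heading back toward the centre

print(steering_pulse(0.5, 0.2))    # None: inside the fence
print(steering_pulse(1.5, 0.0))    # ~3.14 (pi): steer back toward origin
```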

If it can be done to bugs and other creatures, why not us (ethics???)

The issues raised in Zwarenstein’s article could also be applied to humans. Given how I started this piece, ‘who owns a thought’ could become where did the thought come from? Could a brain-computer interface (BCI) enabled by AI be receiving thoughts from someone other than the person who has it implanted in their brain? And, if you’re the one with the BCI, how would you know? In short, could your BCI or other implant be hacked? That’s definitely a possibility researchers at Rice University (Texas, US) have prepared for according to my March 27, 2025 posting, “New security protocol to protect miniaturized wireless medical implants from cyberthreats.”

Even with no ‘interference,’ and leaving aside the question of corporate ownership, if all the thoughts weren’t ‘yours,’ would you still be you?

Symbiosis and your implant

I have a striking excerpt from a September 17, 2020 post, “Turning brain-controlled wireless electronic prostheses into reality plus some ethical points,”

This was the most recent and most directly applicable work that I could find. From a July 24, 2019 article by Liam Drew for Nature Outlook: The brain,

“It becomes part of you,” Patient 6 said, describing the technology that enabled her, after 45 years of severe epilepsy, to halt her disabling seizures. Electrodes had been implanted on the surface of her brain that would send a signal to a hand-held device when they detected signs of impending epileptic activity. On hearing a warning from the device, Patient 6 knew to take a dose of medication to halt the coming seizure.

“You grow gradually into it and get used to it, so it then becomes a part of every day,” she told Frederic Gilbert, an ethicist who studies brain–computer interfaces (BCIs) at the University of Tasmania in Hobart, Australia. “It became me,” she said. [emphasis mine]

Gilbert was interviewing six people who had participated in the first clinical trial of a predictive BCI to help understand how living with a computer that monitors brain activity directly affects individuals psychologically. Patient 6’s experience was extreme: Gilbert describes her relationship with her BCI as a “radical symbiosis”.

Symbiosis is a term, borrowed from ecology, that means an intimate co-existence of two species for mutual advantage. As technologists work towards directly connecting the human brain to computers, it is increasingly being used to describe humans’ potential relationship with artificial intelligence.

Interface technologies are divided into those that ‘read’ the brain to record brain activity and decode its meaning, and those that ‘write’ to the brain to manipulate activity in specific regions and affect their function.

Commercial research is opaque, but scientists at social-media platform Facebook are known to be pursuing brain-reading techniques for use in headsets that would convert users’ brain activity into text. And neurotechnology companies such as Kernel in Los Angeles, California, and Neuralink, founded by Elon Musk in San Francisco, California, predict bidirectional coupling in which computers respond to people’s brain activity and insert information into their neural circuitry. [emphasis mine]

This isn’t the first time I’ve used that excerpt or the first time I’ve waded into the ethics question regarding implants. For the curious, I mentioned the April 5, 2022 post “Going blind when your neural implant company flirts with bankruptcy (long read)” earlier and there’s a February 23, 2024 post “Neural (brain) implants and hype (long read)” as well as others.

So, who does own a thought?

Hayles’ notion of assemblages calls into question the idea of a ‘self’ or, if you will, an ‘I’. (Segue: Hayles will be in Toronto for the Who’s Afraid of AI? Arts, Sciences, and the Futures of Intelligence conference, October 23-24, 2025.) More questions are raised by older research about our relationships with AI (see my December 3, 2021 posting “True love with AI (artificial intelligence): The Nature of Things explores emotional and creative AI (long read)”) and by newer research (see my upcoming post “A collaborating robot as part of your “extended” body”).

While I seem to have wandered into labyrinthine philosophical questions, I suspect lawyers will work towards more concrete definitions so that any questions that arise such as ‘who owns a thought’ can be argued and resolved in court.

Portable and non-invasive (?) mind-reading AI (artificial intelligence) turns thoughts into text and some thoughts about the near future

First, here’s some of the latest research. If by ‘non-invasive’ you mean that electrodes are not being implanted in your brain, then this December 12, 2023 University of Technology Sydney (UTS) press release (also on EurekAlert) highlights non-invasive mind-reading AI via a brain-computer interface (BCI), Note: Links have been removed,

In a world-first, researchers from the GrapheneX-UTS Human-centric Artificial Intelligence Centre at the University of Technology Sydney (UTS) have developed a portable, non-invasive system that can decode silent thoughts and turn them into text. 

The technology could aid communication for people who are unable to speak due to illness or injury, including stroke or paralysis. It could also enable seamless communication between humans and machines, such as the operation of a bionic arm or robot.

The study has been selected as the spotlight paper at the NeurIPS conference, a top-tier annual meeting that showcases world-leading research on artificial intelligence and machine learning, held in New Orleans on 12 December 2023.

The research was led by Distinguished Professor CT Lin, Director of the GrapheneX-UTS HAI Centre, together with first author Yiqun Duan and fellow PhD candidate Jinzhou Zhou from the UTS Faculty of Engineering and IT.

In the study participants silently read passages of text while wearing a cap that recorded electrical brain activity through their scalp using an electroencephalogram (EEG). A demonstration of the technology can be seen in this video [See UTS press release].

The EEG wave is segmented into distinct units that capture specific characteristics and patterns from the human brain. This is done by an AI model called DeWave developed by the researchers. DeWave translates EEG signals into words and sentences by learning from large quantities of EEG data. 

“This research represents a pioneering effort in translating raw EEG waves directly into language, marking a significant breakthrough in the field,” said Distinguished Professor Lin.

“It is the first to incorporate discrete encoding techniques in the brain-to-text translation process, introducing an innovative approach to neural decoding. The integration with large language models is also opening new frontiers in neuroscience and AI,” he said.

Previous technology to translate brain signals to language has either required surgery to implant electrodes in the brain, such as Elon Musk’s Neuralink [emphasis mine], or scanning in an MRI machine, which is large, expensive, and difficult to use in daily life.

These methods also struggle to transform brain signals into word level segments without additional aids such as eye-tracking, which restrict the practical application of these systems. The new technology is able to be used either with or without eye-tracking.

The UTS research was carried out with 29 participants. This means it is likely to be more robust and adaptable than previous decoding technology that has only been tested on one or two individuals, because EEG waves differ between individuals. 

The use of EEG signals received through a cap, rather than from electrodes implanted in the brain, means that the signal is noisier. In terms of EEG translation, however, the study reported state-of-the-art performance, surpassing previous benchmarks.

“The model is more adept at matching verbs than nouns. However, when it comes to nouns, we saw a tendency towards synonymous pairs rather than precise translations, such as ‘the man’ instead of ‘the author’,” said Duan. [emphases mine; synonymous, eh? what about ‘woman’ or ‘child’ instead of the ‘man’?]

“We think this is because when the brain processes these words, semantically similar words might produce similar brain wave patterns. Despite the challenges, our model yields meaningful results, aligning keywords and forming similar sentence structures,” he said.

The translation accuracy score is currently around 40% on BLEU-1. The BLEU score is a number between zero and one that measures the similarity of the machine-translated text to a set of high-quality reference translations. The researchers hope to see this improve to a level that is comparable to traditional language translation or speech recognition programs, which is closer to 90%.

The research follows on from previous brain-computer interface technology developed by UTS in association with the Australian Defence Force [ADF] that uses brainwaves to command a quadruped robot, which is demonstrated in this ADF video [See my June 13, 2023 posting, “Mind-controlled robots based on graphene: an Australian research story” for the story and embedded video].
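
An aside on the 40% BLEU-1 figure quoted above: BLEU-1 is just clipped unigram (single-word) precision multiplied by a brevity penalty, so it’s easy to compute by hand. Here’s a minimal Python sketch; the example sentences are my own invention, not data from the UTS study.

```python
# Minimal BLEU-1 sketch: clipped unigram precision times a brevity
# penalty. The example sentences below are invented.
from collections import Counter
import math

def bleu_1(candidate: str, reference: str) -> float:
    cand = candidate.lower().split()
    ref = reference.lower().split()
    ref_counts = Counter(ref)
    # Clipped matches: each candidate word counts only as many times
    # as it appears in the reference.
    matches = sum(min(count, ref_counts[word])
                  for word, count in Counter(cand).items())
    precision = matches / len(cand) if cand else 0.0
    # Brevity penalty discourages trivially short candidates.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * precision

# A decoded guess vs. what the participant actually read.
print(bleu_1("the man walked to the car", "the author walked to the old car"))  # ~0.70
```

Roughly speaking, then, a score of 0.4 means that about 40% of the decoded words, after clipping repeats and penalizing overly short outputs, line up with the reference text.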

About one month after the research announcement regarding the University of Technology Sydney’s ‘non-invasive’ brain-computer interface (BCI), I stumbled across an in-depth piece about the field of ‘non-invasive’ mind-reading research.

Neurotechnology and neurorights

Fletcher Reveley’s January 18, 2024 article on salon.com (originally published January 3, 2024 on Undark) shows how quickly the field is developing and raises concerns, Note: Links have been removed,

One afternoon in May 2020, Jerry Tang, a Ph.D. student in computer science at the University of Texas at Austin, sat staring at a cryptic string of words scrawled across his computer screen:

“I am not finished yet to start my career at twenty without having gotten my license I never have to pull out and run back to my parents to take me home.”

The sentence was jumbled and agrammatical. But to Tang, it represented a remarkable feat: A computer pulling a thought, however disjointed, from a person’s mind.

For weeks, ever since the pandemic had shuttered his university and forced his lab work online, Tang had been at home tweaking a semantic decoder — a brain-computer interface, or BCI, that generates text from brain scans. Prior to the university’s closure, study participants had been providing data to train the decoder for months, listening to hours of storytelling podcasts while a functional magnetic resonance imaging (fMRI) machine logged their brain responses. Then, the participants had listened to a new story — one that had not been used to train the algorithm — and those fMRI scans were fed into the decoder, which used GPT1, a predecessor to the ubiquitous AI chatbot ChatGPT, to spit out a text prediction of what it thought the participant had heard. For this snippet, Tang compared it to the original story:

“Although I’m twenty-three years old I don’t have my driver’s license yet and I just jumped out right when I needed to and she says well why don’t you come back to my house and I’ll give you a ride.”

The decoder was not only capturing the gist of the original, but also producing exact matches of specific words — twenty, license. When Tang shared the results with his adviser, a UT Austin neuroscientist named Alexander Huth who had been working towards building such a decoder for nearly a decade, Huth was floored. “Holy shit,” Huth recalled saying. “This is actually working.” By the fall of 2021, the scientists were testing the device with no external stimuli at all — participants simply imagined a story and the decoder spat out a recognizable, albeit somewhat hazy, description of it. “What both of those experiments kind of point to,” said Huth, “is the fact that what we’re able to read out here was really like the thoughts, like the idea.”

The scientists brimmed with excitement over the potentially life-altering medical applications of such a device — restoring communication to people with locked-in syndrome, for instance, whose near full-body paralysis made talking impossible. But just as the potential benefits of the decoder snapped into focus, so too did the thorny ethical questions posed by its use. Huth himself had been one of the three primary test subjects in the experiments, and the privacy implications of the device now seemed visceral: “Oh my god,” he recalled thinking. “We can look inside my brain.”

Huth’s reaction mirrored a longstanding concern in neuroscience and beyond: that machines might someday read people’s minds. And as BCI technology advances at a dizzying clip, that possibility and others like it — that computers of the future could alter human identities, for example, or hinder free will — have begun to seem less remote. “The loss of mental privacy, this is a fight we have to fight today,” said Rafael Yuste, a Columbia University neuroscientist. “That could be irreversible. If we lose our mental privacy, what else is there to lose? That’s it, we lose the essence of who we are.”

Spurred by these concerns, Yuste and several colleagues have launched an international movement advocating for “neurorights” — a set of five principles Yuste argues should be enshrined in law as a bulwark against potential misuse and abuse of neurotechnology. But he may be running out of time.
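
A brief technical aside: Reveley doesn’t detail the decoder’s machinery here but, as I understand the published method, a language model (GPT-1, as noted above) proposes candidate continuations while an ‘encoding model’ predicts the brain response each candidate should evoke; a beam search then keeps the candidates whose predicted responses best match the measured fMRI scans. Here’s a heavily simplified, hypothetical Python sketch of that idea; the helper functions are placeholders, not the actual UT Austin code.

```python
# Hypothetical sketch of encoding-model-guided decoding: a language
# model proposes next words, and a beam search keeps candidates whose
# *predicted* brain response best matches the *measured* fMRI response.
# All helper functions are invented placeholders.
import numpy as np

def propose_continuations(text: str) -> list[str]:
    """Placeholder for a language model proposing next words."""
    return [text + " " + w for w in ["car", "house", "license"]]

def predict_brain_response(text: str) -> np.ndarray:
    """Placeholder encoding model: text -> predicted voxel activity."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(100)

def decode(measured: np.ndarray, beam_width: int = 2, steps: int = 3) -> str:
    beam = [""]
    for _ in range(steps):
        candidates = [c for text in beam for c in propose_continuations(text)]
        # Rank by correlation between predicted and measured activity.
        scored = sorted(
            candidates,
            key=lambda c: np.corrcoef(predict_brain_response(c), measured)[0, 1],
            reverse=True,
        )
        beam = scored[:beam_width]
    return beam[0]

print(decode(np.random.standard_normal(100)))
```

This also makes the “gist, not transcript” behaviour described above less mysterious: the decoder is choosing among plausible wordings, not reading words off neurons one by one.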

Reveley’s January 18, 2024 article provides fascinating context and is well worth reading if you have the time.

For my purposes, I’m focusing on ethics, Note: Links have been removed,

… as these and other advances propelled the field forward, and as his own research revealed the discomfiting vulnerability of the brain to external manipulation, Yuste found himself increasingly concerned by the scarce attention being paid to the ethics of these technologies. Even Obama’s multi-billion-dollar BRAIN Initiative, a government program designed to advance brain research, which Yuste had helped launch in 2013 and supported heartily, seemed to mostly ignore the ethical and societal consequences of the research it funded. “There was zero effort on the ethical side,” Yuste recalled.

Yuste was appointed to the rotating advisory group of the BRAIN Initiative in 2015, where he began to voice his concerns. That fall, he joined an informal working group to consider the issue. “We started to meet, and it became very evident to me that the situation was a complete disaster,” Yuste said. “There was no guidelines, no work done.” Yuste said he tried to get the group to generate a set of ethical guidelines for novel BCI technologies, but the effort soon became bogged down in bureaucracy. Frustrated, he stepped down from the committee and, together with a University of Washington bioethicist named Sara Goering, decided to independently pursue the issue. “Our aim here is not to contribute to or feed fear for doomsday scenarios,” the pair wrote in a 2016 article in Cell, “but to ensure that we are reflective and intentional as we prepare ourselves for the neurotechnological future.”

In the fall of 2017, Yuste and Goering called a meeting at the Morningside Campus of Columbia, inviting nearly 30 experts from all over the world in such fields as neurotechnology, artificial intelligence, medical ethics, and the law. By then, several other countries had launched their own versions of the BRAIN Initiative, and representatives from Australia, Canada [emphasis mine], China, Europe, Israel, South Korea, and Japan joined the Morningside gathering, along with veteran neuroethicists and prominent researchers. “We holed ourselves up for three days to study the ethical and societal consequences of neurotechnology,” Yuste said. “And we came to the conclusion that this is a human rights issue. These methods are going to be so powerful, that enable to access and manipulate mental activity, and they have to be regulated from the angle of human rights. That’s when we coined the term ‘neurorights.’”

The Morningside group, as it became known, identified four principal ethical priorities, which were later expanded by Yuste into five clearly defined neurorights: The right to mental privacy, which would ensure that brain data would be kept private and its use, sale, and commercial transfer would be strictly regulated; the right to personal identity, which would set boundaries on technologies that could disrupt one’s sense of self; the right to fair access to mental augmentation, which would ensure equality of access to mental enhancement neurotechnologies; the right of protection from bias in the development of neurotechnology algorithms; and the right to free will, which would protect an individual’s agency from manipulation by external neurotechnologies. The group published their findings in an often-cited paper in Nature.

But while Yuste and the others were focused on the ethical implications of these emerging technologies, the technologies themselves continued to barrel ahead at a feverish speed. In 2014, the first kick of the World Cup was made by a paraplegic man using a mind-controlled robotic exoskeleton. In 2016, a man fist bumped Obama using a robotic arm that allowed him to “feel” the gesture. The following year, scientists showed that electrical stimulation of the hippocampus could improve memory, paving the way for cognitive augmentation technologies. The military, long interested in BCI technologies, built a system that allowed operators to pilot three drones simultaneously, partially with their minds. Meanwhile, a confusing maelstrom of science, science-fiction, hype, innovation, and speculation swept the private sector. By 2020, over $33 billion had been invested in hundreds of neurotech companies — about seven times what the NIH [US National Institutes of Health] had envisioned for the 12-year span of the BRAIN Initiative itself.

Now back to Tang and Huth (from Reveley’s January 18, 2024 article), Note: Links have been removed,

Central to the ethical questions Huth and Tang grappled with was the fact that their decoder, unlike other language decoders developed around the same time, was non-invasive — it didn’t require its users to undergo surgery. Because of that, their technology was free from the strict regulatory oversight that governs the medical domain. (Yuste, for his part, said he believes non-invasive BCIs pose a far greater ethical challenge than invasive systems: “The non-invasive, the commercial, that’s where the battle is going to get fought.”) Huth and Tang’s decoder faced other hurdles to widespread use — namely that fMRI machines are enormous, expensive, and stationary. But perhaps, the researchers thought, there was a way to overcome that hurdle too.

The information measured by fMRI machines — blood oxygenation levels, which indicate where blood is flowing in the brain — can also be measured with another technology, functional Near-Infrared Spectroscopy, or fNIRS. Although lower resolution than fMRI, several expensive, research-grade, wearable fNIRS headsets do approach the resolution required to work with Huth and Tang’s decoder. In fact, the scientists were able to test whether their decoder would work with such devices by simply blurring their fMRI data to simulate the resolution of research-grade fNIRS. The decoded result “doesn’t get that much worse,” Huth said.

And while such research-grade devices are currently cost-prohibitive for the average consumer, more rudimentary fNIRS headsets have already hit the market. Although these devices provide far lower resolution than would be required for Huth and Tang’s decoder to work effectively, the technology is continually improving, and Huth believes it is likely that an affordable, wearable fNIRS device will someday provide high enough resolution to be used with the decoder. In fact, he is currently teaming up with scientists at Washington University to research the development of such a device.

Even comparatively primitive BCI headsets can raise pointed ethical questions when released to the public. Devices that rely on electroencephalography, or EEG, a commonplace method of measuring brain activity by detecting electrical signals, have now become widely available — and in some cases have raised alarm. In 2019, a school in Jinhua, China, drew criticism after trialing EEG headbands that monitored the concentration levels of its pupils. (The students were encouraged to compete to see who concentrated most effectively, and reports were sent to their parents.) Similarly, in 2018 the South China Morning Post reported that dozens of factories and businesses had begun using “brain surveillance devices” to monitor workers’ emotions, in the hopes of increasing productivity and improving safety. The devices “caused some discomfort and resistance in the beginning,” Jin Jia, then a brain scientist at Ningbo University, told the reporter. “After a while, they got used to the device.”

But the primary problem with even low-resolution devices is that scientists are only just beginning to understand how information is actually encoded in brain data. In the future, powerful new decoding algorithms could discover that even raw, low-resolution EEG data contains a wealth of information about a person’s mental state at the time of collection. Consequently, nobody can definitively know what they are giving away when they allow companies to collect information from their brains.

Huth and Tang concluded that brain data, therefore, should be closely guarded, especially in the realm of consumer products. In an article on Medium from last April, Tang wrote that “decoding technology is continually improving, and the information that could be decoded from a brain scan a year from now may be very different from what can be decoded today. It is crucial that companies are transparent about what they intend to do with brain data and take measures to ensure that brain data is carefully protected.” (Yuste said the Neurorights Foundation recently surveyed the user agreements of 30 neurotech companies and found that all of them claim ownership of users’ brain data — and most assert the right to sell that data to third parties. [emphases mine]) Despite these concerns, however, Huth and Tang maintained that the potential benefits of these technologies outweighed their risks, provided the proper guardrails [emphasis mine] were put in place.
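
One last technical aside before moving on: the ‘blurring’ test described a few paragraphs up, simulating fNIRS resolution by degrading fMRI data, is conceptually just spatial smoothing. A toy sketch follows, with stand-in data and a guessed smoothing width rather than the study’s actual parameters.

```python
# Toy sketch of "blur fMRI to simulate fNIRS": spatially smoothing a
# high-resolution volume approximates a coarser sensor. The volume is
# random stand-in data and sigma is a guess, not the study's value.
import numpy as np
from scipy.ndimage import gaussian_filter

fmri_volume = np.random.standard_normal((64, 64, 40))    # placeholder scan
simulated_fnirs = gaussian_filter(fmri_volume, sigma=3)  # larger sigma = coarser resolution
print(fmri_volume.std(), simulated_fnirs.std())          # smoothing shrinks variance
```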

It would seem the first guardrails are being set up in South America (from Reveley’s January 18, 2024 article), Note: Links have been removed,

On a hot summer night in 2019, Yuste sat in the courtyard of an adobe hotel in the north of Chile with his close friend, the prominent Chilean doctor and then-senator Guido Girardi, observing the vast, luminous skies of the Atacama Desert and discussing, as they often did, the world of tomorrow. Girardi, who every year organizes the Congreso Futuro, Latin America’s preeminent science and technology event, had long been intrigued by the accelerating advance of technology and its paradigm-shifting impact on society — “living in the world at the speed of light,” as he called it. Yuste had been a frequent speaker at the conference, and the two men shared a conviction that scientists were birthing technologies powerful enough to disrupt the very notion of what it meant to be human.

Around midnight, as Yuste finished his pisco sour, Girardi made an intriguing proposal: What if they worked together to pass an amendment to Chile’s constitution, one that would enshrine protections for mental privacy as an inviolable right of every Chilean? It was an ambitious idea, but Girardi had experience moving bold pieces of legislation through the senate; years earlier he had spearheaded Chile’s famous Food Labeling and Advertising Law, which required companies to affix health warning labels on junk food. (The law has since inspired dozens of countries to pursue similar legislation.) With BCI, here was another chance to be a trailblazer. “I said to Rafael, ‘Well, why don’t we create the first neuro data protection law?’” Girardi recalled. Yuste readily agreed.

… Girardi led the political push, promoting a piece of legislation that would amend Chile’s constitution to protect mental privacy. The effort found surprising purchase across the political spectrum, a remarkable feat in a country famous for its political polarization. In 2021, Chile’s congress unanimously passed the constitutional amendment, which Piñera [Sebastián Piñera] swiftly signed into law. (A second piece of legislation, which would establish a regulatory framework for neurotechnology, is currently under consideration by Chile’s congress.) “There was no divide between the left or right,” recalled Girardi. “This was maybe the only law in Chile that was approved by unanimous vote.” Chile, then, had become the first country in the world to enshrine “neurorights” in its legal code.

Even before the passage of the Chilean constitutional amendment, Yuste had begun meeting regularly with Jared Genser, an international human rights lawyer who had represented such high-profile clients as Desmond Tutu, Liu Xiaobo, and Aung San Suu Kyi. (The New York Times Magazine once referred to Genser as “the extractor” for his work with political prisoners.) Yuste was seeking guidance on how to develop an international legal framework to protect neurorights, and Genser, though he had just a cursory knowledge of neurotechnology, was immediately captivated by the topic. “It’s fair to say he blew my mind in the first hour of discussion,” recalled Genser. Soon thereafter, Yuste, Genser, and a private-sector entrepreneur named Jamie Daves launched the Neurorights Foundation, a nonprofit whose first goal, according to its website, is “to protect the human rights of all people from the potential misuse or abuse of neurotechnology.”

To accomplish this, the organization has sought to engage all levels of society, from the United Nations and regional governing bodies like the Organization of American States, down to national governments, the tech industry, scientists, and the public at large. Such a wide-ranging approach, said Genser, “is perhaps insanity on our part, or grandiosity. But nonetheless, you know, it’s definitely the Wild West as it comes to talking about these issues globally, because so few people know about where things are, where they’re heading, and what is necessary.”

This general lack of knowledge about neurotech, in all strata of society, has largely placed Yuste in the role of global educator — he has met several times with U.N. Secretary-General António Guterres, for example, to discuss the potential dangers of emerging neurotech. And these efforts are starting to yield results. Guterres’s 2021 report, “Our Common Agenda,” which sets forth goals for future international cooperation, urges “updating or clarifying our application of human rights frameworks and standards to address frontier issues,” such as “neuro-technology.” Genser attributes the inclusion of this language in the report to Yuste’s advocacy efforts.

But updating international human rights law is difficult, and even within the Neurorights Foundation there are differences of opinion regarding the most effective approach. For Yuste, the ideal solution would be the creation of a new international agency, akin to the International Atomic Energy Agency — but for neurorights. “My dream would be to have an international convention about neurotechnology, just like we had one about atomic energy and about certain things, with its own treaty,” he said. “And maybe an agency that would essentially supervise the world’s efforts in neurotechnology.”

Genser, however, believes that a new treaty is unnecessary, and that neurorights can be codified most effectively by extending interpretation of existing international human rights law to include them. The International Covenant of Civil and Political Rights, for example, already ensures the general right to privacy, and an updated interpretation of the law could conceivably clarify that that clause extends to mental privacy as well.

There is no need for immediate panic (from Reveley’s January 18, 2024 article),

… while Yuste and the others continue to grapple with the complexities of international and national law, Huth and Tang have found that, for their decoder at least, the greatest privacy guardrails come not from external institutions but rather from something much closer to home — the human mind itself. Following the initial success of their decoder, as the pair read widely about the ethical implications of such a technology, they began to think of ways to assess the boundaries of the decoder’s capabilities. “We wanted to test a couple kind of principles of mental privacy,” said Huth. Simply put, they wanted to know if the decoder could be resisted.

In late 2021, the scientists began to run new experiments. First, they were curious if an algorithm trained on one person could be used on another. They found that it could not — the decoder’s efficacy depended on many hours of individualized training. Next, they tested whether the decoder could be thrown off simply by refusing to cooperate with it. Instead of focusing on the story that was playing through their headphones while inside the fMRI machine, participants were asked to complete other mental tasks, such as naming random animals, or telling a different story in their head. “Both of those rendered it completely unusable,” Huth said. “We didn’t decode the story they were listening to, and we couldn’t decode anything about what they were thinking either.”

Given how quickly this field of research is progressing, it seems like a good idea to increase efforts to establish neurorights (from Reveley’s January 18, 2024 article),

For Yuste, however, technologies like Huth and Tang’s decoder may only mark the beginning of a mind-boggling new chapter in human history, one in which the line between human brains and computers will be radically redrawn — or erased completely. A future is conceivable, he said, where humans and computers fuse permanently, leading to the emergence of technologically augmented cyborgs. “When this tsunami hits us I would say it’s not likely it’s for sure that humans will end up transforming themselves — ourselves — into maybe a hybrid species,” Yuste said. He is now focused on preparing for this future.

In the last several years, Yuste has traveled to multiple countries, meeting with a wide assortment of politicians, supreme court justices, U.N. committee members, and heads of state. And his advocacy is beginning to yield results. In August, Mexico began considering a constitutional reform that would establish the right to mental privacy. Brazil is currently considering a similar proposal, while Spain, Argentina, and Uruguay have also expressed interest, as has the European Union. In September [2023], neurorights were officially incorporated into Mexico’s digital rights charter, while in Chile, a landmark Supreme Court ruling found that Emotiv Inc, a company that makes a wearable EEG headset, violated Chile’s newly minted mental privacy law. That suit was brought by Yuste’s friend and collaborator, Guido Girardi.

“This is something that we should take seriously,” he [Huth] said. “Because even if it’s rudimentary right now, where is that going to be in five years? What was possible five years ago? What’s possible now? Where’s it gonna be in five years? Where’s it gonna be in 10 years? I think the range of reasonable possibilities includes things that are — I don’t want to say like scary enough — but like dystopian enough that I think it’s certainly a time for us to think about this.”

You can find The Neurorights Foundation here and/or read Reveley’s January 18, 2024 article on salon.com or as originally published January 3, 2024 on Undark. Finally, thank you for the article, Fletcher Reveley!