
Copyright, artificial intelligence, and thoughts about cyborgs

I’ve been holding this one for a while, and now it seems like a good followup to yesterday’s October 20, 2025 posting about “AI and the Art of Being Human,” which touches on co-writing, and to my October 13, 2025 posting and its mention of “Who’s Afraid of AI? Arts, Sciences, and the Futures of Intelligence,” a conference and arts festival at the University of Toronto (scroll down to the “Who’s Afraid of AI …” subhead).

With the advent of some of the latest advances in artificial intelligence (AI) and its use in creative content, the view of copyright (as a form of property) seems to be shifting. In putting this post together, I’ve highlighted a blog posting that focuses on copyright and AI as it is commonly viewed. Following that piece is a look at N. Katherine Hayles’ concept of AI as one of a number of cognitive assemblages and the implications of that concept where AI and copyright are concerned.

Then, it gets more complicated. What happens when your neural implant has an AI component? It’s a question asked by members of McMillan LLP, a Canadian business law firm, in their investigation of copyright. (The implication of this type of cognitive assemblage is not explicitly considered in Hayles’ work.) Following on the idea of a neural implant enhanced with AI, cyborg bugs (they too can have neural implants) are considered.

Uncomplicated vision of AI and copyright future

Glyn Moody’s May 15, 2025 posting on techdirt.com provides a very brief overview of the last 100 years of copyright and goes on to highlight some of the latest AI comments from tech industry titans, Note: Links have been removed,

For the last hundred years or so, the prevailing dogma has been that copyright is an unalloyed good [emphasis mine], and that more of it is better. Whether that was ever true is one question, but it is certainly not the case since we entered the digital era, for reasons explained at length in Walled Culture the book (free digital versions available). Despite that fact, recent attempts to halt the constant expansion and strengthening of copyright have all foundered. Part of the problem is that there has never been a constituency with enough political clout to counter the huge power of the copyright industry and its lobbyists.

Until now. The latest iteration of artificial intelligence has captured the attention of politicians around the world [emphasis mine]. It seems that the latter can’t do enough to promote and support it, in the hope of deriving huge economic benefits, both directly, in the form of local AI companies worth trillions, and indirectly, through increased efficiency and improved services. That current favoured status has given AI leaders permission to start saying the unsayable: that copyright is an obstacle to progress [emphasis mine], and should be reined in, or at least muzzled, in order to allow AI to reach its full potential. …

In its own suggestions for the AI Action Plan, Google spells out what this means:

Balanced copyright rules, such as fair use and text-and-data mining exceptions, have been critical to enabling AI systems to learn from prior knowledge and publicly available data, unlocking scientific and social advances. These exceptions allow for the use of copyrighted, publicly available material for AI training without significantly impacting rightsholders and avoid often highly unpredictable, imbalanced, and lengthy negotiations with data holders during model development or scientific experimentation. Balanced copyright laws that ensure access to publicly available scientific papers, for example, are essential for accelerating AI in science, particularly for applications that sift through scientific literature for insights or new hypotheses.

… some of the biggest personalities in the tech world have gone even further, reported here by TechCrunch:

Jack Dorsey, co-founder of Twitter (now X) and Square (now Block), sparked a weekend’s worth of debate around intellectual property, patents, and copyright, with a characteristically terse post declaring, “delete all IP law.”

X’s current owner, Elon Musk, quickly replied, “I agree.”

It’s not clear what exactly brought these comments on, but they come at a time when AI companies, including OpenAI (which Musk co-founded, competes with, and is challenging in court), are facing numerous lawsuits alleging that they’ve violated copyright to train their models.

Unsurprisingly, that bold suggestion provoked howls of outrage from various players in the copyright world. That was to be expected. But the fact that big names like Musk and Dorsey were happy to cause such a storm is indicative of the changed atmosphere in the world of copyright and beyond. Indeed, there are signs that the other main intellectual monopolies – patents and trademarks – are also under pressure. Calling into question the old ways of doing things in these fields will also weaken the presumption that copyright must be preserved in its current state.

Yes, it is interesting to see tech moguls such as Jack Dorsey and Elon Musk take a more ‘enlightened’ approach to copyright. However, there may be a few twists and turns to this story as it continues to develop.

Copyright and cognitive assemblages

I need to set the stage with something from N. Katherine Hayles’ 2025 book “Bacteria to AI: Human Futures with Our Nonhuman Symbionts.” She suggests that we (humans) will be members of cognitive assemblages including bacteria, plants, cells, AI, and more. She then decouples cognition from consciousness and claims that entities such as bacteria are capable of ‘nonconscious cognition’.

Hayles avoids the words ‘thinking’ and ‘thought’ by using cognition and providing this meaning for the word,

… “cognition is a process that interprets information within contexts that connect it with meaning” (Hayles 2017, 22 [in “Unthought: The Power of the Cognitive Nonconscious,” University of Chicago Press]) Note: Hayles quotes herself on pp. 8-9 in 2025’s “Bacteria to AI …”

Hayles then develops the notion of a cognitive assemblage made up of conscious (e.g. human) and nonconscious (e.g. AI agent) cognitions. The part that most interests me is where Hayles examines copyright and cognitive assemblages,

… what happens to the whole idea of intellectual property when an AI has perused copyrighted works during its training and incorporated them into its general sense of how to produce a picture of X or a poem about Y. Already artists and stakeholders are confronting similar issues in the age of remixing and modifying existing content. How much of a picture, or a song, needs to be altered for it not to count as copyright infringement? As legal cases like this work their way through the courts, collective intelligence will no doubt continue to spread through the cultures of developed countries, as more and more people come to rely on ChatGPT and similar models for more and more tasks. Thus our cultures edge toward the realization that the very idea of intellectual property as something owned by an individual who has exclusive rights to it may need to be rethought [emphasis mine] and reconceptualized on a basis consistent with the reality of collective intelligence [emphasis mine] and the pervasiveness of cognitive assemblages in producing products of value in the contemporary era. [pp. 226 – 227 in Hayles’ 2025 book, “Bacteria to AI …”]

It certainly seems as if the notion of intellectual property as personal property is being seriously challenged (and not by academics alone) but this state of affairs may be temporary. In particular, the tech titans see a benefit to loosening the rules now but what happens if they see an advantage to tightening the rules?

Neurotechnology, AI, and copyright

Neuralink states clearly that AI is part of their (and presumably other companies’) products, from “Neuralink and AI: Bridging the Gap Between Humans and Machines,” Note: Links have been removed,

The intersection of artificial intelligence (AI) and human cognition is no longer a distant sci-fi dream—it’s rapidly becoming reality. At the forefront of this revolution is Neuralink, a neurotechnology company founded by Elon Musk in 2016, dedicated to creating brain-computer interfaces (BCIs) that seamlessly connect the human brain to machines. With AI advancing at an unprecedented pace, Neuralink aims to bridge the gap between humans and technology, offering transformative possibilities for healthcare, communication, and even human evolution. In this article, we’ll explore how Neuralink and AI are reshaping our future, the science behind this innovation, its potential applications, and the ethical questions it raises.

Robbie Grant, Yue Fei, and Adelaide Egan (plus articling students Aki Kamoshida and Sara Toufic) have given their April 17, 2025 article for McMillan LLP, a Canadian business law firm, a (I couldn’t resist the wordplay) ‘thought-provoking’ title: “Who Owns a Thought? Navigating Legal Issues in Neurotech.” It makes for a very interesting read, Note 1: Links have been removed, Note 2: I’ve included the numbers for the footnotes but not the footnotes themselves,

The ongoing expansion of Neurotechnology (or “neurotech”) for consumers is raising questions related to privacy and ownership of one’s thoughts, as well as what will happen when technology can go beyond merely influencing humans and enter the realm of control [emphasis mine].

Last year, a group of McGill students built a mind-controlled wheelchair in just 30 days.[1] Brain2Qwerty, Meta’s neuroscience project which translates brain activity into text, claims to allow for users to “type” with their minds.[2] Neuralink, a company founded by Elon Musk [emphasis mine], is beginning clinical trials in Canada testing a fully wireless, remotely controllable device to be inserted into a user’s brain [emphasis mine].[3] This comes several years after the company released a video of a monkey playing videogames with its mind using a similar implantable device.

The authors have included a good description of neurotech, from their April 17, 2025 article,

Neurotech refers to technology that records, analyzes or modifies the neurons in the human nervous system. Neurotech can be broken down into three subcategories:

    Neuroimaging: technology that monitors brain structure and function;

    Neuromodulation: technology that influences brain function; and

    Brain-Computer Interfaces or “BCIs”: technology that facilitates direct communication between the brain’s electrical activity and an external device, sometimes referred to as brain-machine interfaces.[5]

In the medical and research context, neurotech has been deployed for decades in one form or another. Neuroimaging techniques such as EEG, MRI and PET have been used to study and analyze brain activity.[6] Neuromodulation has also been used for the treatment of various diseases, such as for deep brain stimulation for Parkinson’s disease[7] as well as for cochlear implants.[8] However, the potential for applications of neurotech beyond medical devices is a newer development, accelerated by the arrival of less intrusive neurotech devices, and innovations in artificial intelligence.

My interests here are not the same as the authors’; the focus in this posting is solely on intellectual property. From their April 17, 2025 article,

3.  Intellectual Property

As neurotech continues to advance, it is possible that it will be able to make sense of complex, subconscious data such as dreams. This will present a host of novel IP challenges, which stem from the unique nature of the data being captured, the potential for the technology to generate new insights, and the fundamental questions about ownership and rights in a realm where personal thoughts become part of the technological process.

Ownership of Summarized Data: When neurotech is able to capture subconscious thoughts, [emphasis mine] it will likely process this data into summaries that reflect aspects of an individual’s mental state. The ownership of such summaries, however, can become contentious. On the one hand, it could be argued that the individual, as the originator of their thoughts, should own the summaries. On the other hand, one could argue that the summaries would not exist but for the processing done by the technology and hence the summaries should not be owned (or exclusively owned) by the individual. The challenge may be in determining whether the summary is a transformation of the data that makes it the product of the technology, or whether it remains simply a condensed version of the individual’s thoughts, in which case it makes sense for the individual to retain ownership.

Ownership of Creative Outputs: The situation becomes more complicated if the neurotech produces creative outputs based on the subconscious thoughts captured by the technology. For example, if the neurotech uses subconscious imagery or emotions to create art, music, or other works, who owns the rights to these works? Is the individual whose thoughts were analyzed the creator of the work, or does the technology, which has facilitated and interpreted those thoughts, hold some ownership? This issue is especially pertinent in a world where AI-generated creations are already challenging traditional ideas of IP ownership. For example, in many jurisdictions, ownership of copyrightable works is tied to the individual who conceived them.[27] Uncertainty can arise in cases where works are created with neurotech, where the individual whose thoughts are captured may not be aware of the process, or their thoughts may have been altered or combined with other information to produce the works. These uncertainties could have significant implications for IP ownership, compensation, and the extent to which individuals can control or profit from the thoughts embedded in their own subconscious minds.

The reference to capturing data from subconscious thought and how that might be used in creative outputs is fascinating. This sounds like a description of one of Hayles’ cognitive assemblages with the complicating factor of a technology that is owned by a company. (Will Elon Musk be quite so cavalier about copyright when he could potentially own your thoughts and, consequently, your creative output?)

If you have the time (it’s an 11 minute read according to the authors), the whole April 17, 2025 article is worth it as the authors cover more issues (confidentiality, Health Canada oversight, etc.) than I have included here.

I also stumbled across the issue of neurotech companies and ownership of brain data (not copyright but you can see how this all begins to converge) in my February 29, 2024 posting “Portable and non-invasive (?) mind-reading AI (artificial intelligence) turns thoughts into text and some thoughts about the near future” where I featured this quote (scroll down about 70% of the way),

Huth [Alexander Huth, assistant professor of Neuroscience and Computer Science at the University of Texas at Austin] and Tang [Jerry Tang, PhD student in the Department of Computer Science at the University of Texas Austin] concluded that brain data, therefore, should be closely guarded, especially in the realm of consumer products. In an article on Medium from last April, Tang wrote that “decoding technology is continually improving, and the information that could be decoded from a brain scan a year from now may be very different from what can be decoded today. It is crucial that companies are transparent about what they intend to do with brain data and take measures to ensure that brain data is carefully protected.” (Yuste [Rafael Yuste, a Columbia University neuroscientist] said the Neurorights Foundation recently surveyed the user agreements of 30 neurotech companies and found that all of them claim ownership of users’ brain data — and most assert the right to sell that data to third parties. [emphases mine]) Despite these concerns, however, Huth and Tang maintained that the potential benefits of these technologies outweighed their risks, provided the proper guardrails [emphasis mine] were put in place.

While I’m still on neurotech, there’s another aspect to be considered, as noted in my April 5, 2022 posting “Going blind when your neural implant company flirts with bankruptcy (long read).” That long read is probably 15 mins. or more.

Ending on a neurotech device/implant note, here’s a November 20, 2024 University Health Network (UHN) news release burbling happily about their new clinical trial involving Neuralink,

UHN is proud to be selected as the first hospital in Canada to perform a pioneering neurosurgical procedure involving the Neuralink implantable device as part of the CAN-PRIME study, marking a significant milestone in the field of medical innovation.

This first procedure in Canada represents an exciting new research direction in neurosurgery and will involve the implantation of a wireless brain-computer interface (BCI) at UHN’s Toronto Western Hospital, the exclusive surgical site in Canada.

“We are incredibly proud to be at the forefront of this research advancement in neurosurgery,” says Dr. Kevin Smith, UHN’s President and CEO. “This progress is a testament to the dedication and expertise of our world-leading medical and research professionals, as well as our commitment to providing the most innovative and effective treatments for patients.

“As the first and exclusive surgical site in Canada to perform this procedure, we will be continuing to shape the future of neurological care and further defining our track record for doing what hasn’t been done.”

Neuralink has received Health Canada approval to begin recruiting for this clinical trial in Canada.

The goal of the CAN-PRIME Study (short for Canadian Precise Robotically Implanted Brain-Computer Interface), according to the study synopsis, is “to evaluate the safety of our implant (N1) and surgical robot (R1) and assess the initial functionality of our BCI for enabling people with quadriplegia to control external devices with their thoughts [emphasis mine].”

Patients with limited or no ability to use both hands due to cervical spinal cord injury or amyotrophic lateral sclerosis (ALS), may be eligible for the CAN-PRIME Study.

“This landmark surgery has the potential to transform and improve outcomes for patients who previously had limited options,” says Dr. Andres Lozano, the Alan and Susan Hudson Cornerstone Chair in Neurosurgery at UHN and lead of the CAN-PRIME study at UHN.

The procedure, which combines state-of-the-art technology and advanced surgical techniques, will be carried out by a multidisciplinary team of neurosurgeons, neuroscientists and medical experts at UHN.

“This is a perfect example of how scientific discovery, technological innovation, and clinical expertise come together to develop new approaches to continuously improve patient care,” says Dr. Brad Wouters, Executive Vice President of Science & Research at UHN. “As Canada’s No. 1 research hospital, we are proud to be leading this important trial in Canada that has the goal to improve the lives of individuals living with quadriplegia or ALS.”

The procedure has already generated significant attention within the medical community and further studies are planned to assess its long-term effectiveness and safety.

UHN is recognized for finding solutions beyond boundaries, achieving firsts and leading the development and implementation of the latest breakthroughs in health care to benefit patients across Canada, and around the world.

Not just human brains: cyborg bugs and other biohybrids

Brain-computer interfaces don’t have to be passively accepting instructions from humans; they could also be giving instructions to humans. I don’t have anything that makes the possibility explicit except by inference. For example, let’s look at cyborg bugs, from a May 13, 2025 article “We can turn bugs into flying, crawling RoboCops. Does that mean we should?” by Carlyn Zwarenstein for salon.com, Note: Links have been removed,

Imagine a tiny fly-like drone with delicate translucent wings and multi-lensed eyes, scouting out enemies who won’t even notice it’s there. Or a substantial cockroach-like robot, off on a little trip to check out a nuclear accident, wearing a cute little backpack, fearless, regardless of what the Geiger counter says. These little engineered creatures might engage in search and rescue — surveillance, environmental or otherwise — inspecting dangerous areas you would not want to send a human being into, like a tunnel or building that could collapse at any moment, or a facility where there’s been a gas leak.

These robots are blazing new ethical terrain. That’s because they are not animals performing tasks for humans, nor are they robots that draw inspiration from nature. The drone that looks like a fly is both machine and bug. The Madagascar hissing cockroach robot doesn’t just perfectly mimic the attributes that allow cockroaches to withstand radiation and poisonous air: it is a real life animal, and it is also a mechanical creature controlled remotely. These are tiny cyborgs, though even tinier ones exist, involving microbes like bacteria or even a type of white blood cell. Like fictional police officer Alex Murphy who is remade into RoboCop, these real-life cyborgs act via algorithms rather than free will.

Even as the technology for the creation of biohybrids, of which cyborgs are just the most ethically fraught category, has advanced in leaps and bounds, separate research on animal consciousness has been revealing the basis for considering insects just as we might other animals. (If you look at a tree of life, you will see that insects are indeed animals and therefore share part of our evolutionary history: even our nervous systems are not completely alien to theirs). Do we have the right to turn insects into cyborgs that we can control to do our bidding, including our military bidding, if they feel pain or have preferences or anxieties?

… the boundaries that keep an insect — a hawkmoth or cockroach, in one such project — under human control can be invisibly and automatically generated from the very backpack it wears, with researchers nudging it with neurostimulation pulses to guide it back within the boundaries of its invisible fence if it tries to stray away.

As a society, you can’t really say we’ve spent significant time considering the ethics of taking a living creature and using it literally as a machine, although reporter Ariel Yu, reviewing some of the factors to take into account in a 2024 story inspired by the backpack-wearing roaches, framed the ethical dilemma not in terms of the use of an animal as a machine — you could say using an ox to pull a cart is doing that — but specifically the fact that we’re now able to take direct control of an animal’s nervous system. Though as a society we haven’t really talked this through either, within the field of bioengineering, researchers are giving it some attention.

If it can be done to bugs and other creatures, why not us (ethics???)

The issues raised in Zwarenstein’s article could also be applied to humans. Given how I started this piece, ‘who owns a thought’ could become where did the thought come from? Could a brain-computer interface (BCI) enabled by AI be receiving thoughts from someone other than the person who has it implanted in their brain? And, if you’re the one with the BCI, how would you know? In short, could your BCI or other implant be hacked? That’s definitely a possibility researchers at Rice University (Texas, US) have prepared for according to my March 27, 2025 posting, “New security protocol to protect miniaturized wireless medical implants from cyberthreats.”

Even with no ‘interference’ and setting aside the question of corporate ownership, if all the thoughts weren’t ‘yours’, would you still be you?

Symbiosis and your implant

I have a striking excerpt from my September 17, 2020 post, “Turning brain-controlled wireless electronic prostheses into reality plus some ethical points,”

This was the most recent and most directly applicable work that I could find. From a July 24, 2019 article by Liam Drew for Nature Outlook: The brain,

“It becomes part of you,” Patient 6 said, describing the technology that enabled her, after 45 years of severe epilepsy, to halt her disabling seizures. Electrodes had been implanted on the surface of her brain that would send a signal to a hand-held device when they detected signs of impending epileptic activity. On hearing a warning from the device, Patient 6 knew to take a dose of medication to halt the coming seizure.

“You grow gradually into it and get used to it, so it then becomes a part of every day,” she told Frederic Gilbert, an ethicist who studies brain–computer interfaces (BCIs) at the University of Tasmania in Hobart, Australia. “It became me,” she said. [emphasis mine]

Gilbert was interviewing six people who had participated in the first clinical trial of a predictive BCI to help understand how living with a computer that monitors brain activity directly affects individuals psychologically1. Patient 6’s experience was extreme: Gilbert describes her relationship with her BCI as a “radical symbiosis”.

Symbiosis is a term, borrowed from ecology, that means an intimate co-existence of two species for mutual advantage. As technologists work towards directly connecting the human brain to computers, it is increasingly being used to describe humans’ potential relationship with artificial intelligence.

Interface technologies are divided into those that ‘read’ the brain to record brain activity and decode its meaning, and those that ‘write’ to the brain to manipulate activity in specific regions and affect their function.

Commercial research is opaque, but scientists at social-media platform Facebook are known to be pursuing brain-reading techniques for use in headsets that would convert users’ brain activity into text. And neurotechnology companies such as Kernel in Los Angeles, California, and Neuralink, founded by Elon Musk in San Francisco, California, predict bidirectional coupling in which computers respond to people’s brain activity and insert information into their neural circuitry. [emphasis mine]

This isn’t the first time I’ve used that excerpt or the first time I’ve waded into the ethics question regarding implants. For the curious, I mentioned the April 5, 2022 post “Going blind when your neural implant company flirts with bankruptcy (long read)” earlier and there’s a February 23, 2024 post “Neural (brain) implants and hype (long read)” as well as others.

So, who does own a thought?

Hayles’ notion of assemblages calls into question the notion of a ‘self’ or, if you will, an ‘I’. (Segue: Hayles will be in Toronto for the Who’s Afraid of AI? Arts, Sciences, and the Futures of Intelligence conference, October 23 – 24, 2025.) More questions are raised by some older research about our relationships with AI (see my December 3, 2021 posting “True love with AI (artificial intelligence): The Nature of Things explores emotional and creative AI (long read)”) and by newer research (see my upcoming post “A collaborating robot as part of your “extended” body”).

While I seem to have wandered into labyrinthine philosophical questions, I suspect lawyers will work towards more concrete definitions so that any questions that arise such as ‘who owns a thought’ can be argued and resolved in court.

Memristor-based brain-computer interfaces (BCIs)

Brief digression: For anyone unfamiliar with memristors, they are, for want of better terms, devices or elements that have memory in addition to their resistive properties. (For more, see R Jagan Mohan Rao’s undated article “What is a Memristor? Principle, Advantages, Applications” on InstrumentationTools.com.)
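For readers who want something a little more concrete than ‘memory plus resistance,’ here is a minimal sketch (my own illustration, in Python, not code from any of the studies in this post) of the classic linear ion-drift memristor model popularized by HP Labs in 2008: the device’s resistance depends on an internal state variable that integrates the current that has already passed through it, which is where the ‘memory’ comes from.

```python
# A minimal, illustrative simulation of the linear ion-drift memristor model
# (the model popularized by HP Labs in 2008). A sketch for intuition only,
# not code from any of the studies mentioned in this post.
import numpy as np

R_ON, R_OFF = 100.0, 16_000.0   # resistance when fully doped / fully undoped (ohms)
D = 10e-9                        # device thickness (metres)
MU_V = 1e-14                     # dopant mobility (m^2 per volt-second)

def simulate(voltage: np.ndarray, dt: float, w0: float = 0.5 * D):
    """Drive the memristor with a voltage waveform; return current and state."""
    w = w0                                         # width of the doped region
    currents, states = [], []
    for v in voltage:
        x = w / D                                  # normalised state in [0, 1]
        resistance = R_ON * x + R_OFF * (1.0 - x)  # resistance depends on the state
        i = v / resistance
        w += MU_V * (R_ON / D) * i * dt            # the state integrates past current
        w = min(max(w, 0.0), D)                    # keep the state physical
        currents.append(i)
        states.append(x)
    return np.array(currents), np.array(states)

# A slow sinusoidal drive: plotting current against voltage should trace the
# pinched hysteresis loop that is the memristor's signature, because the
# resistance "remembers" the current that has already flowed through it.
t = np.linspace(0.0, 2.0, 20_000)
v = np.sin(2 * np.pi * 1.0 * t)                    # 1 Hz, 1 V amplitude
i, x = simulate(v, dt=t[1] - t[0])
```

The research below uses arrays of such devices not as memory alone but as the compute substrate for an adaptive brain-signal decoder.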

A March 27, 2025 news item on ScienceDaily announces a memristor-enhanced brain-computer interface (BCI),

Summary: Researchers have conducted groundbreaking research on memristor-based brain-computer interfaces (BCIs). This research presents an innovative approach for implementing energy-efficient adaptive neuromorphic decoders in BCIs that can effectively co-evolve [emphasis mine] with changing brain signals.

So, the decoder in the BCI will ‘co-evolve’ with your brain? Hmmm. Also, where is this ‘memristor chip’? The video demo (https://assets-eu.researchsquare.com/files/rs-3966063/v1/7a84dc7037b11bad96ae0378.mp4) shows a volunteer wearing a cap attached by cable to an intermediary device (an enlarged chip with a brain on it?), which is in turn attached to a screen. I believe some artistic licence has been taken with regard to the brain on the chip.

Caption: Researchers propose an adaptive neuromorphic decoder supporting brain-machine co-evolution. Credit: The University of Hong Kong

A March 25, 2025 University of Hong Kong (HKU) press release (also on EurekAlert but published on March 26, 2025), which originated the news item, explains more about memristors, BCIs, and co-evolution,

Professor Ngai Wong and Dr Zhengwu Liu from the Department of Electrical and Electronic Engineering at the Faculty of Engineering at the University of Hong Kong (HKU), in collaboration with research teams at Tsinghua University and Tianjin University, have conducted groundbreaking research on memristor-based brain-computer interfaces (BCIs). Published in Nature Electronics, this research presents an innovative approach for implementing energy-efficient adaptive neuromorphic decoders in BCIs that can effectively co-evolve with changing brain signals.

A brain-computer interface (BCI) is a computer-based system that creates a direct communication pathway between the brain and external devices, such as computers, allowing individuals to control these devices or applications purely through brain activity, bypassing the need for traditional muscle movements or the nervous system. This technology holds immense potential across a wide range of fields, from assistive technologies to neurological rehabilitation. However, traditional BCIs still face challenges.

“The brain is a complex dynamic system with signals that constantly evolve and fluctuate. This poses significant challenges for BCIs to maintain stable performance over time,” said Professor Wong and Dr Liu. “Additionally, as brain-machine links grow in complexity, traditional computing architectures struggle with real-time processing demands.”

The collaborative research addressed these challenges by developing a 128K-cell memristor chip that serves as an adaptive brain signal decoder. The team introduced a hardware-efficient one-step memristor decoding strategy that significantly reduces computational complexity while maintaining high accuracy. Dr Liu, a Research Assistant Professor in the Department of Electrical and Electronic Engineering at HKU, contributed as a co-first author to this groundbreaking work.

In real-world testing, the system demonstrated impressive capabilities in a four-degree-of-freedom drone flight control task, achieving 85.17% decoding accuracy—equivalent to software-based methods—while consuming 1,643 times less energy and offering 216 times higher normalised speed than conventional CPU-based systems.

Most significantly, the researchers developed an interactive update framework that enables the memristor decoder and brain signals to adapt to each other naturally. This co-evolution, demonstrated in experiments involving ten participants over six-hour sessions, resulted in approximately 20% higher accuracy compared to systems without co-evolution capability.

“Our work on optimising the computational models and error mitigation techniques was crucial to ensure that the theoretical advantages of memristor technology could be realised in practical BCI applications,” explained Dr Liu. “The one-step decoding approach we developed together significantly reduces both computational complexity and hardware costs, making the technology more accessible for a wide range of practical scenarios.”

Professor Wong further emphasised, “More importantly, our interactive updating framework enables co-evolution between the memristor decoder and brain signals, addressing the long-term stability issues faced by traditional BCIs. This co-evolution mechanism allows the system to adapt to natural changes in brain signals over time, greatly enhancing decoding stability and accuracy during prolonged use.”

Building on the success of this research, the team is now expanding their work through a new collaboration with HKU Li Ka Shing Faculty of Medicine and Queen Mary Hospital to develop a multimodal large language model for epilepsy data analysis.

“This new collaboration aims to extend our work on brain signal processing to the critical area of epilepsy diagnosis and treatment,” said Professor Wong and Dr Liu. “By combining our expertise in advanced algorithms and neuromorphic computing with clinical data and expertise, we hope to develop more accurate and efficient models to assist epilepsy patients.”

The research represents a significant step forward in human-centred hybrid intelligence, which combines biological brains with neuromorphic computing systems, opening new possibilities for medical applications, rehabilitation technologies, and human-machine interaction.

The project received support from the RGC Theme-based Research Scheme (TRS) project T45-701/22-R, the STI 2030-Major Projects, the National Natural Science Foundation of China, and the XPLORER Prize.

Here’s a link to and a citation for the paper,

A memristor-based adaptive neuromorphic decoder for brain–computer interfaces by Zhengwu Liu, Jie Mei, Jianshi Tang, Minpeng Xu, Bin Gao, Kun Wang, Sanchuang Ding, Qi Liu, Qi Qin, Weize Chen, Yue Xi, Yijun Li, Peng Yao, Han Zhao, Ngai Wong, He Qian, Bo Hong, Tzyy-Ping Jung, Dong Ming & Huaqiang Wu. Nature Electronics volume 8, pages 362–372 (2025) DOI: https://doi.org/10.1038/s41928-025-01340-2 Published online: 17 February 2025 Issue Date: April 2025

This paper is behind a paywall.
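Since the paper is paywalled, here is only a generic, hypothetical sketch of the kind of closed-loop adaptation the press release describes: a decoder that maps brain-signal features to commands and keeps updating itself from feedback, so that decoder and user can adapt to each other over time. The names, parameters, and the simple softmax update below are my own stand-ins, not the memristor decoder from the Nature Electronics paper.

```python
# A hypothetical sketch of an adaptive ("co-evolving") BCI decoder: a linear map
# from neural features to command classes, updated online from feedback.
# Illustrates the general idea only; not the memristor decoder from the paper.
import numpy as np

class AdaptiveDecoder:
    def __init__(self, n_features: int, n_commands: int, learning_rate: float = 0.05):
        self.weights = np.zeros((n_features, n_commands))
        self.lr = learning_rate

    def decode(self, features: np.ndarray) -> int:
        """Map a feature vector (e.g., band power per channel) to a command index."""
        return int(np.argmax(features @ self.weights))

    def update(self, features: np.ndarray, intended_command: int) -> None:
        """Online update from feedback: nudge weights toward the intended command."""
        scores = features @ self.weights
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        target = np.zeros_like(probs)
        target[intended_command] = 1.0
        # One gradient step on a softmax / cross-entropy objective.
        self.weights += self.lr * np.outer(features, target - probs)

# Toy usage: features drift slowly over "sessions" (a stand-in for changing brain
# signals), and the decoder keeps adapting instead of being trained once.
rng = np.random.default_rng(0)
decoder = AdaptiveDecoder(n_features=16, n_commands=4)
drift = np.zeros(16)
correct = 0
for trial in range(500):
    drift += 0.01 * rng.normal(size=16)               # slow nonstationarity
    command = int(rng.integers(4))
    features = rng.normal(size=16) + drift
    features[command * 4:(command + 1) * 4] += 2.0     # command-specific signature
    correct += int(decoder.decode(features) == command)
    decoder.update(features, command)                  # "interactive" feedback update
print(f"online accuracy over 500 trials: {correct / 500:.2f}")
```

The toy drift term mimics the nonstationarity the researchers emphasize: brain signals change over hours and days, so a decoder trained once and then frozen will degrade unless it keeps adapting.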

Words from the press release like “… human-centred hybrid intelligence, which combines biological brains with neuromorphic computing systems …” put me in mind of cyborgs.

Measuring brainwaves with temporary tattoo on scalp

Caption: EEG setup with e-tattoo electrodes Credit: Nanshu Lu

A December 2, 2024 news item on ScienceDaily announces development of a liquid ink that can measure brainwaves,

For the first time, scientists have invented a liquid ink that doctors can print onto a patient’s scalp to measure brain activity. The technology, presented December 2 [2024] in the Cell Press journal Cell Biomaterials, offers a promising alternative to the cumbersome process currently used for monitoring brainwaves and diagnosing neurological conditions. It also has the potential to enhance non-invasive brain-computer interface applications.

The December 2, 2024 Cell Press press release on Eurekalert, which originated the news item, claims this is a hair-friendly e-tattoo even though the model has a shaved head (perhaps that was for modeling purposes only?),

“Our innovations in sensor design, biocompatible ink, and high-speed printing pave the way for future on-body manufacturing of electronic tattoo sensors, with broad applications both within and beyond clinical settings,” says Nanshu Lu, the paper’s co-corresponding author at the University of Texas at Austin.

Electroencephalography (EEG) is an important tool for diagnosing a variety of neurological conditions, including seizures, brain tumors, epilepsy, and brain injuries. During a traditional EEG test, technicians measure the patient’s scalp with rulers and pencils, marking over a dozen spots where they will glue on electrodes, which are connected to a data-collection machine via long wires to monitor the patient’s brain activity. This setup is time consuming and cumbersome, and it can be uncomfortable for many patients, who must sit through the EEG test for hours.

Lu and her team have been pioneering the development of small sensors that track bodily signals from the surface of human skin, a technology known as electronic tattoos, or e-tattoos. Scientists have applied e-tattoos to the chest to measure heart activities, on muscles to measure how fatigued they are, and even under the armpit to measure components of sweat.

In the past, e-tattoos were usually printed on a thin layer of adhesive material before being transferred onto the skin, but this was only effective on hairless areas.

“Designing materials that are compatible with hairy skin has been a persistent challenge in e-tattoo technology,” Lu says. To overcome this, the team designed a type of liquid ink made of conductive polymers. The ink can flow through hair to reach the scalp, and once dried, it works as a thin-film sensor, picking up brain activity through the scalp.

Using a computer algorithm, the researchers can design the spots for EEG electrodes on the patient’s scalp. Then, they use a digitally controlled inkjet printer to spray a thin layer of the e-tattoo ink on to the spots. The process is quick, requires no contact, and causes no discomfort in patients, the researchers said.

The team printed e-tattoo electrodes onto the scalps of five participants with short hair. They also attached conventional EEG electrodes next to the e-tattoos. The team found that the e-tattoos performed comparably well at detecting brainwaves with minimal noise.

After six hours, the gel on the conventional electrodes started to dry out. Over a third of these electrodes failed to pick up any signal, although most of the remaining electrodes had reduced contact with the skin, resulting in less accurate signal detection. The e-tattoo electrodes, on the other hand, showed stable connectivity for at least 24 hours.

Additionally, researchers tweaked the ink’s formula and printed e-tattoo lines that run down to the base of the head from the electrodes to replace the wires used in a standard EEG test. “This tweak allowed the printed wires to conduct signals without picking up new signals along the way,” says co-corresponding author Ximin He of the University of California, Los Angeles.

The team then attached much shorter physical wires between the tattoos to a small device that collects brainwave data. The team said that in the future, they plan to embed wireless data transmitters in the e-tattoos to achieve a fully wireless EEG process.

“Our study can potentially revolutionize the way non-invasive brain-computer interface devices are designed,” says co-corresponding author José Millán of the University of Texas at Austin. Brain-computer interface devices work by recording brain activities associated with a function, such as speech or movement, and use them to control an external device without having to move a muscle. Currently, these devices often involve a large headset that is cumbersome to use. E-tattoos have the potential to replace the external device and print the electronics directly onto a patient’s head, making brain-computer interface technology more accessible, Millán says.  

Here’s a link to and a citation for the paper,

On-scalp printing of personalized electroencephalography e-tattoos by Luize Scalco de Vasconcelos, Yichen Yan, Pukar Maharjan, Satyam Kumar, Minsu Zhang, Bowen Yao, Hongbian Li, Sidi Duan, Eric Li, Eric Williams, Sandhya Tiku, Pablo Vidal, R. Sergio Solorzano-Vargas, Wen Hong, Yingjie Du, Zixiao Liu, Fumiaki Iwane, Charles Block, Andrew T. Repetski, Philip Tan, Pulin Wang, Martin G. Martín, José del R. Millán, Ximin He, Nanshu Lu. Cell Biomaterials, 2024 DOI: 10.1016/j.celbio.2024.100004 Copyright: © 2024 Elsevier Inc. All rights are reserved, including those for text and data mining, AI training, and similar technologies

This paper is open access, but you are better off downloading the PDF version.

Brain-inspired (neuromorphic) wireless system for gathering data from sensors the size of a grain of salt

This is what a sensor the size of a grain of salt looks like,

Caption: The sensor network is designed so the chips can be implanted into the body or integrated into wearable devices. Each submillimeter-sized silicon sensor mimics how neurons in the brain communicate through spikes of electrical activity. Credit: Nick Dentamaro/Brown University

A March 19, 2024 news item on Nanowerk announces this research from Brown University (Rhode Island, US), Note: A link has been removed,

Tiny chips may equal a big breakthrough for a team of scientists led by Brown University engineers.

Writing in Nature Electronics (“An asynchronous wireless network for capturing event-driven data from large populations of autonomous sensors”), the research team describes a novel approach for a wireless communication network that can efficiently transmit, receive and decode data from thousands of microelectronic chips that are each no larger than a grain of salt.

One of the potential applications is for brain (neural) implants,

Caption: Writing in Nature Electronics, the research team describes a novel approach for a wireless communication network that can efficiently transmit, receive and decode data from thousands of microelectronic chips that are each no larger than a grain of salt. Credit: Nick Dentamaro/Brown University

A March 19, 2024 Brown University news release (also on EurekAlert), which originated the news item, provides more detail about the research, Note: Links have been removed,

The sensor network is designed so the chips can be implanted into the body or integrated into wearable devices. Each submillimeter-sized silicon sensor mimics how neurons in the brain communicate through spikes of electrical activity. The sensors detect specific events as spikes and then transmit that data wirelessly in real time using radio waves, saving both energy and bandwidth.

“Our brain works in a very sparse way,” said Jihun Lee, a postdoctoral researcher at Brown and study lead author. “Neurons do not fire all the time. They compress data and fire sparsely so that they are very efficient. We are mimicking that structure here in our wireless telecommunication approach. The sensors would not be sending out data all the time — they’d just be sending relevant data as needed as short bursts of electrical spikes, and they would be able to do so independently of the other sensors and without coordinating with a central receiver. By doing this, we would manage to save a lot of energy and avoid flooding our central receiver hub with less meaningful data.”

This radiofrequency [sic] transmission scheme also makes the system scalable and tackles a common problem with current sensor communication networks: they all need to be perfectly synced to work well.

The researchers say the work marks a significant step forward in large-scale wireless sensor technology and may one day help shape how scientists collect and interpret information from these little silicon devices, especially since electronic sensors have become ubiquitous as a result of modern technology.

“We live in a world of sensors,” said Arto Nurmikko, a professor in Brown’s School of Engineering and the study’s senior author. “They are all over the place. They’re certainly in our automobiles, they are in so many places of work and increasingly getting into our homes. The most demanding environment for these sensors will always be inside the human body.”

That’s why the researchers believe the system can help lay the foundation for the next generation of implantable and wearable biomedical sensors. There is a growing need in medicine for microdevices that are efficient, unobtrusive and unnoticeable but that also operate as part of large ensembles to map physiological activity across an entire area of interest.

“This is a milestone in terms of actually developing this type of spike-based wireless microsensor,” Lee said. “If we continue to use conventional methods, we cannot collect the high channel data these applications will require in these kinds of next-generation systems.”

The events the sensors identify and transmit can be specific occurrences such as changes in the environment they are monitoring, including temperature fluctuations or the presence of certain substances.

The sensors are able to use as little energy as they do because external transceivers supply wireless power to the sensors as they transmit their data — meaning they just need to be within range of the energy waves sent out by the transceiver to get a charge. This ability to operate without needing to be plugged into a power source or battery makes them convenient and versatile for use in many different situations.

The team designed and simulated the complex electronics on a computer and has worked through several fabrication iterations to create the sensors. The work builds on previous research from Nurmikko’s lab at Brown that introduced a new kind of neural interface system called “neurograins.” This system used a coordinated network of tiny wireless sensors to record and stimulate brain activity.

“These chips are pretty sophisticated as miniature microelectronic devices, and it took us a while to get here,” said Nurmikko, who is also affiliated with Brown’s Carney Institute for Brain Science. “The amount of work and effort that is required in customizing the several different functions in manipulating the electronic nature of these sensors — that being basically squeezed to a fraction of a millimeter space of silicon — is not trivial.”

The researchers demonstrated the efficiency of their system as well as just how much it could potentially be scaled up. They tested the system using 78 sensors in the lab and found they were able to collect and send data with few errors, even when the sensors were transmitting at different times. Through simulations, they were able to show how to decode data collected from the brains of primates using about 8,000 hypothetically implanted sensors.

The researchers say next steps include optimizing the system for reduced power consumption and exploring broader applications beyond neurotechnology.

“The current work provides a methodology we can further build on,” Lee said.

Here’s a link to and a citation for the study,

An asynchronous wireless network for capturing event-driven data from large populations of autonomous sensors by Jihun Lee, Ah-Hyoung Lee, Vincent Leung, Farah Laiwalla, Miguel Angel Lopez-Gordo, Lawrence Larson & Arto Nurmikko. Nature Electronics volume 7, pages 313–324 (2024) DOI: https://doi.org/10.1038/s41928-024-01134-y Published: 19 March 2024 Issue Date: April 2024

This paper is behind a paywall.
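To make the ‘fire sparsely’ idea from the news release concrete, here is a small, hypothetical sketch of event-driven reporting: each simulated sensor stays silent until its accumulated reading crosses a threshold and only then emits a tiny time-stamped event, so the shared channel carries occasional spikes instead of a continuous stream. It illustrates the general principle only; it is not the Brown team’s actual transmission protocol.

```python
# A hypothetical sketch of event-driven ("spiking") sensor reporting: sensors stay
# silent until a reading crosses a threshold, then emit a small time-stamped event.
# Illustrates the general principle only, not the Brown team's actual protocol.
import random
from dataclasses import dataclass

@dataclass
class Event:
    sensor_id: int
    timestamp: float

class EventDrivenSensor:
    def __init__(self, sensor_id: int, threshold: float = 1.5):
        self.sensor_id = sensor_id
        self.threshold = threshold
        self.level = 0.0                           # leaky "membrane-like" accumulator

    def sample(self, reading: float, timestamp: float):
        """Integrate the reading; emit an event only when the threshold is crossed."""
        self.level += reading
        self.level *= 0.9                          # leak, so old activity fades
        if self.level >= self.threshold:
            self.level = 0.0                       # reset after firing
            return Event(self.sensor_id, timestamp)
        return None                                # stay silent: nothing to transmit

# Toy usage: 1,000 sensors sampled over time, but only threshold crossings are sent,
# so the receiver sees a sparse stream of events instead of continuous data.
sensors = [EventDrivenSensor(i) for i in range(1000)]
received = []
for step in range(100):
    t = step * 0.001
    for s in sensors:
        event = s.sample(random.gauss(0.0, 0.3), t)
        if event is not None:
            received.append(event)

print(f"{len(received)} events from {len(sensors) * 100} raw samples")
```

The payoff is in the last line: the number of transmitted events is a small fraction of the raw samples, which is where the energy and bandwidth savings come from.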

Prior to this, 2021 seems to have been a banner year for Nurmikko’s lab. There’s an August 12, 2021 Brown University news release touting publication of a then-new study in Nature Electronics, and I have an April 2, 2021 post, “BrainGate demonstrates a high-bandwidth wireless brain-computer interface (BCI),” touting an earlier 2021 published study from the lab.

Implantable brain-computer interface collaborative community (iBCI-CC) launched

That’s quite a mouthful, ‘implantable brain-computer interface collaborative community (iBCI-CC)’. I assume the organization will be popularly known by its abbreviation. A March 11, 2024 Mass General Brigham news release (also on EurekAlert) announces the iBCI-CC’s launch, Note: Mass stands for Massachusetts,

Mass General Brigham is establishing the Implantable Brain-Computer Interface Collaborative Community (iBCI-CC). This is the first Collaborative Community in the clinical neurosciences that has participation from the U.S. Food and Drug Administration (FDA).

BCIs are devices that interface with the nervous system and use software to interpret neural activity. Commonly, they are designed for improved access to communication or other technologies for people with physical disability. Implantable BCIs are investigational devices that hold the promise of unlocking new frontiers in restorative neurotechnology, offering potential breakthroughs in neurorehabilitation and in restoring function for people living with neurologic disease or injury.

The iBCI-CC (https://www.ibci-cc.org/) is a groundbreaking initiative aimed at fostering collaboration among diverse stakeholders to accelerate the development, safety and accessibility of iBCI technologies. The iBCI-CC brings together researchers, clinicians, medical device manufacturers, patient advocacy groups and individuals with lived experience of neurological conditions. This collaborative effort aims to propel the field of iBCIs forward by employing harmonized approaches that drive continuous innovation and ensure equitable access to these transformative technologies.

One of the first milestones for the iBCI-CC was to engage the participation of the FDA. “Brain-computer interfaces have the potential to restore lost function for patients suffering from a variety of neurological conditions. However, there are clinical, regulatory, coverage and payment questions that remain, which may impede patient access to this novel technology,” said David McMullen, M.D., Director of the Office of Neurological and Physical Medicine Devices in the FDA’s Center for Devices and Radiological Health (CDRH), and FDA member of the iBCI-CC. “The IBCI-CC will serve as an open venue to identify, discuss and develop approaches for overcoming these hurdles.”

The iBCI-CC will hold regular meetings open both to its members and the public to ensure inclusivity and transparency. Mass General Brigham will serve as the convener of the iBCI-CC, providing administrative support and ensuring alignment with the community’s objectives.

Over the past year, the iBCI-CC was organized by the interdisciplinary collaboration of leaders including Leigh Hochberg, MD, PhD, an internationally respected leader in BCI development and clinical testing and director of the Center for Neurotechnology and Neurorecovery at Massachusetts General Hospital; Jennifer French, MBA, executive director of the Neurotech Network and a Paralympic silver medalist; and Joe Lennerz, MD, PhD, a regulatory science expert and director of the Pathology Innovation Collaborative Community. These three organizers lead a distinguished group of Charter Signatories representing a diverse range of expertise and organizations.

“As a neurointensive care physician, I know how many patients with neurologic disorders could benefit from these devices,” said Dr. Hochberg. “Increasing discoveries in academia and the launch of multiple iBCI and related neurotech companies means that the time is right to identify common goals and metrics so that iBCIs are not only safe and effective, but also have thoroughly considered the design and function preferences of the people who hope to use them”.

Jennifer French, said, “Bringing diverse perspectives together, including those with lived experience, is a critical component to help address complex issues facing this field.” French has decades of experience working in the neurotech and patient advocacy fields. Living with a spinal cord injury, she also uses an implanted neurotech device for daily functions. “This ecosystem of neuroscience is on the cusp to collectively move the field forward by addressing access to the latest groundbreaking technology, in an equitable and ethical way. We can’t wait to engage and recruit the broader BCI community.”

Joe Lennerz, MD, PhD, emphasized, “Engaging in pre-competitive initiatives offers an often-overlooked avenue to drive meaningful progress. The collaboration of numerous thought leaders plays a pivotal role, with a crucial emphasis on regulatory engagement to unlock benefits for patients.”

The iBCI-CC is supported by key stakeholders within the Mass General Brigham system. Merit Cudkowicz, MD, MSc, chair of the Neurology Department, director of the Sean M. Healey and AMG Center for ALS at Massachusetts General Hospital, and Julianne Dorn Professor of Neurology at Harvard Medical School, said, “There is tremendous excitement in the ALS [amyotrophic lateral sclerosis, or Lou Gehrig’s disease] community for new devices that could ease and improve the ability of people with advanced ALS to communicate with their family, friends, and care partners. This important collaborative community will help to speed the development of a new class of neurologic devices to help our patients.”

Bailey McGuire, program manager of strategy and operations at Mass General Brigham’s Data Science Office, said, “We are thrilled to convene the iBCI-CC at Mass General Brigham’s DSO. By providing an administrative infrastructure, we want to help the iBCI-CC advance regulatory science and accelerate the availability of iBCI solutions that incorporate novel hardware and software that can benefit individuals with neurological conditions. We’re excited to help in this incredible space.”

For more information about the iBCI-CC, please visit https://www.ibci-cc.org/.

About Mass General Brigham

Mass General Brigham is an integrated academic health care system, uniting great minds to solve the hardest problems in medicine for our communities and the world. Mass General Brigham connects a full continuum of care across a system of academic medical centers, community and specialty hospitals, a health insurance plan, physician networks, community health centers, home care, and long-term care services. Mass General Brigham is a nonprofit organization committed to patient care, research, teaching, and service to the community. In addition, Mass General Brigham is one of the nation’s leading biomedical research organizations with several Harvard Medical School teaching hospitals. For more information, please visit massgeneralbrigham.org.

About the iBCI-CC Organizers:

Leigh Hochberg, MD, PhD is a neurointensivist at Massachusetts General Hospital’s Department of Neurology, where he directs the MGH Center for Neurotechnology and Neurorecovery. He is also the IDE Sponsor-Investigator and Director of the BrainGate clinical trials, conducted by a consortium of scientists and clinicians at Brown, Emory, MGH, VA Providence, Stanford, and UC-Davis; the L. Herbert Ballou University Professor of Engineering and Professor of Brain Science at Brown University; Senior Lecturer on Neurology at Harvard Medical School; and Associate Director, VA RR&D Center for Neurorestoration and Neurotechnology in Providence.

Jennifer French, MBA, is the Executive Director of Neurotech Network, a nonprofit organization that focuses on education and advocacy of neurotechnologies. She serves on several Boards including the IEEE Neuroethics Initiative, Institute of Neuroethics, OpenMind platform, BRAIN Initiative Multi-Council and Neuroethics Working Groups, and the American Brain Coalition. She is the author of On My Feet Again (Neurotech Press, 2013) and is co-author of Bionic Pioneers (Neurotech Press, 2014). French lives with tetraplegia due to a spinal cord injury. She is an early user of an experimental implanted neural prosthesis for paralysis and is the Past-President and Founding member of the North American SCI Consortium.

Joe Lennerz, MD PhD, serves as the Chief Scientific Officer at BostonGene, an AI analytics and genomics startup based in Boston. Dr. Lennerz obtained a PhD in neurosciences, specializing in electrophysiology. He works on biomarker development and migraine research. Additionally, he is the co-founder and leader of the Pathology Innovation Collaborative Community, a regulatory science initiative focusing on diagnostics and software as a medical device (SaMD), convened by the Medical Device Innovation Consortium. He also serves as the co-chair of the federal Clinical Laboratory Fee Schedule (CLFS) advisory panel to the Centers for Medicare & Medicaid Services (CMS).

It’s been a while since I’ve come across BrainGate (see the Leigh Hochberg bio in the above news release), which was last mentioned here in an April 2, 2021 posting, “BrainGate demonstrates a high-bandwidth wireless brain-computer interface (BCI).”

Here are two of my more recent postings about brain-computer interfaces,

This next one is an older posting but perhaps the most relevant to the announcement of this collaborative community’s purpose,

There’s a lot more on brain-computer interfaces (BCI) here; just use the term in the blog search engine.

Digi, Nano, Bio, Neuro – why should we care more about converging technologies?

Personality in focus: the convergence of biology and computer technology could make extremely sensitive data available. (Image: by-studio / AdobeStock) [downloaded from https://ethz.ch/en/news-and-events/eth-news/news/2024/05/digi-nano-bio-neuro-or-why-we-should-care-more-about-converging-technologies.html]

I gave a guest lecture some years ago where I mentioned that I thought the real issue with big data and AI (artificial intelligence) lay in combining them (or convergence). These days, it seems I was insufficiently imaginative as researchers from ETH Zurich have taken the notion much further.

From a May 7, 2024 ETH Zurich press release (also on EurekAlert),

In my research, I [Dirk Helbing, Professor of Computational Social Science at the Department of Humanities, Social and Political Sciences and associated with the Department of Computer Science at ETH Zurich] deal with the consequences of digitalisation for people, society and democracy. In this context, it is also important to keep an eye on their convergence in computer and life sciences – i.e. what becomes possible when digital technologies grow increasingly together with biotechnology, neurotechnology and nanotechnology.

Converging technologies are seen as a breeding ground for far-reaching innovations. However, they are blurring the boundaries between the physical, biological and digital worlds. Conventional regulations are becoming ineffective as a result.

In a joint study I conducted with my co-author Marcello Ienca, we have recently examined the risks and societal challenges of technological convergence – and concluded that the effects for individuals and society are far-reaching.

We would like to draw attention to the challenges and risks of converging technologies and explain why we consider it necessary to accompany technological developments internationally with strict regulations.

For several years now, everyone has been able to observe, within the context of digitalisation, the consequences of leaving technological change to market forces alone without effective regulation.

Misinformation and manipulation on the web

The Digital Manifesto was published in 2015 – almost ten years ago.1 Nine European experts, including one from ETH Zurich, issued an urgent warning against scoring, i.e. the evaluation of people, and big nudging,2 a subtle form of digital manipulation. The latter is based on personality profiles created using cookies and other surveillance data. A little later, the Cambridge Analytica scandal alerted the world to how the data analysis company had been using personalised ads (microtargeting) in an attempt to manipulate voting behaviour in democratic elections.

This has brought democracies around the world under considerable pressure. Propaganda, fake news and hate speech are polarising and sowing doubt, while privacy is on the decline. We are in the midst of an international information war for control of our minds, in which advertising companies, tech corporations, secret services and the military are fighting to exert an influence on our mindset and behaviour. The European Union has adopted the AI Act in an attempt to curb these dangers.

However, digital technologies have developed at a breathtaking pace, and new possibilities for manipulation are already emerging. The merging of digital and nanotechnology with modern biotechnology and neurotechnology makes revolutionary applications possible that had been hardly imaginable before.

Microrobots for precision medicine

In personalised medicine, for example, the advancing miniaturisation of electronics is making it increasingly possible to connect living organisms and humans with networked sensors and computing power. The WEF [World Economic Forum] proclaimed the “Internet of Bodies” as early as 2020.3, 4

One example that combines conventional medication with a monitoring function is digital pills. These could control medication and record a patient’s physiological data (see this blog post).

Experts expect sensor technology to reach the nanoscale. Magnetic nanoparticles or nanoelectronic components, i.e. tiny particles invisible to the naked eye with a diameter of up to 100 nanometres, would make it possible to transport active substances, interact with cells and record vast amounts of data on bodily functions. If introduced into the body, it is hoped that diseases could be detected at an early stage and treated in a personalised manner. This is often referred to as high-precision medicine.

Nano-electrodes record brain function

Miniaturised electrodes that can simultaneously measure and manipulate the activity of thousands of neurons, coupled with ever-improving AI tools for the analysis of brain signals, are now leading to much-discussed advances in the brain-computer interface. Brain activity mapping is also on the agenda. Thanks to nano-neurotechnology, we could soon envisage smartphones and other AI applications being controlled directly by thoughts.

“Long before precision medicine and neurotechnology work reliably, these technologies will be able to be used against people.” Dirk Helbing

Large-scale projects to map the human brain are also likely to benefit from this.5 In future, brain activity mapping will not only be able to read our thoughts and feelings but also make it possible to influence them remotely – the latter would probably be a lot more effective than previous manipulation methods like big nudging.

However, conventional electrodes are not suitable for permanent connection between cells and electronics – this requires durable and biocompatible interfaces. This has given rise to the suggestion of transmitting signals optogenetically, i.e. to control genes in special cells with light pulses.6 This would make the implementation of amazing circuits possible (see this ETH News article [November 11, 2014 press release] “Controlling genes with thoughts” ).

The downside of convergence

Admittedly, the applications mentioned above may sound futuristic, with most of them still visions or in their early stages of development. However, a lot of research is being conducted worldwide and at full speed. The military is also interested in using converging technologies for its own purposes. 7, 8

The downside of convergence is the considerable risks involved, such as state or private players gaining access to highly sensitive data and misusing it to monitor and influence people. The more connected our bodies become, the more vulnerable we will be to cybercrime and hacking. It cannot be ruled out that military applications exist already.5 One thing is clear, however: long before precision medicine and neurotechnology work reliably, these technologies will be able to be used against people.

“We need to regain control of our personal data. To do this, we need genuine informational self-determination.” Dirk Helbing

The problem is that existing regulations are specific and insufficient to keep technological convergence in check. But how are we to retain control over our lives if it becomes increasingly possible to influence our thoughts, feelings and decisions by digital means?

Converging global regulation is needed

In our recent paper we conclude that any regulation of converging technologies would have to be based on converging international regulations. Accordingly, we outline a new global regulatory framework and propose ten governance principles to close the looming regulatory gap. 9

The framework emphasises the need for safeguards to protect bodily and mental functions from unauthorised interference and to ensure personal integrity and privacy by, for example, establishing neurorights.

To minimise risks and prevent abuse, future regulations should be inclusive, transparent and trustworthy. The principle of participatory governance is key, which would have to involve all the relevant groups and ensure that the concerns of affected minorities are also taken into account in decision-​making processes.

Finally, we need to regain control of our personal data. To accomplish this, we need genuine informational self-determination. This would also have to apply to the digital twins of our body and personality, because they can be used to hack our health and our way of thinking – for good or for bad.10

With our contribution, we would like to initiate public debate about converging technologies. Despite its major relevance, we believe that too little attention is being paid to this topic. Continuous discourse on benefits, risks and sensible rules can help to steer technological convergence in such a way that it serves people instead of harming them.

Dirk Helbing wrote this article together with Marcello Ienca, who previously worked at ETH Zurich and EPFL and is now Assistant Professor of Ethics of AI and Neuroscience at the Technical University of Munich.

References

1 Digital-Manifest: Digitale Demokratie statt Datendiktatur [Digital democracy instead of data dictatorship] (2015) Spektrum der Wissenschaft

2 Sie sind das Ziel! [You are the target!] (2024) Schweizer Monat

3 The Internet of Bodies Is Here: Tackling new challenges of technology governance (2020) World Economic Forum

4 Tracking how our bodies work could change our lives (2020) World Economic Forum

5 Nanotools for Neuroscience and Brain Activity Mapping (2013) ACS Nano

6 Innovationspotenziale der Mensch-Maschine-Interaktion [Innovation potential of human-machine interaction] (2016) Deutsche Akademie der Technikwissenschaften

7 Human Augmentation – The Dawn of a New Paradigm. A strategic implications project (2021) UK Ministry of Defence

8 Behavioural change as the core of warfighting (2017) Militaire Spectator

9 Helbing D, Ienca M: Why converging technologies need converging international regulation (2024) Ethics and Information Technology

10 Who is Messing with Your Digital Twin? Body, Mind, and Soul for Sale? (2023) Dirk Helbing TEDx Talk

Here’s a second link to and citation for the paper,

Why converging technologies need converging international regulation by Dirk Helbing & Marcello Ienca. Ethics and Information Technology, Volume 26, article number 15 (2024). DOI: 10.1007/s10676-024-09756-8. Published: 28 February 2024

This paper is open access.
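Helbing’s point about “ever-improving AI tools for the analysis of brain signals” is the part of the press release that lends itself to a concrete illustration. Here is a minimal, purely hypothetical sketch (in Python) of what such a decoder looks like in its simplest form: simulated multichannel spike counts are mapped to an intended command with a linear readout. The channel count, firing rates, and labels are all invented for the example; nothing here comes from Helbing and Ienca’s paper or from any real implant.

```python
# Illustrative only: a toy linear decoder on simulated "neural" features.
# All parameters (64 channels, Poisson firing rates, two labels) are invented.
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_trials = 64, 400

# Two imagined intentions ("rest" = 0, "move" = 1); "move" trials get a small
# bump in firing rate on a random subset of channels.
labels = rng.integers(0, 2, n_trials)
tuning = rng.choice([0.0, 1.5], size=n_channels, p=[0.7, 0.3])
rates = rng.poisson(lam=5.0 + np.outer(labels, tuning))  # spike counts per bin

# Fit a least-squares linear readout (bias column appended).
X = np.hstack([rates, np.ones((n_trials, 1))])
w, *_ = np.linalg.lstsq(X, labels, rcond=None)

# Threshold the readout to recover the intended command.
predicted = (X @ w > 0.5).astype(int)
print("toy decoding accuracy:", (predicted == labels).mean())
```

Real systems add careful signal conditioning, cross-validation, and far more sophisticated models, but the basic shape, features in, intended command out, is the same.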

Neural (brain) implants and hype (long read)

There was a big splash a few weeks ago when it was announced that the brain implant from Neuralink (an Elon Musk company) had been surgically inserted into its first human patient.

Getting approval

David Tuffley, senior lecturer in Applied Ethics & CyberSecurity at Griffith University (Australia), provides a good overview of the road Neuralink took to get FDA (US Food and Drug Administration) approval for human clinical trials in his May 29, 2023 essay for The Conversation, Note: Links have been removed,

Since its founding in 2016, Elon Musk’s neurotechnology company Neuralink has had the ambitious mission to build a next-generation brain implant with at least 100 times more brain connections than devices currently approved by the US Food and Drug Administration (FDA).

The company has now reached a significant milestone, having received FDA approval to begin human trials. So what were the issues keeping the technology in the pre-clinical trial phase for as long as it was? And have these concerns been addressed?

Neuralink is making a Class III medical device known as a brain-computer interface (BCI). The device connects the brain to an external computer via a Bluetooth signal, enabling continuous communication back and forth.

The device itself is a coin-sized unit called a Link. It’s implanted within a small disk-shaped cutout in the skull using a precision surgical robot. The robot splices a thousand tiny threads from the Link to certain neurons in the brain. [emphasis mine] Each thread is about a quarter the diameter of a human hair.

The company says the device could enable precise control of prosthetic limbs, giving amputees natural motor skills. It could revolutionise treatment for conditions such as Parkinson’s disease, epilepsy and spinal cord injuries. It also shows some promise for potential treatment of obesity, autism, depression, schizophrenia and tinnitus.

Several other neurotechnology companies and researchers have already developed BCI technologies that have helped people with limited mobility regain movement and complete daily tasks.

In February 2021, Musk said Neuralink was working with the FDA to secure permission to start initial human trials later that year. But human trials didn’t commence in 2021.

Then, in March 2022, Neuralink made a further application to the FDA to establish its readiness to begin human trials.

One year and three months later, on May 25 2023, Neuralink finally received FDA approval for its first human clinical trial. Given how hard Neuralink has pushed for permission to begin, we can assume it will begin very soon. [emphasis mine]

The approval has come less than six months after the US Office of the Inspector General launched an investigation into Neuralink over potential animal welfare violations. [emphasis mine]

In accessible language, Tuffley goes on to discuss the FDA’s specific technical issues with implants and how they were addressed in his May 29, 2023 essay.

More about how Neuralink’s implant works and some concerns

Canadian Broadcasting Corporation (CBC) journalist Andrew Chang offers an almost 13-minute video, “Neuralink brain chip’s first human patient. How does it work?” Chang is a little overenthused for my taste, but he offers some good information about neural implants, along with informative graphics in his presentation.

So, Tuffley was right about Neuralink getting ready quickly for human clinical trials as you can guess from the title of Chang’s CBC video.

Jennifer Korn announced that recruitment had started in her September 20, 2023 article for CNN (Cable News Network), Note: Links have been removed,

Elon Musk’s controversial biotechnology startup Neuralink opened up recruitment for its first human clinical trial Tuesday, according to a company blog.

After receiving approval from an independent review board, Neuralink is set to begin offering brain implants to paralysis patients as part of the PRIME Study, the company said. PRIME, short for Precise Robotically Implanted Brain-Computer Interface, is being carried out to evaluate both the safety and functionality of the implant.

Trial patients will have a chip surgically placed in the part of the brain that controls the intention to move. The chip, installed by a robot, will then record and send brain signals to an app, with the initial goal being “to grant people the ability to control a computer cursor or keyboard using their thoughts alone,” the company wrote.

Those with quadriplegia [sometimes known as tetraplegia] due to cervical spinal cord injury or amyotrophic lateral sclerosis (ALS) may qualify for the six-year-long study – 18 months of at-home and clinic visits followed by follow-up visits over five years. Interested people can sign up in the patient registry on Neuralink’s website.

Musk has been working on Neuralink’s goal of using implants to connect the human brain to a computer for five years, but the company so far has only tested on animals. The company also faced scrutiny after a monkey died in project testing in 2022 as part of efforts to get the animal to play Pong, one of the first video games.

I mentioned three Reuters investigative journalists who were reporting on Neuralink’s animal abuse allegations (emphasized in Tuffley’s essay) in a July 7, 2023 posting, “Global dialogue on the ethics of neurotechnology on July 13, 2023 led by UNESCO.” Later that year, Neuralink was cleared by the US Department of Agriculture (see September 24, 2023 article by Mahnoor Jehangir for BNN Breaking).

Plus, Neuralink was being investigated over more allegations according to a February 9, 2023 article by Rachel Levy for Reuters, this time regarding hazardous pathogens,

The U.S. Department of Transportation said on Thursday it is investigating Elon Musk’s brain-implant company Neuralink over the potentially illegal movement of hazardous pathogens.

A Department of Transportation spokesperson told Reuters about the probe after the Physicians Committee of Responsible Medicine (PCRM), an animal-welfare advocacy group, wrote to Secretary of Transportation Pete Buttigieg earlier on Thursday to alert it of records it obtained on the matter.

PCRM said it obtained emails and other documents that suggest unsafe packaging and movement of implants removed from the brains of monkeys. These implants may have carried infectious diseases in violation of federal law, PCRM said.

There’s an update about the hazardous materials in the next section. Spoiler alert, the company got fined.

Neuralink’s first human implant

A January 30, 2024 article (Associated Press with files from Reuters) on the Canadian Broadcasting Corporation’s (CBC) online news webspace heralded the latest about Neuralink’s human clinical trials,

The first human patient received an implant from Elon Musk’s computer-brain interface company Neuralink over the weekend, the billionaire says.

In a post Monday [January 29, 2024] on X, the platform formerly known as Twitter, Musk said that the patient received the implant the day prior and was “recovering well.” He added that “initial results show promising neuron spike detection.”

Spikes are activity by neurons, which the National Institutes of Health describe as cells that use electrical and chemical signals to send information around the brain and to the body.

The billionaire, who owns X and co-founded Neuralink, did not provide additional details about the patient.

When Neuralink announced in September [2023] that it would begin recruiting people, the company said it was searching for individuals with quadriplegia due to cervical spinal cord injury or amyotrophic lateral sclerosis, commonly known as ALS or Lou Gehrig’s disease.

Neuralink reposted Musk’s Monday [January 29, 2024] post on X, but did not publish any additional statements acknowledging the human implant. The company did not immediately respond to requests for comment from The Associated Press or Reuters on Tuesday [January 30, 2024].

In a separate Monday [January 29, 2024] post on X, Musk said that the first Neuralink product is called “Telepathy” — which, he said, will enable users to control their phones or computers “just by thinking.” He said initial users would be those who have lost use of their limbs.

The startup’s PRIME Study is a trial for its wireless brain-computer interface to evaluate the safety of the implant and surgical robot.
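For readers wondering what “control a computer cursor or keyboard using their thoughts alone” means in engineering terms, here is a toy sketch of the kind of loop described in the BCI research literature: windowed neural activity is decoded into a two-dimensional velocity, which is then integrated into a cursor position. It is purely illustrative; the decoder weights, update rate, and simulated spike counts are invented, and Neuralink has not published the details of its own pipeline.

```python
# A toy cursor-control loop: decode simulated neural features into a 2-D
# velocity and integrate it into a cursor position. Generic illustration only;
# not Neuralink's "Telepathy" pipeline, whose internals are not public.
import numpy as np

rng = np.random.default_rng(1)
n_channels = 32
W = rng.normal(scale=0.05, size=(2, n_channels))  # stand-in decoder weights

cursor = np.zeros(2)   # x, y position on screen
dt = 0.02              # 50 Hz update rate, an assumed value

for step in range(100):
    # In a real system this window would arrive over the implant's radio link.
    spike_counts = rng.poisson(lam=4.0, size=n_channels)
    velocity = W @ spike_counts          # decoded (vx, vy)
    cursor = cursor + velocity * dt      # integrate velocity into position
    if step % 25 == 0:
        print(f"t={step*dt:.2f}s cursor={cursor.round(3)}")
```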

Now for the hazardous materials, from the same January 30, 2024 article, Note: A link has been removed,

Earlier this month [January 2024], a Reuters investigation found that Neuralink was fined for violating U.S. Department of Transportation (DOT) rules regarding the movement of hazardous materials. During inspections of the company’s facilities in Texas and California in February 2023, DOT investigators found the company had failed to register itself as a transporter of hazardous material.

They also found improper packaging of hazardous waste, including the flammable liquid Xylene. Xylene can cause headaches, dizziness, confusion, loss of muscle co-ordination and even death, according to the U.S. Centers for Disease Control and Prevention.

The records do not say why Neuralink would need to transport hazardous materials or whether any harm resulted from the violations.

Skeptical thoughts about Elon Musk and Neuralink

Earlier this month (February 2024), the British Broadcasting Corporation (BBC) published an article by health reporters Jim Reed and Joe McFadden that highlights the history of brain implants and their possibilities, and notes some of Elon Musk’s more outrageous claims for Neuralink’s brain implants,

Elon Musk is no stranger to bold claims – from his plans to colonise Mars to his dreams of building transport links underneath our biggest cities. This week the world’s richest man said his Neuralink division had successfully implanted its first wireless brain chip into a human.

Is he right when he says this technology could – in the long term – save the human race itself?

Sticking electrodes into brain tissue is really nothing new.

In the 1960s and 70s electrical stimulation was used to trigger or suppress aggressive behaviour in cats. By the early 2000s monkeys were being trained to move a cursor around a computer screen using just their thoughts.

“It’s nothing novel, but implantable technology takes a long time to mature, and reach a stage where companies have all the pieces of the puzzle, and can really start to put them together,” says Anne Vanhoestenberghe, professor of active implantable medical devices, at King’s College London.

Neuralink is one of a growing number of companies and university departments attempting to refine and ultimately commercialise this technology. The focus, at least to start with, is on paralysis and the treatment of complex neurological conditions.

Reed and McFadden’s February 2024 BBC article describes a few of the other brain implant efforts, Note: Links have been removed,

One of its [Neuralink’s] main rivals, a start-up called Synchron backed by funding from investment firms controlled by Bill Gates and Jeff Bezos, has already implanted its stent-like device into 10 patients.

Back in December 2021, Philip O’Keefe, a 62-year old Australian who lives with a form of motor neurone disease, composed the first tweet using just his thoughts to control a cursor.

And researchers at Lausanne University in Switzerland have shown it is possible for a paralysed man to walk again by implanting multiple devices to bypass damage caused by a cycling accident.

In a research paper published this year, they demonstrated a signal could be beamed down from a device in his brain to a second device implanted at the base of his spine, which could then trigger his limbs to move.

Some people living with spinal injuries are sceptical about the sudden interest in this new kind of technology.

“These breakthroughs get announced time and time again and don’t seem to be getting any further along,” says Glyn Hayes, who was paralysed in a motorbike accident in 2017, and now runs public affairs for the Spinal Injuries Association.

“If I could have anything back, it wouldn’t be the ability to walk. It would be putting more money into a way of removing nerve pain, for example, or ways to improve bowel, bladder and sexual function.” [emphasis mine]

Musk, however, is focused on something far more grand for Neuralink implants, from Reed and McFadden’s February 2024 BBC article, Note: A link has been removed,

But for Elon Musk, “solving” brain and spinal injuries is just the first step for Neuralink.

The longer-term goal is “human/AI symbiosis” [emphasis mine], something he describes as “species-level important”.

Musk himself has already talked about a future where his device could allow people to communicate with a phone or computer “faster than a speed typist or auctioneer”.

In the past, he has even said saving and replaying memories may be possible, although he recognised “this is sounding increasingly like a Black Mirror episode.”

One of the experts quoted in Reed and McFadden’s February 2024 BBC article asks a pointed question,

… “At the moment, I’m struggling to see an application that a consumer would benefit from, where they would take the risk of invasive surgery,” says Prof Vanhoestenberghe.

“You’ve got to ask yourself, would you risk brain surgery just to be able to order a pizza on your phone?”

Rae Hodge’s February 11, 2024 article about Elon Musk and his hyped-up Neuralink implant for Salon is worth reading in its entirety but for those who don’t have the time or need a little persuading, here are a few excerpts, Note 1: This is a warning; Hodge provides more detail about the animal cruelty allegations; Note 2: Links have been removed,

Elon Musk’s controversial brain-computer interface (BCI) tech, Neuralink, has supposedly been implanted in its first recipient — and as much as I want to see progress for treatment of paralysis and neurodegenerative disease, I’m not celebrating. I bet the neuroscientists he reportedly drove out of the company aren’t either, especially not after seeing the gruesome torture of test monkeys and apparent cover-up that paved the way for this moment. 

All of which is an ethics horror show on its own. But the timing of Musk’s overhyped implant announcement gives it an additional insulting subtext. Football players are currently in a battle for their lives against concussion-based brain diseases that plague autopsy reports of former NFL players. And Musk’s boast of false hope came just two weeks before living players take the field in the biggest and most brutal game of the year. [2024 Super Bowl LVIII]

ESPN’s Kevin Seifert reports neuro-damage is up this year as “players suffered a total of 52 concussions from the start of training camp to the beginning of the regular season. The combined total of 213 preseason and regular season concussions was 14% higher than 2021 but within range of the three-year average from 2018 to 2020 (203).”

I’m a big fan of body-tech: pacemakers, 3D-printed hips and prosthetic limbs that allow you to wear your wedding ring again after 17 years. Same for brain chips. But BCI is the slow-moving front of body-tech development for good reason. The brain is too understudied. Consequences of the wrong move are dire. Overpromising marketable results on profit-driven timelines — on the backs of such a small community of researchers in a relatively new field — would be either idiotic or fiendish. 

Brown University’s research in the sector goes back to the 1990s. Since the emergence of a floodgate-opening 2002 study and the first implant in 2004 by med-tech company BrainGate, more promising results have inspired broader investment into careful research. But BrainGate’s clinical trials started back in 2009, and as noted by Business Insider’s Hilary Brueck, are expected to continue until 2038 — with only 15 participants who have devices installed. 

Anne Vanhoestenberghe is a professor of active implantable medical devices at King’s College London. In a recent release, she cautioned against the kind of hype peddled by Musk.

“Whilst there are a few other companies already using their devices in humans and the neuroscience community have made remarkable achievements with those devices, the potential benefits are still significantly limited by technology,” she said. “Developing and validating core technology for long term use in humans takes time and we need more investments to ensure we do the work that will underpin the next generation of BCIs.” 

Neuralink is a metal coin in your head that connects to something as flimsy as an app. And we’ve seen how Elon treats those. We’ve also seen corporate goons steal a veteran’s prosthetic legs — and companies turn brain surgeons and dentists into repo-men by having them yank anti-epilepsy chips out of people’s skulls, and dentures out of their mouths. 

“I think we have a chance with Neuralink to restore full-body functionality to someone who has a spinal cord injury,” Musk said at a 2023 tech summit, adding that the chip could possibly “make up for whatever lost capacity somebody has.”

Maybe BCI can. But only in the careful hands of scientists who don’t have Musk squawking “go faster!” over their shoulders. His greedy frustration with the speed of BCI science is telling, as is the animal cruelty it reportedly prompted.

There have been other examples of Musk’s grandiosity. Notably, David Lee expressed skepticism about hyperloop in his August 13, 2013 article for BBC News online,

Is Elon Musk’s Hyperloop just a pipe dream?

Much like the pun in the headline, the bright idea of transporting people using some kind of vacuum-like tube is neither new nor imaginative.

There was Robert Goddard, considered the “father of modern rocket propulsion”, who claimed in 1909 that his vacuum system could suck passengers from Boston to New York at 1,200mph.

And then there were Soviet plans for an amphibious monorail – mooted in 1934 – in which two long pods would start their journey attached to a metal track before flying off the end and slipping into the water like a two-fingered Kit Kat dropped into some tea.

So ever since inventor and entrepreneur Elon Musk hit the world’s media with his plans for the Hyperloop, a healthy dose of scepticism has been in the air.

“This is by no means a new idea,” says Rod Muttram, formerly of Bombardier Transportation and Railtrack.

“It has been previously suggested as a possible transatlantic transport system. The only novel feature I see is the proposal to put the tubes above existing roads.”

Here’s the latest I’ve found on hyperloop, from the Hyperloop Wikipedia entry,

As of 2024, some companies continued to pursue technology development under the hyperloop moniker; however, one of the biggest, well-funded players, Hyperloop One, declared bankruptcy and ceased operations in 2023.[15]

Musk is impatient and impulsive as noted in a September 12, 2023 posting by Mike Masnick on Techdirt, Note: A link has been removed,

The Batshit Crazy Story Of The Day Elon Musk Decided To Personally Rip Servers Out Of A Sacramento Data Center

Back on Christmas Eve [December 24, 2022] of last year there were some reports that Elon Musk was in the process of shutting down Twitter’s Sacramento data center. In that article, a number of ex-Twitter employees were quoted about how much work it would be to do that cleanly, noting that there’s a ton of stuff hardcoded in Twitter code referring to that data center (hold that thought).

That same day, Elon tweeted out that he had “disconnected one of the more sensitive server racks.”

Masnick follows with a story of reckless behaviour from someone who should have known better.

Ethics of implants—where to look for more information

While Musk doesn’t use the term when he describes a “human/AI symbiosis” (presumably by way of a neural implant), he’s talking about a cyborg. Here’s a 2018 paper, which looks at some of the implications,

Do you want to be a cyborg? The moderating effect of ethics on neural implant acceptance by Eva Reinares-Lara, Cristina Olarte-Pascual, and Jorge Pelegrín-Borondo. Computers in Human Behavior, Volume 85, August 2018, Pages 43-53. DOI: https://doi.org/10.1016/j.chb.2018.03.032

This paper is open access.

Getting back to Neuralink, I have two blog posts that discuss the company and the ethics of brain implants from way back in 2021.

First, there’s Jazzy Benes’ March 1, 2021 posting on Santa Clara University’s Markkula Center for Applied Ethics blog. It stands out as it includes a discussion of the disabled community’s issues, Note: Links have been removed,

In the heart of Silicon Valley we are constantly enticed by the newest technological advances. With the big influencers Grimes [a Canadian musician and the mother of three children with Elon Musk] and Lil Uzi Vert publicly announcing their willingness to become experimental subjects for Elon Musk’s Neuralink brain implantation device, we are left wondering if future technology will actually give us “the knowledge of the Gods.” Is it part of the natural order for humans to become omniscient beings? Who will have access to the devices? What other ethical considerations must be discussed before releasing such technology to the public?

A significant issue that arises from developing technologies for the disabled community is the assumption that disabled persons desire the abilities of what some abled individuals may define as “normal.” Individuals with disabilities may object to technologies intended to make them fit an able-bodied norm. “Normal” is relative to each individual, and it could be potentially harmful to use a deficit view of disability, which means judging a disability as a deficiency. However, this is not to say that all disabled individuals will reject a technology that may enhance their abilities. Instead, I believe it is a consideration that must be recognized when developing technologies for the disabled community, and it can only be addressed through communication with disabled persons. As a result, I believe this is a conversation that must be had with the community for whom the technology is developed–disabled persons.

With technologies that aim to address disabilities, we walk a fine line between therapeutics and enhancement. Though not the first neural implant medical device, the Link may have been the first BCI system openly discussed for its potential transhumanism uses, such as “enhanced cognitive abilities, memory storage and retrieval, gaming, telepathy, and even symbiosis with machines.” …

Benes also discusses transhumanism, privacy issues, and consent issues. It’s a thoughtful reading experience.

Second is a July 9, 2021 posting by anonymous on the University of California at Berkeley School of Information blog which provides more insight into privacy and other issues associated with data collection (and introduced me to the concept of decisional interference),

As the development of microchips furthers and advances in neuroscience occur, the possibility for seamless brain-machine interfaces, where a device decodes inputs from the user’s brain to perform functions, becomes more of a reality. Various forms of these technologies already exist. However, technological advances have made implantable and portable devices possible. Imagine a future where humans don’t need to talk to each other, but rather can transmit their thoughts directly to another person. This idea is the eventual goal of Elon Musk, the founder of Neuralink. Currently, Neuralink is one of the main companies involved in the advancement of this type of technology. Analysis of the Neuralink’s technology and their overall mission statement provide an interesting insight into the future of this type of human-computer interface and the potential privacy and ethical concerns with this technology.

As this technology further develops, several privacy and ethical concerns come into question. To begin, using Solove’s Taxonomy as a privacy framework, many areas of potential harm are revealed. In the realm of information collection, there is much risk. Brain-computer interfaces, depending on where they are implanted, could have access to people’s most private thoughts and emotions. This information would need to be transmitted to another device for processing. The collection of this information by companies such as advertisers would represent a major breach of privacy. Additionally, there is risk to the user from information processing. These devices must work concurrently with other devices and often wirelessly. Given the widespread importance of cloud computing in much of today’s technology, offloading information from these devices to the cloud would be likely. Having the data stored in a database puts the user at the risk of secondary use if proper privacy policies are not implemented. The trove of information stored within the information collected from the brain is vast. These datasets could be combined with existing databases such as browsing history on Google to provide third parties with unimaginable context on individuals. Lastly, there is risk for information dissemination, more specifically, exposure. The information collected and processed by these devices would need to be stored digitally. Keeping such private information, even if anonymized, would be a huge potential for harm, as the contents of the information may in itself be re-identifiable to a specific individual. Lastly there is risk for invasions such as decisional interference. Brain-machine interfaces would not only be able to read information in the brain but also write information. This would allow the device to make potential emotional changes in its users, which would be a major example of decisional interference. …

For the most recent Neuralink and brain implant ethics piece, there’s this February 14, 2024 essay on The Conversation, which, unusually for this publication, was solicited by the editors, Note: Links have been removed,

In January 2024, Musk announced that Neuralink implanted its first chip in a human subject’s brain. The Conversation reached out to two scholars at the University of Washington School of Medicine – Nancy Jecker, a bioethicist, and Andrew Ko, a neurosurgeon who implants brain chip devices – for their thoughts on the ethics of this new horizon in neuroscience.

Information about the implant, however, is scarce, aside from a brochure aimed at recruiting trial subjects. Neuralink did not register at ClinicalTrials.gov, as is customary, and required by some academic journals. [all emphases mine]

Some scientists are troubled by this lack of transparency. Sharing information about clinical trials is important because it helps other investigators learn about areas related to their research and can improve patient care. Academic journals can also be biased toward positive results, preventing researchers from learning from unsuccessful experiments.

Fellows at the Hastings Center, a bioethics think tank, have warned that Musk’s brand of “science by press release, while increasingly common, is not science. [emphases mine]” They advise against relying on someone with a huge financial stake in a research outcome to function as the sole source of information.

When scientific research is funded by government agencies or philanthropic groups, its aim is to promote the public good. Neuralink, on the other hand, embodies a private equity model [emphasis mine], which is becoming more common in science. Firms pooling funds from private investors to back science breakthroughs may strive to do good, but they also strive to maximize profits, which can conflict with patients’ best interests.

In 2022, the U.S. Department of Agriculture investigated animal cruelty at Neuralink, according to a Reuters report, after employees accused the company of rushing tests and botching procedures on test animals in a race for results. The agency’s inspection found no breaches, according to a letter from the USDA secretary to lawmakers, which Reuters reviewed. However, the secretary did note an “adverse surgical event” in 2019 that Neuralink had self-reported.

In a separate incident also reported by Reuters, the Department of Transportation fined Neuralink for violating rules about transporting hazardous materials, including a flammable liquid.

…the possibility that the device could be increasingly shown to be helpful for people with disabilities, but become unavailable due to loss of research funding. For patients whose access to a device is tied to a research study, the prospect of losing access after the study ends can be devastating. [emphasis mine] This raises thorny questions about whether it is ever ethical to provide early access to breakthrough medical interventions prior to their receiving full FDA approval.

Not registering a clinical trial would seem to suggest there won’t be much oversight. As for Musk’s “science by press release” activities, I hope those will be treated with more skepticism by mainstream media although that seems unlikely given the current situation with journalism (more about that in a future post).

As for the issues associated with private equity models for science research and the problem of losing access to devices after a clinical trial is ended, my April 5, 2022 posting, “Going blind when your neural implant company flirts with bankruptcy (long read)” offers some cautionary tales, in addition to being the most comprehensive piece I’ve published on ethics and brain implants.

My July 17, 2023 posting, “Unveiling the Neurotechnology Landscape: Scientific Advancements, Innovations and Major Trends—a UNESCO report” offers a brief overview of the international scene.

Youthful Canadian inventors win awards

Two teenagers stand next to each other displaying their inventions. One holds a laptop, while the other holds a wireless headset.
Vinny Gu, left, and Anush Mutyala, right, hope to continue to work to improve their inventions. (Niza Lyapa Nondo/CBC)

This November 28, 2023 article by Philip Drost for the Canadian Broadcasting Corporation’s (CBC) The Current radio programme highlights two youthful inventors, Note: Links have been removed,

Anush Mutyala [emphasis mine] may only be in Grade 12, but he already has hopes that his innovations and inventions will rival that of Elon Musk.

“I always tell my friends something that would be funny is if I’m competing head-to-head with Elon Musk in the race to getting people [neural] implants,” Mutyala told Matt Galloway on The Current

Mutyala, a student at Chinguacousy Secondary School in Brampton, Ont., created a brain imaging system that he says opens the future for permanent wireless neural implants. 

For his work, he received an award from Youth Science Canada at the National Fair in 2023, which highlights young people pushing innovation. 

Mutyala wanted to create a way for neural implants to last longer. Implants can help people hear better, or move parts of the body they otherwise couldn’t, but neural implants in particular face issues with regard to power consumption, and traditionally must be replaced by surgery after their batteries die. That can be every five years. 

But Mutyala thinks his system, Enerspike, can change that. The algorithm he designed lowers the energy consumption needed for implants to process and translate brain signals into making a limb move.

“You would essentially never need to replace wireless implants again for the purpose of battery replacement,” said Mutyala. 

Mutyala was inspired by Stephen Hawking, who famously spoke with the use of a speech synthesizer.

“What if we used technology like this and we were able to restore his complete communication ability? He would have been able to communicate at a much faster rate and he would have had a much greater impact on society,” said Mutyala. 
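Drost’s article doesn’t explain how Enerspike actually works, so the following is not Mutyala’s algorithm; it is just a generic, hypothetical illustration of one way software can cut an implant’s energy budget: run the expensive decoding step only when the signal crosses a threshold (event-driven processing) instead of on every sample. The threshold and the simulated signal are invented for the sketch.

```python
# Generic illustration of event-driven processing, a common way to reduce the
# compute (and therefore energy) an implant spends on decoding. This is NOT a
# description of Enerspike, whose details aren't given in the CBC article.
import numpy as np

rng = np.random.default_rng(2)
signal = rng.normal(size=10_000)   # simulated neural channel
signal[::500] += 6.0               # occasional large "events"

THRESHOLD = 3.0
always_on_ops = signal.size                                  # decode every sample
event_driven_ops = int((np.abs(signal) > THRESHOLD).sum())   # decode events only

print("samples processed (always on):   ", always_on_ops)
print("samples processed (event driven):", event_driven_ops)
print(f"rough compute saving: {1 - event_driven_ops / always_on_ops:.1%}")
```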

… Mutyala isn’t the only innovator. Vinny Gu [emphasis mine], a Grade 11 student at Markville Secondary School in Markham, Ont., also received an award for creating DermaScan, an online application that can look at a photo and predict whether the person photographed has skin cancer or not.

“There has [sic] been some attempts at this problem in the past. However, they usually result in very low accuracy. However, I incorporated a technology to help my model better detect the minor small details in the image in order for it to get a better prediction,” said Gu. 

He says it doesn’t replace visiting a dermatologist — but it can give people an option to do pre-screenings with ease, which can help them decide if they need to go see a dermatologist. He says his model is 90-per-cent accurate. 

He is currently testing Dermascan, and he hopes to one day make it available for free to anyone who needs it. 
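The article likewise doesn’t describe Gu’s model, so, for the curious, here is a generic sketch of what a skin-lesion classifier of this broad type often looks like: an ImageNet-pretrained backbone with a new two-class head (transfer learning). It assumes PyTorch and torchvision 0.13 or later are installed; the dummy input stands in for a real dermatology photo, and none of this is DermaScan itself.

```python
# A generic transfer-learning skin-lesion classifier, shown only to illustrate
# the common pattern; it is not Gu's DermaScan, whose architecture and training
# data are not described in the article.
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained backbone (downloads weights on first run)
# and replace the final layer with a two-class head (benign vs. suspicious).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 2)
backbone.eval()

# A single dummy "photo" (batch of 1, 3 colour channels, 224x224 pixels).
dummy_image = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = backbone(dummy_image)
    probs = torch.softmax(logits, dim=1)

print("class probabilities (untrained head, dummy input):", probs.numpy().round(3))
```

In practice, a model like this would be fine-tuned on labelled lesion images and validated against dermatologist diagnoses before any accuracy claim could be taken seriously.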

Drost’s November 28, 2023 article hosts an embedded audio file of the radio interview and more.

You can find out about Anush Mutyala and his work on his LinkedIn profile (in addition to being a high school student, since October 2023, he’s also a neuromorphics researcher at York University). If my link to his profile fails, search Mutyala’s name online and access his public page at the LinkedIn website. There’s something else, Mutyala has an eponymous website.

My online searches for more about Vinny (or Vincent) Gu were not successful.

You can find a bit more information about Mutyala’s Enerspike here and Gu’s DermaScan here. Youth Science Canada can be found here.

Not to forget, there’s grade nine student Arushi Nath and her work on planetary defence, which is being recognized in a number of ways. (See my November 17, 2023 posting, Arushi Nath gives the inside story about the 2023 Natural Sciences and Engineering Research Council of Canada (NSERC) Awards and my November 23, 2023 posting, Margot Lee Shetterly [Hidden Figures author] in Toronto, Canada and a little more STEM [science, technology, engineering, and mathematics] information.) I must say November 2023 has been quite the banner month for youth science in Canada.

Virtual panel discussion: Canadian Strategies for Responsible Neurotechnology Innovation on May 16, 2023

The Canadian Science Policy Centre (CSPC) sent a May 11, 2023 notice (via email) about an upcoming event but first, congratulations (Bravo!) are in order,

The Science Meets Parliament [SMP] Program 2023 is now complete and was a huge success. 43 Delegates from across Canada met with 62 Parliamentarians from across the political spectrum on the Hill on May 1-2, 2023.

The SMP Program is championed by CSPC and Canada’s Chief Science Advisor, Dr. Mona Nemer [through the Office of the Chief Science Advisor {OCSA}].

This Program would not have been possible without the generous support of our sponsors: The Royal Military College of Canada, The Stem Cell Network, and the University of British Columbia.

There are 443 seats in Canada’s Parliament with 338 in the House of Commons and 105 in the Senate and 2023 is the third time the SMP programme has been offered. (It was previously held in 2018 and 2022 according to the SMP program page.)

The Canadian programme is relatively new compared to Australia where they’ve had a Science Meets Parliament programme since 1999 (according to a March 20, 2017 essay by Ken Baldwin, Director of Energy Change Institute at Australian National University for The Conversation). The Scottish have had a Science and the Parliament programme since 2000 (according to this 2022 event notice on the Royal Society of Chemistry’s website).

By comparison to the other two, the Canadian programme is a toddler. (We tend not to recognize walking for the major achievement it is.) So, bravo to the CSPC and OCSA on getting 62 Parliamentarians to make time in their schedules to meet a scientist.

Responsible neurotechnology innovation?

From the Canadian Strategies for Responsible Neurotechnology Innovation event page on the CSPC website,

Advances in neurotechnology are redefining the possibilities of improving neurologic health and mental wellbeing, but related ethical, legal, and societal concerns such as privacy of brain data, manipulation of personal autonomy and agency, and non-medical and dual uses are increasingly pressing concerns [emphasis mine]. In this regard, neurotechnology presents challenges not only to Canada’s federal and provincial health care systems, but to existing laws and regulations that govern responsible innovation. In December 2019, just before the pandemic, the OECD [Organisation for Economic Cooperation and Development] Council adopted a Recommendation on Responsible Innovation in Neurotechnology. It is now urging that member states develop right-fit implementation strategies.

What should these strategies look like for Canada? We will propose and discuss opportunities that balance and leverage different professional and governance approaches towards the goal of achieving responsible innovation for the current state of the art, science, engineering, and policy, and in anticipation of the rapid and vast capabilities expected for neurotechnology in the future by and for this country.

Link to the full OECD Recommendation on Responsible Innovation in Neurotechnology

Date: May 16 [2023]

Time: 12:00 pm – 1:30 pm EDT

Event Category: Virtual Session [on Zoom]

Registration Page: https://us02web.zoom.us/webinar/register/WN_-g8d1qubRhumPSCQi6WUtA

The panelists are:

Dr. Graeme Moffat
Neurotechnology entrepreneur & Senior Fellow, Munk School of Global Affairs & Public Policy [University of Toronto]

Dr. Graeme Moffat is a co-founder and scientist with System2 Neurotechnology. He previously was Chief Scientist and VP of Regulatory Affairs at Interaxon, Chief Scientist with ScienceScape (later Chan-Zuckerberg Meta), and a research engineer at Neurelec (a division of Oticon Medical). He served as Managing Editor of Frontiers in Neuroscience, the largest open access scholarly journal series in the field of neuroscience. Dr. Moffat is a Senior Fellow at the Munk School of Global Affairs and Public Policy and an advisor to the OECD’s neurotechnology policy initiative.

Professor Jennifer Chandler
Professor of Law at the Centre for Health Law, Policy and Ethics, University of Ottawa

Jennifer Chandler is Professor of Law at the Centre for Health Law, Policy and Ethics, University of Ottawa. She leads the “Neuroethics Law and Society” Research Pillar for the Brain Mind Research Institute and sits on its Scientific Advisory Council. Her research focuses on the ethical, legal and policy issues in brain sciences and the law. She teaches mental health law and neuroethics, tort law, and medico-legal issues. She is a member of the advisory board for CIHR’s Institute for Neurosciences, Mental Health and Addiction (IMNA) and serves on international editorial boards in the field of law, ethics and neuroscience, including Neuroethics, the Springer Book Series Advances in Neuroethics, and the Palgrave-MacMillan Book Series Law, Neuroscience and Human Behavior. She has published widely in legal, bioethical and health sciences journals and is the co-editor of the book Law and Mind: Mental Health Law and Policy in Canada (2016). Dr. Chandler brings a unique perspective to this panel as her research focuses on the ethical, legal and policy issues at the intersection of the brain sciences and the law. She is active in Canadian neuroscience research funding policy, and regularly contributes to Canadian governmental policy on contentious matters of biomedicine.

Ian Burkhart
Neurotech Advocate and Founder of BCI [brain-computer interface] Pioneers Coalition

Ian is a C5 tetraplegic [also known as quadriplegic] from a diving accident in 2010. He participated in a ground-breaking clinical trial using a brain-computer interface to control muscle stimulation. He is the founder of the BCI Pioneers Coalition, which works to establish ethics, guidelines and best practices for future patients, clinicians, and commercial entities engaging with BCI research. Ian serves as Vice President of the North American Spinal Cord Injury Consortium and chairs their project review committee. He has also worked with Unite2Fight Paralysis to advocate for $9 million of SCI research in his home state of Ohio. Ian has been a Reeve peer mentor since 2015 and helps lead two local SCI networking groups. As the president of the Ian Burkhart Foundation, he raises funds for accessible equipment for the independence of others with SCI. Ian is also a full-time consultant working with multiple medical device companies.

Andrew Atkinson
Manager, Emerging Science Policy, Health Canada

Andrew Atkinson is the Manager of the Emerging Sciences Policy Unit under the Strategic Policy Branch of Health Canada. He oversees coordination of science policy issues across the various regulatory and research programs under the mandate of Health Canada. Prior to Health Canada, he was a manager under Environment Canada’s CEPA new chemicals program, where he oversaw chemical and nanomaterial risk assessments, and the development of risk assessment methodologies. In parallel to domestic work, he has been actively engaged in ISO [International Organization for Standardization] and OECD nanotechnology efforts.

Andrew is currently a member of the Canadian delegation to the OECD Working Party on Biotechnology, Nanotechnology and Converging Technologies (BNCT). BNCT aims to contribute original policy analysis on emerging science and technologies, such as gene editing and neurotechnology, including messaging to the global community, convening key stakeholders in the field, and making ground-breaking proposals to policy makers.

Professor Judy Illes
Professor, Division of Neurology, Department of Medicine, Faculty of Medicine, UBC [University of British Columbia]

Dr. Illes is Professor of Neurology and Distinguished Scholar in Neuroethics at the University of British Columbia. She is the Director of Neuroethics Canada, and among her many leadership positions in Canada, she is Vice Chair of the Canadian Institutes of Health Research (CIHR) Advisory Board of the Institute on Neuroscience, Mental Health and Addiction (INMHA), and chair of the International Brain Initiative (www.internationalbraininitiative.org; www.canadianbrain.ca), Director at Large of the Canadian Academy of Health Sciences, and a member of the Board of Directors of the Council of Canadian Academies.

Dr. Illes is a world-renowned expert whose research, teaching and outreach are devoted to ethical, legal, social and policy challenges at the intersection of the brain sciences and biomedical ethics. She has made groundbreaking contributions to neuroethical thinking for neuroscience discovery and clinical translation across the life span, including in entrepreneurship and in the commercialization of health care. Dr. Illes has a unique and comprehensive overview of the field of neurotechnology and the relevant sectors in Canada.

One concern I don’t see mentioned, either in the panel description or in the OECD recommendation, is bankruptcy (in other words, what happens if the company that made your neural implant goes bankrupt?). My April 5, 2022 posting “Going blind when your neural implant company flirts with bankruptcy (long read)” explored that topic and, while many of the excerpted materials present a US perspective, it’s easy to see how it could also apply in Canada and elsewhere.

For those of us on the West Coast, this session starts at 9 am. Enjoy!

*June 20, 2023: This sentence changed (We tend not to recognize that walking for the major achievement it is.) to We tend not to recognize walking for the major achievement it is.

Better recording with flexible backing on a brain-computer interface (BCI)

This work has already been patented, from a March 15, 2022 news item on ScienceDaily,

Engineering researchers have invented an advanced brain-computer interface with a flexible and moldable backing and penetrating microneedles. Adding a flexible backing to this kind of brain-computer interface allows the device to more evenly conform to the brain’s complex curved surface and to more uniformly distribute the microneedles that pierce the cortex. The microneedles, which are 10 times thinner than the human hair, protrude from the flexible backing, penetrate the surface of the brain tissue without piercing surface venules, and record signals from nearby nerve cells evenly across a wide area of the cortex.

This novel brain-computer interface has thus far been tested in rodents. The details were published online on February 25 [2022] in the journal Advanced Functional Materials. This work is led by a team in the lab of electrical engineering professor Shadi Dayeh at the University of California San Diego, together with researchers at Boston University led by biomedical engineering professor Anna Devor.

Caption: Artist rendition of the flexible, conformable, transparent backing of the new brain-computer interface with penetrating microneedles developed by a team led by engineers at the University of California San Diego in the laboratory of electrical engineering professor Shadi Dayeh. The smaller illustration at bottom left shows the current technology in experimental use called Utah Arrays. Credit: Shadi Dayeh / UC San Diego / SayoStudio

A March 14, 2022 University of California at San Diego news release (also on EurekAlert but published March 15, 2022), which originated the news item, delves further into the topic,

This new brain-computer interface is on par with and outperforms the “Utah Array,” which is the existing gold standard for brain-computer interfaces with penetrating microneedles. The Utah Array has been demonstrated to help stroke victims and people with spinal cord injury. People with implanted Utah Arrays are able to use their thoughts to control robotic limbs and other devices in order to restore some everyday activities such as moving objects.

The backing of the new brain-computer interface is flexible, conformable, and reconfigurable, while the Utah Array has a hard and inflexible backing. The flexibility and conformability of the backing of the novel microneedle-array favors closer contact between the brain and the electrodes, which allows for better and more uniform recording of the brain-activity signals. Working with rodents as model species, the researchers have demonstrated stable broadband recordings producing robust signals for the duration of the implant which lasted 196 days. 

In addition, the way the soft-backed brain-computer interfaces are manufactured allows for larger sensing surfaces, which means that a significantly larger area of the brain surface can be monitored simultaneously. In the Advanced Functional Materials paper, the researchers demonstrate that a penetrating microneedle array with 1,024 microneedles successfully recorded signals triggered by precise stimuli from the brains of rats. This represents ten times more microneedles and ten times the area of brain coverage, compared to current technologies.
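To put that scale-up in perspective, here is a quick back-of-the-envelope comparison in Python. Only the 1,024-needle count and the “ten times the area” claim come from the paper; the Utah Array figures (roughly 100 channels over a 4 mm x 4 mm footprint) are typical published values I am assuming for illustration.

# Rough comparison of electrode count and cortical coverage.
# Utah Array figures are assumed typical values, not taken from this paper.
utah_channels = 100                  # assumed channel count for a standard Utah Array
utah_area_mm2 = 4.0 * 4.0            # assumed footprint in mm^2

flex_channels = 1024                 # microneedle count reported in the paper
flex_area_mm2 = utah_area_mm2 * 10   # "ten times the area of brain coverage"

print(f"Channel ratio:  {flex_channels / utah_channels:.1f}x")
print(f"Coverage ratio: {flex_area_mm2 / utah_area_mm2:.1f}x")
print(f"Needle density (flex): {flex_channels / flex_area_mm2:.1f} per mm^2")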

Thinner and transparent backings

These soft-backed brain-computer interfaces are thinner and lighter than the traditional, glass backings of these kinds of brain-computer interfaces. The researchers note in their Advanced Functional Materials paper that light, flexible backings may reduce irritation of the brain tissue that contacts the arrays of sensors. 

The flexible backings are also transparent. In the new paper, the researchers demonstrate that this transparency can be leveraged to perform fundamental neuroscience research involving animal models that would not be possible otherwise. The team, for example, demonstrated simultaneous electrical recording from arrays of penetrating micro-needles as well as optogenetic photostimulation.

Two-sided lithographic manufacturing

The flexibility, larger microneedle array footprints, reconfigurability and transparency of the backings of the new brain sensors are all thanks to the double-sided lithography approach the researchers used. 

Conceptually, starting from a rigid silicon wafer, the team’s manufacturing process allows them to build microscopic circuits and devices on both sides of the rigid silicon wafer. On one side, a flexible, transparent film is added on top of the silicon wafer. Within this film, a bilayer of titanium and gold traces is embedded so that the traces line up with where the needles will be manufactured on the other side of the silicon wafer. 

Working from the other side, after the flexible film has been added, all the silicon is etched away, except for free-standing, thin, pointed columns of silicon. These pointed columns of silicon are, in fact, the microneedles, and their bases align with the titanium-gold traces within the flexible layer that remains after the silicon has been etched away. These titanium-gold traces are patterned via standard and scalable microfabrication techniques, allowing scalable production with minimal manual labor. The manufacturing process offers the possibility of flexible array design and scalability to tens of thousands of microneedles.  

Toward closed-loop systems

Looking to the future, penetrating microneedle arrays with large spatial coverage will be needed to improve brain-machine interfaces to the point that they can be used in “closed-loop systems” that can help individuals with severely limited mobility. For example, this kind of closed-loop system might offer a person using a robotic hand real-time tactile feedback on the objects the robotic hand is grasping.

Tactile sensors on the robotic hand would sense the hardness, texture, and weight of an object. This information recorded by the sensors would be translated into electrical stimulation patterns which travel through wires outside the body to the brain-computer interface with penetrating microneedles. These electrical signals would provide information directly to the person’s brain about the hardness, texture, and weight of the object. In turn, the person would adjust their grasp strength based on sensed information directly from the robotic arm. 

This is just one example of the kind of closed-loop system that could be possible once penetrating microneedle arrays can be made larger to conform to the brain and coordinate activity across the “command” and “feedback” centers of the brain.
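For the curious, here is a minimal Python sketch of the loop described above: tactile readings from the robotic hand are encoded into a stimulation pattern for the implant, while decoded neural activity sets the grip force. Every function, number, and mapping here is hypothetical; it is meant only to show the shape of the loop, not anything the Dayeh laboratory has actually built.

# Hypothetical closed-loop sketch: sensor reading -> stimulation pattern -> brain
# -> decoded grip command -> adjusted grasp. All values and mappings are made up.
from dataclasses import dataclass

@dataclass
class TactileReading:
    hardness: float   # 0 (soft) .. 1 (hard)
    weight: float     # grams

def encode_stimulation(reading):
    """Map a tactile reading onto per-channel stimulation amplitudes (invented scheme)."""
    base = 0.2 + 0.8 * reading.hardness
    return [base] * 16                       # pretend 16 stimulation channels

def decode_grip_command(neural_activity):
    """Stand-in decoder: average 'activity' -> desired grip force in newtons."""
    return 5.0 * sum(neural_activity) / len(neural_activity)

def closed_loop_step(reading, simulated_activity):
    stim = encode_stimulation(reading)               # feedback delivered to the brain
    grip = decode_grip_command(simulated_activity)   # command decoded from the brain
    # A heavier, harder object would normally call for a firmer grip.
    return max(grip, 2.0 * reading.weight / 100.0)

print(closed_loop_step(TactileReading(hardness=0.7, weight=300), [0.4] * 16))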

Previously, the Dayeh laboratory invented and demonstrated the kinds of tactile sensors that would be needed for this kind of application, as highlighted in this video.

Pathway to commercialization

The advanced dual-side lithographic microfabrication processes described in this paper are patented (US 10856764). Dayeh co-founded Precision Neurotek Inc. to translate technologies innovated in his laboratory to advance the state of the art in clinical practice and to advance the fields of neuroscience and neurophysiology.

Here’s a link to and a citation for the paper,

Scalable Thousand Channel Penetrating Microneedle Arrays on Flex for Multimodal and Large Area Coverage Brain–Machine Interfaces by Sang Heon Lee, Martin Thunemann, Keundong Lee, Daniel R. Cleary, Karen J. Tonsfeldt, Hongseok Oh, Farid Azzazy, Youngbin Tchoe, Andrew M. Bourhis, Lorraine Hossain, Yun Goo Ro, Atsunori Tanaka, Kıvılcım Kılıç, Anna Devor, Shadi A. Dayeh. Advanced Functional Materials, DOI: https://doi.org/10.1002/adfm.202112045 First published (online): 25 February 2022

This paper is open access.