Category Archives: human enhancement

Brain-inspired (neuromorphic) wireless system for gathering data from sensors the size of a grain of salt

This is what a sensor the size of a grain of salt looks like,

Caption: The sensor network is designed so the chips can be implanted into the body or integrated into wearable devices. Each submillimeter-sized silicon sensor mimics how neurons in the brain communicate through spikes of electrical activity. Credit: Nick Dentamaro/Brown University

A March 19, 2024 news item on Nanowerk announces this research from Brown University (Rhode Island, US), Note: A link has been removed,

Tiny chips may equal a big breakthrough for a team of scientists led by Brown University engineers.

Writing in Nature Electronics (“An asynchronous wireless network for capturing event-driven data from large populations of autonomous sensors”), the research team describes a novel approach for a wireless communication network that can efficiently transmit, receive and decode data from thousands of microelectronic chips that are each no larger than a grain of salt.

One of the potential applications is for brain (neural) implants,

Caption: Writing in Nature Electronics, the research team describes a novel approach for a wireless communication network that can efficiently transmit, receive and decode data from thousands of microelectronic chips that are each no larger than a grain of salt. Credit: Nick Dentamaro/Brown University

A March 19, 2024 Brown University news release (also on EurekAlert), which originated the news item, provides more detail about the research, Note: Links have been removed,

The sensor network is designed so the chips can be implanted into the body or integrated into wearable devices. Each submillimeter-sized silicon sensor mimics how neurons in the brain communicate through spikes of electrical activity. The sensors detect specific events as spikes and then transmit that data wirelessly in real time using radio waves, saving both energy and bandwidth.

“Our brain works in a very sparse way,” said Jihun Lee, a postdoctoral researcher at Brown and study lead author. “Neurons do not fire all the time. They compress data and fire sparsely so that they are very efficient. We are mimicking that structure here in our wireless telecommunication approach. The sensors would not be sending out data all the time — they’d just be sending relevant data as needed as short bursts of electrical spikes, and they would be able to do so independently of the other sensors and without coordinating with a central receiver. By doing this, we would manage to save a lot of energy and avoid flooding our central receiver hub with less meaningful data.”
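
The sparse, event-driven transmission Lee describes is easy to sketch. Here is a minimal toy illustration (my own, not the team's actual firmware; the function name and the threshold value are hypothetical): each sensor stays silent until its reading crosses a threshold, then sends a short, timestamped event.

```python
import random

def event_driven_transmit(samples, threshold):
    """Return only the (index, value) events that cross the threshold,
    mimicking sparse, spike-like transmission instead of streaming."""
    return [(i, v) for i, v in enumerate(samples) if v >= threshold]

random.seed(0)
# A mostly quiet signal with occasional peaks, like sparse neural activity.
signal = [random.random() for _ in range(1000)]
events = event_driven_transmit(signal, threshold=0.95)

print(len(signal))  # readings a continuously streaming sensor would send
print(len(events))  # short bursts an event-driven sensor actually sends
```

Only about 5 percent of readings clear the 0.95 threshold here, so the shared radio channel and the central receiver see a small fraction of the raw data, which is the energy and bandwidth saving the quotation describes.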

This radiofrequency [sic] transmission scheme also makes the system scalable and tackles a common problem with current sensor communication networks: they all need to be perfectly synced to work well.

The researchers say the work marks a significant step forward in large-scale wireless sensor technology and may one day help shape how scientists collect and interpret information from these little silicon devices, especially since electronic sensors have become ubiquitous as a result of modern technology.

“We live in a world of sensors,” said Arto Nurmikko, a professor in Brown’s School of Engineering and the study’s senior author. “They are all over the place. They’re certainly in our automobiles, they are in so many places of work and increasingly getting into our homes. The most demanding environment for these sensors will always be inside the human body.”

That’s why the researchers believe the system can help lay the foundation for the next generation of implantable and wearable biomedical sensors. There is a growing need in medicine for microdevices that are efficient, unobtrusive and unnoticeable but that also operate as part of large ensembles to map physiological activity across an entire area of interest.

“This is a milestone in terms of actually developing this type of spike-based wireless microsensor,” Lee said. “If we continue to use conventional methods, we cannot collect the high channel data these applications will require in these kinds of next-generation systems.”

The events the sensors identify and transmit can be specific occurrences such as changes in the environment they are monitoring, including temperature fluctuations or the presence of certain substances.

The sensors are able to use as little energy as they do because external transceivers supply wireless power to the sensors as they transmit their data — meaning they just need to be within range of the energy waves sent out by the transceiver to get a charge. This ability to operate without needing to be plugged into a power source or battery makes them convenient and versatile for use in many different situations.

The team designed and simulated the complex electronics on a computer and has worked through several fabrication iterations to create the sensors. The work builds on previous research from Nurmikko’s lab at Brown that introduced a new kind of neural interface system called “neurograins.” This system used a coordinated network of tiny wireless sensors to record and stimulate brain activity.

“These chips are pretty sophisticated as miniature microelectronic devices, and it took us a while to get here,” said Nurmikko, who is also affiliated with Brown’s Carney Institute for Brain Science. “The amount of work and effort that is required in customizing the several different functions in manipulating the electronic nature of these sensors — that being basically squeezed to a fraction of a millimeter space of silicon — is not trivial.”

The researchers demonstrated the efficiency of their system as well as just how much it could potentially be scaled up. They tested the system using 78 sensors in the lab and found they were able to collect and send data with few errors, even when the sensors were transmitting at different times. Through simulations, they were able to show how to decode data collected from the brains of primates using about 8,000 hypothetically implanted sensors.
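
As a back-of-envelope illustration of that scaling question (my own sketch, not the paper's method; the paper's decoder presumably tolerates overlapping transmissions rather than avoiding them), one can simulate how often independently firing sensors happen to transmit at the same moment:

```python
import random

def collision_fraction(n_sensors, p_fire, trials, rng):
    """Fraction of time slots in which two or more uncoordinated sensors
    fire simultaneously (a stand-in for on-air overlap)."""
    collisions = 0
    for _ in range(trials):
        firing = sum(1 for _ in range(n_sensors) if rng.random() < p_fire)
        if firing >= 2:
            collisions += 1
    return collisions / trials

rng = random.Random(42)
print(collision_fraction(78, 0.001, 10_000, rng))    # rare overlap
print(collision_fraction(8000, 0.001, 10_000, rng))  # near-constant overlap
```

With sparse firing, 78 sensors almost never overlap, which fits the low error rates reported from the lab test; at 8,000 sensors, overlap becomes the norm, so being able to decode concurrent spikes presumably matters at primate-brain scale.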

The researchers say next steps include optimizing the system for reduced power consumption and exploring broader applications beyond neurotechnology.

“The current work provides a methodology we can further build on,” Lee said.

Here’s a link to and a citation for the study,

An asynchronous wireless network for capturing event-driven data from large populations of autonomous sensors by Jihun Lee, Ah-Hyoung Lee, Vincent Leung, Farah Laiwalla, Miguel Angel Lopez-Gordo, Lawrence Larson & Arto Nurmikko. Nature Electronics volume 7, pages 313–324 (2024) DOI: https://doi.org/10.1038/s41928-024-01134-y Published: 19 March 2024 Issue Date: April 2024

This paper is behind a paywall.

Prior to this, 2021 seems to have been a banner year for Nurmikko’s lab. There’s this August 12, 2021 Brown University news release touting publication of a then new study in Nature Electronics, and I have an April 2, 2021 post, “BrainGate demonstrates a high-bandwidth wireless brain-computer interface (BCI),” about an earlier 2021 study from the lab.

Implantable brain-computer interface collaborative community (iBCI-CC) launched

That’s quite a mouthful, ‘implantable brain-computer interface collaborative community (iBCI-CC)’. I assume the organization will be popularly known by its abbreviation. A March 11, 2024 Mass General Brigham news release (also on EurekAlert) announces the iBCI-CC’s launch, Note: Mass stands for Massachusetts,

Mass General Brigham is establishing the Implantable Brain-Computer Interface Collaborative Community (iBCI-CC). This is the first Collaborative Community in the clinical neurosciences that has participation from the U.S. Food and Drug Administration (FDA).

BCIs are devices that interface with the nervous system and use software to interpret neural activity. Commonly, they are designed for improved access to communication or other technologies for people with physical disability. Implantable BCIs are investigational devices that hold the promise of unlocking new frontiers in restorative neurotechnology, offering potential breakthroughs in neurorehabilitation and in restoring function for people living with neurologic disease or injury.

The iBCI-CC (https://www.ibci-cc.org/) is a groundbreaking initiative aimed at fostering collaboration among diverse stakeholders to accelerate the development, safety and accessibility of iBCI technologies. The iBCI-CC brings together researchers, clinicians, medical device manufacturers, patient advocacy groups and individuals with lived experience of neurological conditions. This collaborative effort aims to propel the field of iBCIs forward by employing harmonized approaches that drive continuous innovation and ensure equitable access to these transformative technologies.

One of the first milestones for the iBCI-CC was to engage the participation of the FDA. “Brain-computer interfaces have the potential to restore lost function for patients suffering from a variety of neurological conditions. However, there are clinical, regulatory, coverage and payment questions that remain, which may impede patient access to this novel technology,” said David McMullen, M.D., Director of the Office of Neurological and Physical Medicine Devices in the FDA’s Center for Devices and Radiological Health (CDRH), and FDA member of the iBCI-CC. “The iBCI-CC will serve as an open venue to identify, discuss and develop approaches for overcoming these hurdles.”

The iBCI-CC will hold regular meetings open both to its members and the public to ensure inclusivity and transparency. Mass General Brigham will serve as the convener of the iBCI-CC, providing administrative support and ensuring alignment with the community’s objectives.

Over the past year, the iBCI-CC was organized by the interdisciplinary collaboration of leaders including Leigh Hochberg, MD, PhD, an internationally respected leader in BCI development and clinical testing and director of the Center for Neurotechnology and Neurorecovery at Massachusetts General Hospital; Jennifer French, MBA, executive director of the Neurotech Network and a Paralympic silver medalist; and Joe Lennerz, MD, PhD, a regulatory science expert and director of the Pathology Innovation Collaborative Community. These three organizers lead a distinguished group of Charter Signatories representing a diverse range of expertise and organizations.

“As a neurointensive care physician, I know how many patients with neurologic disorders could benefit from these devices,” said Dr. Hochberg. “Increasing discoveries in academia and the launch of multiple iBCI and related neurotech companies means that the time is right to identify common goals and metrics so that iBCIs are not only safe and effective, but also have thoroughly considered the design and function preferences of the people who hope to use them.”

Jennifer French, said, “Bringing diverse perspectives together, including those with lived experience, is a critical component to help address complex issues facing this field.” French has decades of experience working in the neurotech and patient advocacy fields. Living with a spinal cord injury, she also uses an implanted neurotech device for daily functions. “This ecosystem of neuroscience is on the cusp to collectively move the field forward by addressing access to the latest groundbreaking technology, in an equitable and ethical way. We can’t wait to engage and recruit the broader BCI community.”

Joe Lennerz, MD, PhD, emphasized, “Engaging in pre-competitive initiatives offers an often-overlooked avenue to drive meaningful progress. The collaboration of numerous thought leaders plays a pivotal role, with a crucial emphasis on regulatory engagement to unlock benefits for patients.”

The iBCI-CC is supported by key stakeholders within the Mass General Brigham system. Merit Cudkowicz, MD, MSc, chair of the Neurology Department, director of the Sean M. Healey and AMG Center for ALS at Massachusetts General Hospital, and Julianne Dorn Professor of Neurology at Harvard Medical School, said, “There is tremendous excitement in the ALS [amyotrophic lateral sclerosis, or Lou Gehrig’s disease] community for new devices that could ease and improve the ability of people with advanced ALS to communicate with their family, friends, and care partners. This important collaborative community will help to speed the development of a new class of neurologic devices to help our patients.”

Bailey McGuire, program manager of strategy and operations at Mass General Brigham’s Data Science Office, said, “We are thrilled to convene the iBCI-CC at Mass General Brigham’s DSO. By providing an administrative infrastructure, we want to help the iBCI-CC advance regulatory science and accelerate the availability of iBCI solutions that incorporate novel hardware and software that can benefit individuals with neurological conditions. We’re excited to help in this incredible space.”

For more information about the iBCI-CC, please visit https://www.ibci-cc.org/.

About Mass General Brigham

Mass General Brigham is an integrated academic health care system, uniting great minds to solve the hardest problems in medicine for our communities and the world. Mass General Brigham connects a full continuum of care across a system of academic medical centers, community and specialty hospitals, a health insurance plan, physician networks, community health centers, home care, and long-term care services. Mass General Brigham is a nonprofit organization committed to patient care, research, teaching, and service to the community. In addition, Mass General Brigham is one of the nation’s leading biomedical research organizations with several Harvard Medical School teaching hospitals. For more information, please visit massgeneralbrigham.org.

About the iBCI-CC Organizers:

Leigh Hochberg, MD, PhD is a neurointensivist at Massachusetts General Hospital’s Department of Neurology, where he directs the MGH Center for Neurotechnology and Neurorecovery. He is also the IDE Sponsor-Investigator and Director of the BrainGate clinical trials, conducted by a consortium of scientists and clinicians at Brown, Emory, MGH, VA Providence, Stanford, and UC-Davis; the L. Herbert Ballou University Professor of Engineering and Professor of Brain Science at Brown University; Senior Lecturer on Neurology at Harvard Medical School; and Associate Director, VA RR&D Center for Neurorestoration and Neurotechnology in Providence.

Jennifer French, MBA, is the Executive Director of Neurotech Network, a nonprofit organization that focuses on education and advocacy of neurotechnologies. She serves on several Boards including the IEEE Neuroethics Initiative, Institute of Neuroethics, OpenMind platform, BRAIN Initiative Multi-Council and Neuroethics Working Groups, and the American Brain Coalition. She is the author of On My Feet Again (Neurotech Press, 2013) and is co-author of Bionic Pioneers (Neurotech Press, 2014). French lives with tetraplegia due to a spinal cord injury. She is an early user of an experimental implanted neural prosthesis for paralysis and is the Past-President and Founding member of the North American SCI Consortium.

Joe Lennerz, MD, PhD, serves as the Chief Scientific Officer at BostonGene, an AI analytics and genomics startup based in Boston. Dr. Lennerz obtained a PhD in neurosciences, specializing in electrophysiology. He works on biomarker development and migraine research. Additionally, he is the co-founder and leader of the Pathology Innovation Collaborative Community, a regulatory science initiative focusing on diagnostics and software as a medical device (SaMD), convened by the Medical Device Innovation Consortium. He also serves as the co-chair of the federal Clinical Laboratory Fee Schedule (CLFS) advisory panel to the Centers for Medicare & Medicaid Services (CMS).

It’s been a while since I’ve come across BrainGate (see Leigh Hochberg bio in the above news release), which was last mentioned here in an April 2, 2021 posting, “BrainGate demonstrates a high-bandwidth wireless brain-computer interface (BCI).”

Here are two of my more recent postings about brain-computer interfaces,

This next one is an older posting but perhaps the most relevant to the announcement of this collaborative community’s purpose,

There’s a lot more on brain-computer interfaces (BCI) here, just use the term in the blog search engine.

Digi, Nano, Bio, Neuro – why should we care more about converging technologies?

Personality in focus: the convergence of biology and computer technology could make extremely sensitive data available. (Image: by-studio / AdobeStock) [downloaded from https://ethz.ch/en/news-and-events/eth-news/news/2024/05/digi-nano-bio-neuro-or-why-we-should-care-more-about-converging-technologies.html]

I gave a guest lecture some years ago where I mentioned that I thought the real issue with big data and AI (artificial intelligence) lay in combining them (or convergence). These days, it seems I was insufficiently imaginative as researchers from ETH Zurich have taken the notion much further.

From a May 7, 2024 ETH Zurich press release (also on EurekAlert), Note: Links have been removed,

In my research, I [Dirk Helbing, Professor of Computational Social Science at the Department of Humanities, Social and Political Sciences and associated with the Department of Computer Science at ETH Zurich.] deal with the consequences of digitalisation for people, society and democracy. In this context, it is also important to keep an eye on their convergence in computer and life sciences – i.e. what becomes possible when digital technologies grow increasingly together with biotechnology, neurotechnology and nanotechnology.

Converging technologies are seen as a breeding ground for far-reaching innovations. However, they are blurring the boundaries between the physical, biological and digital worlds. Conventional regulations are becoming ineffective as a result.

In a joint study I conducted with my co-author Marcello Ienca, we have recently examined the risks and societal challenges of technological convergence – and concluded that the effects for individuals and society are far-reaching.

We would like to draw attention to the challenges and risks of converging technologies and explain why we consider it necessary to accompany technological developments internationally with strict regulations.

For several years now, everyone has been able to observe, within the context of digitalisation, the consequences of leaving technological change to market forces alone without effective regulation.

Misinformation and manipulation on the web

The Digital Manifesto was published in 2015 – almost ten years ago.1 Nine European experts, including one from ETH Zurich, issued an urgent warning against scoring, i.e. the evaluation of people, and big nudging,2 a subtle form of digital manipulation. The latter is based on personality profiles created using cookies and other surveillance data. A little later, the Cambridge Analytica scandal alerted the world to how the data analysis company had been using personalised ads (microtargeting) in an attempt to manipulate voting behaviour in democratic elections.

This has brought democracies around the world under considerable pressure. Propaganda, fake news and hate speech are polarising and sowing doubt, while privacy is on the decline. We are in the midst of an international information war for control of our minds, in which advertising companies, tech corporations, secret services and the military are fighting to exert an influence on our mindset and behaviour. The European Union has adopted the AI Act in an attempt to curb these dangers.

However, digital technologies have developed at a breathtaking pace, and new possibilities for manipulation are already emerging. The merging of digital and nanotechnology with modern biotechnology and neurotechnology makes revolutionary applications possible that had been hardly imaginable before.

Microrobots for precision medicine

In personalised medicine, for example, the advancing miniaturisation of electronics is making it increasingly possible to connect living organisms and humans with networked sensors and computing power. The WEF [World Economic Forum] proclaimed the “Internet of Bodies” as early as 2020.3, 4

One example that combines conventional medication with a monitoring function is digital pills. These could control medication and record a patient’s physiological data (see this blog post).

Experts expect sensor technology to reach the nanoscale. Magnetic nanoparticles or nanoelectronic components, i.e. tiny particles invisible to the naked eye with a diameter of up to 100 nanometres, would make it possible to transport active substances, interact with cells and record vast amounts of data on bodily functions. If introduced into the body, it is hoped that diseases could be detected at an early stage and treated in a personalised manner. This is often referred to as high-precision medicine.

Nano-electrodes record brain function

Miniaturised electrodes that can simultaneously measure and manipulate the activity of thousands of neurons, coupled with ever-improving AI tools for the analysis of brain signals, are now leading to much-discussed advances in the brain-computer interface. Brain activity mapping is also on the agenda. Thanks to nano-neurotechnology, we could soon envisage smartphones and other AI applications being controlled directly by thoughts.

“Long before precision medicine and neurotechnology work reliably, these technologies will be able to be used against people.” Dirk Helbing

Large-scale projects to map the human brain are also likely to benefit from this.5 In future, brain activity mapping will not only be able to read our thoughts and feelings but also make it possible to influence them remotely – the latter would probably be a lot more effective than previous manipulation methods like big nudging.

However, conventional electrodes are not suitable for permanent connection between cells and electronics – this requires durable and biocompatible interfaces. This has given rise to the suggestion of transmitting signals optogenetically, i.e. to control genes in special cells with light pulses.6 This would make the implementation of amazing circuits possible (see this ETH News article [November 11, 2014 press release] “Controlling genes with thoughts”).

The downside of convergence

Admittedly, the applications mentioned above may sound futuristic, with most of them still visions or in their early stages of development. However, a lot of research is being conducted worldwide and at full speed. The military is also interested in using converging technologies for its own purposes.7, 8

The downside of convergence is the considerable risks involved, such as state or private players gaining access to highly sensitive data and misusing it to monitor and influence people. The more connected our bodies become, the more vulnerable we will be to cybercrime and hacking. It cannot be ruled out that military applications exist already.5 One thing is clear, however: long before precision medicine and neurotechnology work reliably, these technologies will be able to be used against people.

“We need to regain control of our personal data. To do this, we need genuine informational self-determination.” Dirk Helbing

The problem is that existing regulations are specific and insufficient to keep technological convergence in check. But how are we to retain control over our lives if it becomes increasingly possible to influence our thoughts, feelings and decisions by digital means?

Converging global regulation is needed

In our recent paper we conclude that any regulation of converging technologies would have to be based on converging international regulations. Accordingly, we outline a new global regulatory framework and propose ten governance principles to close the looming regulatory gap.9

The framework emphasises the need for safeguards to protect bodily and mental functions from unauthorised interference and to ensure personal integrity and privacy by, for example, establishing neurorights.

To minimise risks and prevent abuse, future regulations should be inclusive, transparent and trustworthy. The principle of participatory governance is key, which would have to involve all the relevant groups and ensure that the concerns of affected minorities are also taken into account in decision-​making processes.

Finally, we need to regain control of our personal data. To accomplish this, we need genuine informational self-determination. This would also have to apply to the digital twins of our body and personality, because they can be used to hack our health and our way of thinking – for good or for bad.10

With our contribution, we would like to initiate public debate about converging technologies. Despite its major relevance, we believe that too little attention is being paid to this topic. Continuous discourse on benefits, risks and sensible rules can help to steer technological convergence in such a way that it serves people instead of harming them.

Dirk Helbing wrote this article together with Marcello Ienca, who previously worked at ETH Zurich and EPFL and is now Assistant Professor of Ethics of AI and Neuroscience at the Technical University of Munich.

References

1 Digital-Manifest: Digitale Demokratie statt Datendiktatur (2015) Spektrum der Wissenschaft

2 Sie sind das Ziel! (2024) Schweizer Monat

3 The Internet of Bodies Is Here: Tackling new challenges of technology governance (2020) World Economic Forum

4 Tracking how our bodies work could change our lives (2020) World Economic Forum

5 Nanotools for Neuroscience and Brain Activity Mapping (2013) ACS Nano

6 Innovationspotenziale der Mensch-Maschine-Interaktion (2016) Deutsche Akademie der Technikwissenschaften

7 Human Augmentation – The Dawn of a New Paradigm. A strategic implications project (2021) UK Ministry of Defence

8 Behavioural change as the core of warfighting (2017) Militaire Spectator

9 Helbing D, Ienca M: Why converging technologies need converging international regulation (2024) Ethics and Information Technology

10 Who is Messing with Your Digital Twin? Body, Mind, and Soul for Sale? Dirk Helbing TEDx Talk (2023)

Here’s a second link to and citation for the paper,

Why converging technologies need converging international regulation by Dirk Helbing & Marcello Ienca. Ethics and Information Technology Volume 26, article number 15, (2024) DOI: 10.1007/s10676-024-09756-8 Published: 28 February 2024

This paper is open access.

Bioinspired, biomimetic stimulation for the next generation of neuroprosthetics

ETH researchers have developed a prosthetic leg that communicates with the brain via natural signals. (Photograph: Keystone) Courtesy: ETH Zurich

A February 21, 2024 ETH Zurich press release by Ori Schipper (also on EurekAlert) announces a ‘nature-inspired’ or bioinspired approach to neuroprosthetics,

Prostheses that connect to the nervous system have been available for several years. Now, researchers at ETH Zurich have found evidence that neuroprosthetics work better when they use signals that are inspired by nature.

In brief

* Neuroprostheses are electro-mechanical devices that are connected to the nervous system. As yet, these are unable to provide natural communication with the brain. Instead, they often evoke artificial, unpleasant sensations, similar to a feeling of tingles over the skin.
* This paraesthesia might be caused by overstimulation of the nervous system. ETH Zurich researchers together with colleagues in Germany, Serbia and Russia have proposed that neuroprosthetics should transmit biomimetic signals that are easier for the brain to understand.
* These new findings are relevant to arm and leg prostheses as well as various other aids and devices, including spinal implants and electrodes for brain stimulation.

A few years ago, a team of researchers working under Professor Stanisa Raspopovic at the ETH Zurich Neuroengineering Lab gained worldwide attention when they announced that their prosthetic legs had enabled amputees to feel sensations from this artificial body part for the first time. Unlike commercial leg prostheses, which simply provide amputees with stability and support, the ETH researchers’ prosthetic device was connected to the sciatic nerve in the test subjects’ thigh via implanted electrodes.

This electrical connection enabled the neuroprosthesis to communicate with the patient’s brain, for example relaying information on the constant changes in pressure detected on the sole of the prosthetic foot when walking. This gave the test subjects greater confidence in their prosthesis – and it enabled them to walk considerably faster on challenging terrains. “Our experimental leg prosthesis succeeded in evoking natural sensations. That’s something current neuroprostheses are mainly unable to do; instead, they mostly evoke artificial, unpleasant sensations,” Raspopovic says.

This is probably because today’s neuroprosthetics are using time-constant electrical pulses to stimulate the nervous system. “That’s not only unnatural, but also inefficient,” Raspopovic says. In a recently published paper, he and his team used the example of their leg prostheses to highlight the benefits of using naturally inspired, biomimetic stimulation to develop the next generation of neuroprosthetics.

Model simulates activation of nerves in the sole

To generate these biomimetic signals, Natalija Katic – a doctoral student in Raspopovic’s research group – developed a computer model called FootSim. It is based on data collected by collaborators in Canada, who recorded the activity of natural receptors, named mechanoreceptors, in the sole of the foot while touching different points on the feet of volunteers with a vibrating rod.

The model simulates the dynamic behaviour of large numbers of mechanoreceptors in the sole of the foot and generates the neural signals that shoot up the nerves in the leg towards the brain – from the moment the heel strikes the ground and the weight of the body starts to shift forward to the outside of the foot until the toes push off the ground ready for the next step. “Thanks to this model, we can see how sensory receptors from the sole, and the connected nerves, behave during walking or running, which is experimentally impossible to measure,” Katic says.
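
A toy stand-in for this kind of pressure-to-spikes idea (my own sketch, not FootSim; the function names and parameters are made up) contrasts time-constant stimulation with a biomimetic scheme whose pulse rate follows a loading profile over one second of stance:

```python
def constant_pulses(duration_ms, rate_hz):
    """Time-constant stimulation: evenly spaced pulses, blind to input."""
    interval = 1000.0 / rate_hz
    return [i * interval for i in range(int(duration_ms / interval))]

def biomimetic_pulses(pressure, dt_ms, max_rate_hz):
    """Integrate-and-fire sketch: pulse rate tracks the pressure profile,
    dense where load is high and sparse where it is low."""
    phase, pulses, t = 0.0, [], 0.0
    for p in pressure:                  # p in [0, 1], one sample per dt_ms
        phase += p * max_rate_hz * dt_ms / 1000.0
        if phase >= 1.0:
            pulses.append(t)
            phase -= 1.0
        t += dt_ms
    return pulses

# Toy loading profile: a single triangular ramp over 100 x 10 ms bins = 1 s.
step = [min(i, 100 - i) / 50.0 for i in range(100)]
bio = biomimetic_pulses(step, dt_ms=10.0, max_rate_hz=100.0)
const = constant_pulses(1000.0, rate_hz=100.0)
print(len(const), len(bio))  # the constant stream fires far more often
```

The time-constant stream delivers 100 pulses regardless of what the foot is doing, while the biomimetic stream concentrates fewer pulses where the simulated load is high: the sparser, more naturally structured signalling the researchers argue for.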

Information overload in the spinal cord

To assess how closely the biomimetic signals calculated by the model correspond to the signals emitted by real neurons, Giacomo Valle – a postdoc in Raspopovic’s research group – worked with colleagues in Germany, Serbia and Russia on experiments with cats, whose nervous system processes movement in a similar way to that of humans. The experiments took place in 2019 at the Pavlov Institute of Physiology in St. Petersburg and were carried out in accordance with the relevant European Union guidelines.

The researchers implanted electrodes, connecting some to the nerve in the leg and some to the spinal cord to discover how the signals are transmitted through the nervous system. When the researchers applied pressure to the bottom of the cat’s paw, thereby evoking the natural neural response that occurs when a cat takes a step, the peculiar pattern of activity recorded in the spinal cord did indeed resemble the patterns that were elicited in the spinal cord when the researchers stimulated the leg nerve with biomimetic signals.

By contrast, the conventional approach of time-​constant stimulation of the sciatic nerve in the cat’s thigh elicited a markedly different pattern of activation in the spinal cord. “This clearly shows that the commonly used stimulation methods cause the neural networks in the spine to be flooded with information,” Valle says. “This information overload could be the reason for the unpleasant sensations or paraesthesia reported by some users of neuroprosthetics,” Raspopovic adds.

Learning the language of the nervous system

In their clinical trial with leg amputees, the researchers were able to show that biomimetic stimulation is superior to time-​constant stimulation. Their work clearly demonstrated how the signals that mimicked nature produced better results: not only were the test subjects able to climb steps faster, they also made fewer mistakes in a task that required them to climb the same steps while spelling words backwards. “Biomimetic neurostimulation allows subjects to concentrate on other things while walking,” Raspopovic says, “so we concluded that this type of stimulation is more naturally processed and less taxing on the brain.”

Raspopovic, whose lab forms part of the ETH Institute of Robotics and Intelligent Systems, believes that these new findings are not only relevant to the limb prostheses he and his team have been working on for over half a decade. He argues that the need to move away from unnatural, time-​constant stimulation towards biomimetic signals also applies to a whole series of other aids and devices, including spinal implants and electrodes for brain stimulation. “We need to learn the language of the nervous system,” Raspopovic says. “Then we’ll be able to communicate with the brain in ways it really understands.”

Here’s a link to and a citation for the paper,

Biomimetic computer-to-brain communication enhancing naturalistic touch sensations via peripheral nerve stimulation by Giacomo Valle, Natalija Katic Secerovic, Dominic Eggemann, Oleg Gorskii, Natalia Pavlova, Francesco M. Petrini, Paul Cvancara, Thomas Stieglitz, Pavel Musienko, Marko Bumbasirevic & Stanisa Raspopovic. Nature Communications volume 15, Article number: 1151 (2024) DOI: https://doi.org/10.1038/s41467-024-45190-6 Published: 20 February 2024

This paper is open access.

It was a bit of a surprise to see mention of some Canadian collaborators with regard to the earlier work featuring FootSim, a computer model. Here’s a link to and a citation for that paper; this version is housed at ETH Zurich,

Modeling foot sole cutaneous afferents: FootSim by Natalija Katic, Rodrigo Kazu Siqueira, Luke Cleland, Nicholas Strzalkowski, Leah Bent, Stanisa Raspopovic, and Hannes Saal. Originally published in: iScience 26(1), DOI https://doi.org/10.1016/j.isci.2022.105874 Publication date: 2023-01-20 Permanent link: https://doi.org/10.3929/ethz-b-000591102

This paper too is open access.

Neuromodulation-Curious? May 11, 2024 free event in Vancouver (Canada) hosted by Canadian Neuromodulation Society and the International Neuromodulation Society (INS)

Before leaping into the event details, I’ve got some information about neuromodulation for anyone who’s not familiar with the term; there are two bits (not mutually exclusive). First, there’s this Wikipedia Neuromodulation essay, which focuses on the physiological process of neuromodulation. Second, there are the answers to Frequently Asked Questions (FAQs), specifically “What is neuromodulation?” on the International Neuromodulation Society (INS) website, which pertain more closely to the information being offered at the upcoming event,

WHAT IS NEUROMODULATION?

Neuromodulation is technology that acts directly upon nerves. It is the alteration—or modulation—of nerve activity by delivering electrical or pharmaceutical agents directly to a target area.

Neuromodulation devices and treatments can be life changing. They affect every area of the body and treat nearly every disease or symptom from headaches to tremors to spinal cord damage to urinary incontinence. With such a broad therapeutic scope, and significant ongoing improvements in biotechnology, it is not surprising that neuromodulation is poised as a major growth industry for the next decade.

Most frequently, people think of neuromodulation in the context of chronic pain relief, the most common indication. However, there are a plethora of neuromodulation applications, such as deep brain stimulation (DBS) treatment for Parkinson’s disease, sacral nerve stimulation for pelvic disorders and incontinence, and spinal cord stimulation for ischemic disorders (angina, peripheral vascular disease).

In addition, neuromodulation devices can stimulate a response where there was previously none, as in the case of a cochlear implant restoring hearing in a deaf patient.

And for every existing neuromodulatory treatment, there are many more on the horizon. An emerging technology called BrainGate Neural Interface System has been used to analyze brain signals and translate those signals into cursor movements, allowing severely motor-impaired individuals an alternate “pathway” to control a computer with thought, and offers potential for one day restoring some degree of limb movement.

This April 9, 2024 International Neuromodulation Society (INS) news release on EurekAlert announces the May 11, 2024 free public event in Vancouver (Canada),

The Canadian Neuromodulation Society and the International Neuromodulation Society (INS) are delighted to announce a public education event, “Understanding Neuromodulation of the Brain and Spinal Cord”. 

This complimentary event is scheduled to take place at the Vancouver Convention Centre, East Building, on Saturday, May 11, from 13:30 to 18:00, during the 16th INS World Congress.

Aimed at patients, their families, and friends dealing with conditions such as chronic pain, Parkinson’s disease, and tremor, this event is also open to interested members of the public, media representatives, and professionals. 

This gathering comprises several lectures that pair scientifically and clinically substantiated insights with firsthand, real-world experiences. It provides a unique opportunity to learn directly from both local and international medical experts and patients about neuromodulation therapies. Neuromodulation treatments involve “altering nerve activity through the targeted delivery of electrical stimulation or chemical agents to specific neurological sites in the body” (Source: INS).

This event will be moderated by Dr. Christopher Honey, MD, DPhil, FRCPC, FACS, Professor & Head, Division of Neurosurgery at the University of British Columbia, as well as esteemed leader, clinician, author and INS Congress Chair.

“I am both delighted and honoured to chair this meeting. We have brought the world’s experts in neuromodulation and more than a thousand clinicians to Vancouver for the scientific meeting. The public lectures will provide background information on neuromodulation and allow our patients to give a first-hand review of their experience with the technology.” 

Event Highlights:

* Educational Sessions: A series of talks covering various aspects of neuromodulation, including its application for Parkinson’s Disease, tremor, dystonia, back & leg pain, neuropathic pain (CRPS and post-surgical), and angina and peripheral vascular disease.

* Patient Experiences: Hear firsthand accounts from patients who have benefited from neuromodulation therapies, providing insights into their journeys and outcomes.

* Interactive Q&A: Dedicated Q&A sessions will allow attendees to engage with experts, ask questions, and deepen their understanding of neuromodulation and its risks and benefits.

* Networking: Opportunities for attendees to connect with healthcare professionals, researchers and others interested in neuromodulation.

This event is particularly significant as it precedes the INS 16th World Congress on Neuromodulation, highlighting the importance of public education alongside scientific discourse. It underlines the commitment of both the Canadian Neuromodulation Society and INS to raising awareness about therapies that can significantly improve the quality of life for individuals with chronic conditions.

Registration Information:

Attendance is free of charge, but registration is required. Interested participants are encouraged to register early to secure their place at this informative session.

About the International Neuromodulation Society:

The International Neuromodulation Society (INS) is a global non-profit organization focused on the scientific development and awareness of neuromodulation. The INS is dedicated to promoting improved patient care through education, research, and advocacy in the field of neuromodulation. The Canadian Neuromodulation Society has been an established chapter of the INS since 2006. [You can find the Canadian Neuromodulation Society website here.]

Good luck getting a seat!

Portable and non-invasive (?) mind-reading AI (artificial intelligence) turns thoughts into text and some thoughts about the near future

First, here’s some of the latest research. If by ‘non-invasive’ you mean that electrodes are not being implanted in your brain, then this December 12, 2023 University of Technology Sydney (UTS) press release (also on EurekAlert) highlights non-invasive mind-reading AI via a brain-computer interface (BCI), Note: Links have been removed,

In a world-first, researchers from the GrapheneX-UTS Human-centric Artificial Intelligence Centre at the University of Technology Sydney (UTS) have developed a portable, non-invasive system that can decode silent thoughts and turn them into text. 

The technology could aid communication for people who are unable to speak due to illness or injury, including stroke or paralysis. It could also enable seamless communication between humans and machines, such as the operation of a bionic arm or robot.

The study has been selected as the spotlight paper at the NeurIPS conference, a top-tier annual meeting that showcases world-leading research on artificial intelligence and machine learning, held in New Orleans on 12 December 2023.

The research was led by Distinguished Professor CT Lin, Director of the GrapheneX-UTS HAI Centre, together with first author Yiqun Duan and fellow PhD candidate Jinzhou Zhou from the UTS Faculty of Engineering and IT.

In the study participants silently read passages of text while wearing a cap that recorded electrical brain activity through their scalp using an electroencephalogram (EEG). A demonstration of the technology can be seen in this video [See UTS press release].

The EEG wave is segmented into distinct units that capture specific characteristics and patterns from the human brain. This is done by an AI model called DeWave developed by the researchers. DeWave translates EEG signals into words and sentences by learning from large quantities of EEG data. 
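The “distinct units” step described here resembles vector quantization: each window of the continuous signal is replaced by the index of its nearest entry in a learned codebook, yielding discrete tokens a language model can consume. Here is a toy sketch with made-up data; the random “EEG” and codebook are hypothetical stand-ins, not DeWave’s actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 1,000 samples of single-channel "EEG" and a
# learned codebook of 32 prototype waveforms, each 50 samples long.
signal = rng.standard_normal(1000)
codebook = rng.standard_normal((32, 50))

def discretize(signal, codebook, window=50):
    """Map each fixed-length window to the index of its nearest codebook
    vector -- the vector-quantization step that turns a continuous wave
    into a sequence of discrete tokens."""
    tokens = []
    for start in range(0, len(signal) - window + 1, window):
        seg = signal[start:start + window]
        dists = np.linalg.norm(codebook - seg, axis=1)
        tokens.append(int(dists.argmin()))
    return tokens

tokens = discretize(signal, codebook)
print(len(tokens), tokens[:5])
```

In a real system the codebook is learned jointly with the encoder; the point of the sketch is only that discretization converts brain waves into the same kind of token sequence that large language models already know how to translate.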

“This research represents a pioneering effort in translating raw EEG waves directly into language, marking a significant breakthrough in the field,” said Distinguished Professor Lin.

“It is the first to incorporate discrete encoding techniques in the brain-to-text translation process, introducing an innovative approach to neural decoding. The integration with large language models is also opening new frontiers in neuroscience and AI,” he said.

Previous technology to translate brain signals to language has either required surgery to implant electrodes in the brain, such as Elon Musk’s Neuralink [emphasis mine], or scanning in an MRI machine, which is large, expensive, and difficult to use in daily life.

These methods also struggle to transform brain signals into word-level segments without additional aids such as eye-tracking, which restricts the practical application of these systems. The new technology can be used either with or without eye-tracking.

The UTS research was carried out with 29 participants. This means it is likely to be more robust and adaptable than previous decoding technology that has only been tested on one or two individuals, because EEG waves differ between individuals. 

The use of EEG signals received through a cap, rather than from electrodes implanted in the brain, means that the signal is noisier. In terms of EEG translation, however, the study reported state-of-the-art performance, surpassing previous benchmarks.

“The model is more adept at matching verbs than nouns. However, when it comes to nouns, we saw a tendency towards synonymous pairs rather than precise translations, such as ‘the man’ instead of ‘the author’,” said Duan. [emphases mine; synonymous, eh? what about ‘woman’ or ‘child’ instead of the ‘man’?]

“We think this is because when the brain processes these words, semantically similar words might produce similar brain wave patterns. Despite the challenges, our model yields meaningful results, aligning keywords and forming similar sentence structures,” he said.

The translation accuracy score is currently around 40% on BLEU-1. The BLEU score is a number between zero and one that measures the similarity of the machine-translated text to a set of high-quality reference translations. The researchers hope to see this improve to a level that is comparable to traditional language translation or speech recognition programs, which is closer to 90%.
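BLEU-1 can be computed as clipped unigram precision multiplied by a brevity penalty. A minimal single-reference sketch follows; real evaluations would use an established toolkit (e.g. NLTK or sacreBLEU) over a whole corpus:

```python
from collections import Counter
import math

def bleu_1(candidate, reference):
    """Single-sentence BLEU-1: clipped unigram precision times a brevity
    penalty. A minimal sketch, not a full corpus-level implementation."""
    cand, ref = candidate.split(), reference.split()
    cand_counts, ref_counts = Counter(cand), Counter(ref)
    # Clip each candidate word's count by its count in the reference, so
    # repeating a correct word cannot inflate the score.
    overlap = sum(min(n, ref_counts[w]) for w, n in cand_counts.items())
    precision = overlap / len(cand)
    # Penalize candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

print(bleu_1("the cat sat on the mat", "the cat is on the mat"))
```

A score of 0.4 therefore means that, roughly, 40% of the decoded unigrams match the reference text, which makes clear both how far the technology has come and how far it remains from the ~90% of mature speech recognition.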

The research follows on from previous brain-computer interface technology developed by UTS in association with the Australian Defence Force [ADF] that uses brainwaves to command a quadruped robot, which is demonstrated in this ADF video [See my June 13, 2023 posting, “Mind-controlled robots based on graphene: an Australian research story” for the story and embedded video].

About one month after the research announcement regarding the University of Technology Sydney’s ‘non-invasive’ brain-computer interface (BCI), I stumbled across an in-depth piece about the field of ‘non-invasive’ mind-reading research.

Neurotechnology and neurorights

Fletcher Reveley’s January 18, 2024 article on salon.com (originally published January 3, 2024 on Undark) shows how quickly the field is developing and raises concerns, Note: Links have been removed,

One afternoon in May 2020, Jerry Tang, a Ph.D. student in computer science at the University of Texas at Austin, sat staring at a cryptic string of words scrawled across his computer screen:

“I am not finished yet to start my career at twenty without having gotten my license I never have to pull out and run back to my parents to take me home.”

The sentence was jumbled and agrammatical. But to Tang, it represented a remarkable feat: A computer pulling a thought, however disjointed, from a person’s mind.

For weeks, ever since the pandemic had shuttered his university and forced his lab work online, Tang had been at home tweaking a semantic decoder — a brain-computer interface, or BCI, that generates text from brain scans. Prior to the university’s closure, study participants had been providing data to train the decoder for months, listening to hours of storytelling podcasts while a functional magnetic resonance imaging (fMRI) machine logged their brain responses. Then, the participants had listened to a new story — one that had not been used to train the algorithm — and those fMRI scans were fed into the decoder, which used GPT1, a predecessor to the ubiquitous AI chatbot ChatGPT, to spit out a text prediction of what it thought the participant had heard. For this snippet, Tang compared it to the original story:

“Although I’m twenty-three years old I don’t have my driver’s license yet and I just jumped out right when I needed to and she says well why don’t you come back to my house and I’ll give you a ride.”

The decoder was not only capturing the gist of the original, but also producing exact matches of specific words — twenty, license. When Tang shared the results with his adviser, a UT Austin neuroscientist named Alexander Huth who had been working towards building such a decoder for nearly a decade, Huth was floored. “Holy shit,” Huth recalled saying. “This is actually working.” By the fall of 2021, the scientists were testing the device with no external stimuli at all — participants simply imagined a story and the decoder spat out a recognizable, albeit somewhat hazy, description of it. “What both of those experiments kind of point to,” said Huth, “is the fact that what we’re able to read out here was really like the thoughts, like the idea.”

The scientists brimmed with excitement over the potentially life-altering medical applications of such a device — restoring communication to people with locked-in syndrome, for instance, whose near full-body paralysis made talking impossible. But just as the potential benefits of the decoder snapped into focus, so too did the thorny ethical questions posed by its use. Huth himself had been one of the three primary test subjects in the experiments, and the privacy implications of the device now seemed visceral: “Oh my god,” he recalled thinking. “We can look inside my brain.”

Huth’s reaction mirrored a longstanding concern in neuroscience and beyond: that machines might someday read people’s minds. And as BCI technology advances at a dizzying clip, that possibility and others like it — that computers of the future could alter human identities, for example, or hinder free will — have begun to seem less remote. “The loss of mental privacy, this is a fight we have to fight today,” said Rafael Yuste, a Columbia University neuroscientist. “That could be irreversible. If we lose our mental privacy, what else is there to lose? That’s it, we lose the essence of who we are.”

Spurred by these concerns, Yuste and several colleagues have launched an international movement advocating for “neurorights” — a set of five principles Yuste argues should be enshrined in law as a bulwark against potential misuse and abuse of neurotechnology. But he may be running out of time.

Reveley’s January 18, 2024 article provides fascinating context and is well worth reading if you have the time.

For my purposes, I’m focusing on ethics, Note: Links have been removed,

… as these and other advances propelled the field forward, and as his own research revealed the discomfiting vulnerability of the brain to external manipulation, Yuste found himself increasingly concerned by the scarce attention being paid to the ethics of these technologies. Even Obama’s multi-billion-dollar BRAIN Initiative, a government program designed to advance brain research, which Yuste had helped launch in 2013 and supported heartily, seemed to mostly ignore the ethical and societal consequences of the research it funded. “There was zero effort on the ethical side,” Yuste recalled.

Yuste was appointed to the rotating advisory group of the BRAIN Initiative in 2015, where he began to voice his concerns. That fall, he joined an informal working group to consider the issue. “We started to meet, and it became very evident to me that the situation was a complete disaster,” Yuste said. “There was no guidelines, no work done.” Yuste said he tried to get the group to generate a set of ethical guidelines for novel BCI technologies, but the effort soon became bogged down in bureaucracy. Frustrated, he stepped down from the committee and, together with a University of Washington bioethicist named Sara Goering, decided to independently pursue the issue. “Our aim here is not to contribute to or feed fear for doomsday scenarios,” the pair wrote in a 2016 article in Cell, “but to ensure that we are reflective and intentional as we prepare ourselves for the neurotechnological future.”

In the fall of 2017, Yuste and Goering called a meeting at the Morningside Campus of Columbia, inviting nearly 30 experts from all over the world in such fields as neurotechnology, artificial intelligence, medical ethics, and the law. By then, several other countries had launched their own versions of the BRAIN Initiative, and representatives from Australia, Canada [emphasis mine], China, Europe, Israel, South Korea, and Japan joined the Morningside gathering, along with veteran neuroethicists and prominent researchers. “We holed ourselves up for three days to study the ethical and societal consequences of neurotechnology,” Yuste said. “And we came to the conclusion that this is a human rights issue. These methods are going to be so powerful, that enable to access and manipulate mental activity, and they have to be regulated from the angle of human rights. That’s when we coined the term ‘neurorights.’”

The Morningside group, as it became known, identified four principal ethical priorities, which were later expanded by Yuste into five clearly defined neurorights: The right to mental privacy, which would ensure that brain data would be kept private and its use, sale, and commercial transfer would be strictly regulated; the right to personal identity, which would set boundaries on technologies that could disrupt one’s sense of self; the right to fair access to mental augmentation, which would ensure equality of access to mental enhancement neurotechnologies; the right of protection from bias in the development of neurotechnology algorithms; and the right to free will, which would protect an individual’s agency from manipulation by external neurotechnologies. The group published their findings in an often-cited paper in Nature.

But while Yuste and the others were focused on the ethical implications of these emerging technologies, the technologies themselves continued to barrel ahead at a feverish speed. In 2014, the first kick of the World Cup was made by a paraplegic man using a mind-controlled robotic exoskeleton. In 2016, a man fist bumped Obama using a robotic arm that allowed him to “feel” the gesture. The following year, scientists showed that electrical stimulation of the hippocampus could improve memory, paving the way for cognitive augmentation technologies. The military, long interested in BCI technologies, built a system that allowed operators to pilot three drones simultaneously, partially with their minds. Meanwhile, a confusing maelstrom of science, science-fiction, hype, innovation, and speculation swept the private sector. By 2020, over $33 billion had been invested in hundreds of neurotech companies — about seven times what the NIH [US National Institutes of Health] had envisioned for the 12-year span of the BRAIN Initiative itself.

Now back to Tang and Huth (from Reveley’s January 18, 2024 article), Note: Links have been removed,

Central to the ethical questions Huth and Tang grappled with was the fact that their decoder, unlike other language decoders developed around the same time, was non-invasive — it didn’t require its users to undergo surgery. Because of that, their technology was free from the strict regulatory oversight that governs the medical domain. (Yuste, for his part, said he believes non-invasive BCIs pose a far greater ethical challenge than invasive systems: “The non-invasive, the commercial, that’s where the battle is going to get fought.”) Huth and Tang’s decoder faced other hurdles to widespread use — namely that fMRI machines are enormous, expensive, and stationary. But perhaps, the researchers thought, there was a way to overcome that hurdle too.

The information measured by fMRI machines — blood oxygenation levels, which indicate where blood is flowing in the brain — can also be measured with another technology, functional Near-Infrared Spectroscopy, or fNIRS. Although lower resolution than fMRI, several expensive, research-grade, wearable fNIRS headsets do approach the resolution required to work with Huth and Tang’s decoder. In fact, the scientists were able to test whether their decoder would work with such devices by simply blurring their fMRI data to simulate the resolution of research-grade fNIRS. The decoded result “doesn’t get that much worse,” Huth said.
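Simulating a lower-resolution device by blurring higher-resolution data, as described above, can be sketched with a simple Gaussian filter. This 1-D toy uses illustrative parameters; the actual study would have smoothed 3-D fMRI volumes:

```python
import numpy as np

def blur(trace, sigma=2.0, radius=4):
    """Gaussian-smooth a 1-D activity trace: build a normalized Gaussian
    kernel and convolve. An illustrative stand-in for downgrading fMRI
    data toward fNIRS-like resolution."""
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    return np.convolve(trace, kernel, mode="same")

fine = np.zeros(100)
fine[50] = 1.0            # a sharp point of "activity"
coarse = blur(fine)       # the same activity, spread over neighbours
print(fine.max(), round(coarse.max(), 3))
```

The blurred trace preserves the total activity but spreads it across neighbouring points, which is exactly the sense in which the decoded result “doesn’t get that much worse”: the information is smeared, not destroyed.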

And while such research-grade devices are currently cost-prohibitive for the average consumer, more rudimentary fNIRS headsets have already hit the market. Although these devices provide far lower resolution than would be required for Huth and Tang’s decoder to work effectively, the technology is continually improving, and Huth believes it is likely that an affordable, wearable fNIRS device will someday provide high enough resolution to be used with the decoder. In fact, he is currently teaming up with scientists at Washington University to research the development of such a device.

Even comparatively primitive BCI headsets can raise pointed ethical questions when released to the public. Devices that rely on electroencephalography, or EEG, a commonplace method of measuring brain activity by detecting electrical signals, have now become widely available — and in some cases have raised alarm. In 2019, a school in Jinhua, China, drew criticism after trialing EEG headbands that monitored the concentration levels of its pupils. (The students were encouraged to compete to see who concentrated most effectively, and reports were sent to their parents.) Similarly, in 2018 the South China Morning Post reported that dozens of factories and businesses had begun using “brain surveillance devices” to monitor workers’ emotions, in the hopes of increasing productivity and improving safety. The devices “caused some discomfort and resistance in the beginning,” Jin Jia, then a brain scientist at Ningbo University, told the reporter. “After a while, they got used to the device.”

But the primary problem with even low-resolution devices is that scientists are only just beginning to understand how information is actually encoded in brain data. In the future, powerful new decoding algorithms could discover that even raw, low-resolution EEG data contains a wealth of information about a person’s mental state at the time of collection. Consequently, nobody can definitively know what they are giving away when they allow companies to collect information from their brains.

Huth and Tang concluded that brain data, therefore, should be closely guarded, especially in the realm of consumer products. In an article on Medium from last April, Tang wrote that “decoding technology is continually improving, and the information that could be decoded from a brain scan a year from now may be very different from what can be decoded today. It is crucial that companies are transparent about what they intend to do with brain data and take measures to ensure that brain data is carefully protected.” (Yuste said the Neurorights Foundation recently surveyed the user agreements of 30 neurotech companies and found that all of them claim ownership of users’ brain data — and most assert the right to sell that data to third parties. [emphases mine]) Despite these concerns, however, Huth and Tang maintained that the potential benefits of these technologies outweighed their risks, provided the proper guardrails [emphasis mine] were put in place.

It would seem the first guardrails are being set up in South America (from Reveley’s January 18, 2024 article), Note: Links have been removed,

On a hot summer night in 2019, Yuste sat in the courtyard of an adobe hotel in the north of Chile with his close friend, the prominent Chilean doctor and then-senator Guido Girardi, observing the vast, luminous skies of the Atacama Desert and discussing, as they often did, the world of tomorrow. Girardi, who every year organizes the Congreso Futuro, Latin America’s preeminent science and technology event, had long been intrigued by the accelerating advance of technology and its paradigm-shifting impact on society — “living in the world at the speed of light,” as he called it. Yuste had been a frequent speaker at the conference, and the two men shared a conviction that scientists were birthing technologies powerful enough to disrupt the very notion of what it meant to be human.

Around midnight, as Yuste finished his pisco sour, Girardi made an intriguing proposal: What if they worked together to pass an amendment to Chile’s constitution, one that would enshrine protections for mental privacy as an inviolable right of every Chilean? It was an ambitious idea, but Girardi had experience moving bold pieces of legislation through the senate; years earlier he had spearheaded Chile’s famous Food Labeling and Advertising Law, which required companies to affix health warning labels on junk food. (The law has since inspired dozens of countries to pursue similar legislation.) With BCI, here was another chance to be a trailblazer. “I said to Rafael, ‘Well, why don’t we create the first neuro data protection law?’” Girardi recalled. Yuste readily agreed.

… Girardi led the political push, promoting a piece of legislation that would amend Chile’s constitution to protect mental privacy. The effort found surprising purchase across the political spectrum, a remarkable feat in a country famous for its political polarization. In 2021, Chile’s congress unanimously passed the constitutional amendment, which Piñera [Sebastián Piñera] swiftly signed into law. (A second piece of legislation, which would establish a regulatory framework for neurotechnology, is currently under consideration by Chile’s congress.) “There was no divide between the left or right,” recalled Girardi. “This was maybe the only law in Chile that was approved by unanimous vote.” Chile, then, had become the first country in the world to enshrine “neurorights” in its legal code.

Even before the passage of the Chilean constitutional amendment, Yuste had begun meeting regularly with Jared Genser, an international human rights lawyer who had represented such high-profile clients as Desmond Tutu, Liu Xiaobo, and Aung San Suu Kyi. (The New York Times Magazine once referred to Genser as “the extractor” for his work with political prisoners.) Yuste was seeking guidance on how to develop an international legal framework to protect neurorights, and Genser, though he had just a cursory knowledge of neurotechnology, was immediately captivated by the topic. “It’s fair to say he blew my mind in the first hour of discussion,” recalled Genser. Soon thereafter, Yuste, Genser, and a private-sector entrepreneur named Jamie Daves launched the Neurorights Foundation, a nonprofit whose first goal, according to its website, is “to protect the human rights of all people from the potential misuse or abuse of neurotechnology.”

To accomplish this, the organization has sought to engage all levels of society, from the United Nations and regional governing bodies like the Organization of American States, down to national governments, the tech industry, scientists, and the public at large. Such a wide-ranging approach, said Genser, “is perhaps insanity on our part, or grandiosity. But nonetheless, you know, it’s definitely the Wild West as it comes to talking about these issues globally, because so few people know about where things are, where they’re heading, and what is necessary.”

This general lack of knowledge about neurotech, in all strata of society, has largely placed Yuste in the role of global educator — he has met several times with U.N. Secretary-General António Guterres, for example, to discuss the potential dangers of emerging neurotech. And these efforts are starting to yield results. Guterres’s 2021 report, “Our Common Agenda,” which sets forth goals for future international cooperation, urges “updating or clarifying our application of human rights frameworks and standards to address frontier issues,” such as “neuro-technology.” Genser attributes the inclusion of this language in the report to Yuste’s advocacy efforts.

But updating international human rights law is difficult, and even within the Neurorights Foundation there are differences of opinion regarding the most effective approach. For Yuste, the ideal solution would be the creation of a new international agency, akin to the International Atomic Energy Agency — but for neurorights. “My dream would be to have an international convention about neurotechnology, just like we had one about atomic energy and about certain things, with its own treaty,” he said. “And maybe an agency that would essentially supervise the world’s efforts in neurotechnology.”

Genser, however, believes that a new treaty is unnecessary, and that neurorights can be codified most effectively by extending interpretation of existing international human rights law to include them. The International Covenant on Civil and Political Rights, for example, already ensures the general right to privacy, and an updated interpretation of the law could conceivably clarify that that clause extends to mental privacy as well.

There is no need for immediate panic (from Reveley’s January 18, 2024 article),

… while Yuste and the others continue to grapple with the complexities of international and national law, Huth and Tang have found that, for their decoder at least, the greatest privacy guardrails come not from external institutions but rather from something much closer to home — the human mind itself. Following the initial success of their decoder, as the pair read widely about the ethical implications of such a technology, they began to think of ways to assess the boundaries of the decoder’s capabilities. “We wanted to test a couple kind of principles of mental privacy,” said Huth. Simply put, they wanted to know if the decoder could be resisted.

In late 2021, the scientists began to run new experiments. First, they were curious if an algorithm trained on one person could be used on another. They found that it could not — the decoder’s efficacy depended on many hours of individualized training. Next, they tested whether the decoder could be thrown off simply by refusing to cooperate with it. Instead of focusing on the story that was playing through their headphones while inside the fMRI machine, participants were asked to complete other mental tasks, such as naming random animals, or telling a different story in their head. “Both of those rendered it completely unusable,” Huth said. “We didn’t decode the story they were listening to, and we couldn’t decode anything about what they were thinking either.”

Given how quickly this field of research is progressing, it seems like a good idea to increase efforts to establish neurorights (from Reveley’s January 18, 2024 article),

For Yuste, however, technologies like Huth and Tang’s decoder may only mark the beginning of a mind-boggling new chapter in human history, one in which the line between human brains and computers will be radically redrawn — or erased completely. A future is conceivable, he said, where humans and computers fuse permanently, leading to the emergence of technologically augmented cyborgs. “When this tsunami hits us I would say it’s not likely, it’s for sure, that humans will end up transforming themselves — ourselves — into maybe a hybrid species,” Yuste said. He is now focused on preparing for this future.

In the last several years, Yuste has traveled to multiple countries, meeting with a wide assortment of politicians, supreme court justices, U.N. committee members, and heads of state. And his advocacy is beginning to yield results. In August, Mexico began considering a constitutional reform that would establish the right to mental privacy. Brazil is currently considering a similar proposal, while Spain, Argentina, and Uruguay have also expressed interest, as has the European Union. In September [2023], neurorights were officially incorporated into Mexico’s digital rights charter, while in Chile, a landmark Supreme Court ruling found that Emotiv Inc, a company that makes a wearable EEG headset, violated Chile’s newly minted mental privacy law. That suit was brought by Yuste’s friend and collaborator, Guido Girardi.

“This is something that we should take seriously,” he [Huth] said. “Because even if it’s rudimentary right now, where is that going to be in five years? What was possible five years ago? What’s possible now? Where’s it gonna be in five years? Where’s it gonna be in 10 years? I think the range of reasonable possibilities includes things that are — I don’t want to say like scary enough — but like dystopian enough that I think it’s certainly a time for us to think about this.”

You can find The Neurorights Foundation here and/or read Reveley’s January 18, 2024 article on salon.com or as originally published January 3, 2024 on Undark. Finally, thank you for the article, Fletcher Reveley!

Neural (brain) implants and hype (long read)

There was a big splash a few weeks ago when it was announced that Neuralink’s (an Elon Musk company) brain implant had been surgically inserted into its first human patient.

Getting approval

David Tuffley, senior lecturer in Applied Ethics & CyberSecurity at Griffith University (Australia), provides a good overview of the road Neuralink took to getting FDA (US Food and Drug Administration) approval for human clinical trials in his May 29, 2023 essay for The Conversation, Note: Links have been removed,

Since its founding in 2016, Elon Musk’s neurotechnology company Neuralink has had the ambitious mission to build a next-generation brain implant with at least 100 times more brain connections than devices currently approved by the US Food and Drug Administration (FDA).

The company has now reached a significant milestone, having received FDA approval to begin human trials. So what were the issues keeping the technology in the pre-clinical trial phase for as long as it was? And have these concerns been addressed?

Neuralink is making a Class III medical device known as a brain-computer interface (BCI). The device connects the brain to an external computer via a Bluetooth signal, enabling continuous communication back and forth.

The device itself is a coin-sized unit called a Link. It’s implanted within a small disk-shaped cutout in the skull using a precision surgical robot. The robot splices a thousand tiny threads from the Link to certain neurons in the brain. [emphasis mine] Each thread is about a quarter the diameter of a human hair.

The company says the device could enable precise control of prosthetic limbs, giving amputees natural motor skills. It could revolutionise treatment for conditions such as Parkinson’s disease, epilepsy and spinal cord injuries. It also shows some promise for potential treatment of obesity, autism, depression, schizophrenia and tinnitus.

Several other neurotechnology companies and researchers have already developed BCI technologies that have helped people with limited mobility regain movement and complete daily tasks.

In February 2021, Musk said Neuralink was working with the FDA to secure permission to start initial human trials later that year. But human trials didn’t commence in 2021.

Then, in March 2022, Neuralink made a further application to the FDA to establish its readiness to begin human trials.

One year and three months later, on May 25 2023, Neuralink finally received FDA approval for its first human clinical trial. Given how hard Neuralink has pushed for permission to begin, we can assume it will begin very soon. [emphasis mine]

The approval has come less than six months after the US Office of the Inspector General launched an investigation into Neuralink over potential animal welfare violations. [emphasis mine]

In his May 29, 2023 essay, Tuffley goes on to discuss, in accessible language, the FDA’s specific technical concerns about the implants and how they were addressed.

More about how Neuralink’s implant works and some concerns

Canadian Broadcasting Corporation (CBC) journalist Andrew Chang offers an almost 13-minute video, “Neuralink brain chip’s first human patient. How does it work?” Chang is a little overenthused for my taste but he offers some good information about neural implants, along with informative graphics in his presentation.

So, Tuffley was right about Neuralink getting ready quickly for human clinical trials as you can guess from the title of Chang’s CBC video.

Jennifer Korn announced that recruitment had started in her September 20, 2023 article for CNN (Cable News Network), Note: Links have been removed,

Elon Musk’s controversial biotechnology startup Neuralink opened up recruitment for its first human clinical trial Tuesday, according to a company blog.

After receiving approval from an independent review board, Neuralink is set to begin offering brain implants to paralysis patients as part of the PRIME Study, the company said. PRIME, short for Precise Robotically Implanted Brain-Computer Interface, is being carried out to evaluate both the safety and functionality of the implant.

Trial patients will have a chip surgically placed in the part of the brain that controls the intention to move. The chip, installed by a robot, will then record and send brain signals to an app, with the initial goal being “to grant people the ability to control a computer cursor or keyboard using their thoughts alone,” the company wrote.

Those with quadriplegia [sometimes known as tetraplegia] due to cervical spinal cord injury or amyotrophic lateral sclerosis (ALS) may qualify for the six-year-long study – 18 months of at-home and clinic visits followed by follow-up visits over five years. Interested people can sign up in the patient registry on Neuralink’s website.

Musk has been working on Neuralink’s goal of using implants to connect the human brain to a computer for five years, but the company so far has only tested on animals. The company also faced scrutiny after a monkey died in project testing in 2022 as part of efforts to get the animal to play Pong, one of the first video games.

I mentioned three Reuters investigative journalists who were reporting on Neuralink’s animal abuse allegations (emphasized in Tuffley’s essay) in a July 7, 2023 posting, “Global dialogue on the ethics of neurotechnology on July 13, 2023 led by UNESCO.” Later that year, Neuralink was cleared by the US Department of Agriculture (see the September 24, 2023 article by Mahnoor Jehangir for BNN Breaking).

Plus, Neuralink was being investigated over more allegations according to a February 9, 2023 article by Rachel Levy for Reuters, this time regarding hazardous pathogens,

The U.S. Department of Transportation said on Thursday it is investigating Elon Musk’s brain-implant company Neuralink over the potentially illegal movement of hazardous pathogens.

A Department of Transportation spokesperson told Reuters about the probe after the Physicians Committee for Responsible Medicine (PCRM), an animal-welfare advocacy group, wrote to Secretary of Transportation Pete Buttigieg earlier on Thursday to alert it of records it obtained on the matter.

PCRM said it obtained emails and other documents that suggest unsafe packaging and movement of implants removed from the brains of monkeys. These implants may have carried infectious diseases in violation of federal law, PCRM said.

There’s an update about the hazardous materials in the next section. Spoiler alert, the company got fined.

Neuralink’s first human implant

A January 30, 2024 article (Associated Press with files from Reuters) on the Canadian Broadcasting Corporation’s (CBC) online news webspace heralded the latest about Neuralink’s human clinical trials,

The first human patient received an implant from Elon Musk’s computer-brain interface company Neuralink over the weekend, the billionaire says.

In a post Monday [January 29, 2024] on X, the platform formerly known as Twitter, Musk said that the patient received the implant the day prior and was “recovering well.” He added that “initial results show promising neuron spike detection.”

Spikes are activity by neurons, which the National Institutes of Health describe as cells that use electrical and chemical signals to send information around the brain and to the body.

The billionaire, who owns X and co-founded Neuralink, did not provide additional details about the patient.

When Neuralink announced in September [2023] that it would begin recruiting people, the company said it was searching for individuals with quadriplegia due to cervical spinal cord injury or amyotrophic lateral sclerosis, commonly known as ALS or Lou Gehrig’s disease.

Neuralink reposted Musk’s Monday [January 29, 2024] post on X, but did not publish any additional statements acknowledging the human implant. The company did not immediately respond to requests for comment from The Associated Press or Reuters on Tuesday [January 30, 2024].

In a separate Monday [January 29, 2024] post on X, Musk said that the first Neuralink product is called “Telepathy” — which, he said, will enable users to control their phones or computers “just by thinking.” He said initial users would be those who have lost use of their limbs.

The startup’s PRIME Study is a trial for its wireless brain-computer interface to evaluate the safety of the implant and surgical robot.
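As for the “promising neuron spike detection” Musk mentions above, the basic idea is simpler than it sounds: an implanted electrode records a voltage trace, and a spike is flagged wherever the trace crosses a threshold, with a short refractory window so the same spike isn’t counted twice. Here is a toy sketch of that threshold-crossing idea (my own illustration on synthetic data, not Neuralink’s actual signal-processing pipeline),

```python
import numpy as np

def detect_spikes(signal, threshold, refractory=30):
    """Return sample indices where the signal crosses the threshold,
    skipping a refractory window of samples after each detection."""
    spikes = []
    i = 0
    while i < len(signal):
        if signal[i] > threshold:
            spikes.append(i)
            i += refractory  # ignore samples during the refractory period
        else:
            i += 1
    return spikes

# Synthetic voltage trace: low-amplitude baseline noise with two injected "spikes"
rng = np.random.default_rng(0)
trace = rng.normal(0, 0.05, 1000)
trace[200] = 1.0
trace[600] = 1.2

print(detect_spikes(trace, threshold=0.5))  # prints [200, 600]
```

Real implants use far more sophisticated filtering and spike-sorting, but threshold crossing is the starting point, and it is also why event-driven designs (like the Brown University sensors described at the top of this post) save bandwidth: only the spike events are transmitted, not the whole trace.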

Now for the hazardous materials, from the same January 30, 2024 article, Note: A link has been removed,

Earlier this month [January 2024], a Reuters investigation found that Neuralink was fined for violating U.S. Department of Transportation (DOT) rules regarding the movement of hazardous materials. During inspections of the company’s facilities in Texas and California in February 2023, DOT investigators found the company had failed to register itself as a transporter of hazardous material.

They also found improper packaging of hazardous waste, including the flammable liquid Xylene. Xylene can cause headaches, dizziness, confusion, loss of muscle co-ordination and even death, according to the U.S. Centers for Disease Control and Prevention.

The records do not say why Neuralink would need to transport hazardous materials or whether any harm resulted from the violations.

Skeptical thoughts about Elon Musk and Neuralink

Earlier this month (February 2024), the British Broadcasting Corporation (BBC) published an article by health reporters Jim Reed and Joe McFadden that traces the history of brain implants, outlines the possibilities, and notes some of Elon Musk’s more outrageous claims for Neuralink’s brain implants,

Elon Musk is no stranger to bold claims – from his plans to colonise Mars to his dreams of building transport links underneath our biggest cities. This week the world’s richest man said his Neuralink division had successfully implanted its first wireless brain chip into a human.

Is he right when he says this technology could – in the long term – save the human race itself?

Sticking electrodes into brain tissue is really nothing new.

In the 1960s and 70s electrical stimulation was used to trigger or suppress aggressive behaviour in cats. By the early 2000s monkeys were being trained to move a cursor around a computer screen using just their thoughts.

“It’s nothing novel, but implantable technology takes a long time to mature, and reach a stage where companies have all the pieces of the puzzle, and can really start to put them together,” says Anne Vanhoestenberghe, professor of active implantable medical devices, at King’s College London.

Neuralink is one of a growing number of companies and university departments attempting to refine and ultimately commercialise this technology. The focus, at least to start with, is on paralysis and the treatment of complex neurological conditions.

Reed and McFadden’s February 2024 BBC article describes a few of the other brain implant efforts, Note: Links have been removed,

One of its [Neuralink’s] main rivals, a start-up called Synchron backed by funding from investment firms controlled by Bill Gates and Jeff Bezos, has already implanted its stent-like device into 10 patients.

Back in December 2021, Philip O’Keefe, a 62-year-old Australian who lives with a form of motor neurone disease, composed the first tweet using just his thoughts to control a cursor.

And researchers at Lausanne University in Switzerland have shown it is possible for a paralysed man to walk again by implanting multiple devices to bypass damage caused by a cycling accident.

In a research paper published this year, they demonstrated a signal could be beamed down from a device in his brain to a second device implanted at the base of his spine, which could then trigger his limbs to move.

Some people living with spinal injuries are sceptical about the sudden interest in this new kind of technology.

“These breakthroughs get announced time and time again and don’t seem to be getting any further along,” says Glyn Hayes, who was paralysed in a motorbike accident in 2017, and now runs public affairs for the Spinal Injuries Association.

“If I could have anything back, it wouldn’t be the ability to walk. It would be putting more money into a way of removing nerve pain, for example, or ways to improve bowel, bladder and sexual function.” [emphasis mine]

Musk, however, is focused on something far more grand for Neuralink implants, from Reed and McFadden’s February 2024 BBC article, Note: A link has been removed,

But for Elon Musk, “solving” brain and spinal injuries is just the first step for Neuralink.

The longer-term goal is “human/AI symbiosis” [emphasis mine], something he describes as “species-level important”.

Musk himself has already talked about a future where his device could allow people to communicate with a phone or computer “faster than a speed typist or auctioneer”.

In the past, he has even said saving and replaying memories may be possible, although he recognised “this is sounding increasingly like a Black Mirror episode.”

One of the experts quoted in Reed and McFadden’s February 2024 BBC article asks a pointed question,

… “At the moment, I’m struggling to see an application that a consumer would benefit from, where they would take the risk of invasive surgery,” says Prof Vanhoestenberghe.

“You’ve got to ask yourself, would you risk brain surgery just to be able to order a pizza on your phone?”

Rae Hodge’s February 11, 2024 article about Elon Musk and his hyped up Neuralink implant for Salon is worth reading in its entirety but for those who don’t have the time or need a little persuading, here are a few excerpts, Note 1: This is a warning; Hodge provides more detail about the animal cruelty allegations; Note 2: Links have been removed,

Elon Musk’s controversial brain-computer interface (BCI) tech, Neuralink, has supposedly been implanted in its first recipient — and as much as I want to see progress for treatment of paralysis and neurodegenerative disease, I’m not celebrating. I bet the neuroscientists he reportedly drove out of the company aren’t either, especially not after seeing the gruesome torture of test monkeys and apparent cover-up that paved the way for this moment. 

All of which is an ethics horror show on its own. But the timing of Musk’s overhyped implant announcement gives it an additional insulting subtext. Football players are currently in a battle for their lives against concussion-based brain diseases that plague autopsy reports of former NFL players. And Musk’s boast of false hope came just two weeks before living players take the field in the biggest and most brutal game of the year. [2024 Super Bowl LVIII]

ESPN’s Kevin Seifert reports neuro-damage is up this year as “players suffered a total of 52 concussions from the start of training camp to the beginning of the regular season. The combined total of 213 preseason and regular season concussions was 14% higher than 2021 but within range of the three-year average from 2018 to 2020 (203).”

I’m a big fan of body-tech: pacemakers, 3D-printed hips and prosthetic limbs that allow you to wear your wedding ring again after 17 years. Same for brain chips. But BCI is the slow-moving front of body-tech development for good reason. The brain is too understudied. Consequences of the wrong move are dire. Overpromising marketable results on profit-driven timelines — on the backs of such a small community of researchers in a relatively new field — would be either idiotic or fiendish. 

Brown University’s research in the sector goes back to the 1990s. Since the emergence of a floodgate-opening 2002 study and the first implant in 2004 by med-tech company BrainGate, more promising results have inspired broader investment into careful research. But BrainGate’s clinical trials started back in 2009, and as noted by Business Insider’s Hilary Brueck, are expected to continue until 2038 — with only 15 participants who have devices installed. 

Anne Vanhoestenberghe is a professor of active implantable medical devices at King’s College London. In a recent release, she cautioned against the kind of hype peddled by Musk.

“Whilst there are a few other companies already using their devices in humans and the neuroscience community have made remarkable achievements with those devices, the potential benefits are still significantly limited by technology,” she said. “Developing and validating core technology for long term use in humans takes time and we need more investments to ensure we do the work that will underpin the next generation of BCIs.” 

Neuralink is a metal coin in your head that connects to something as flimsy as an app. And we’ve seen how Elon treats those. We’ve also seen corporate goons steal a veteran’s prosthetic legs — and companies turn brain surgeons and dentists into repo-men by having them yank anti-epilepsy chips out of people’s skulls, and dentures out of their mouths. 

“I think we have a chance with Neuralink to restore full-body functionality to someone who has a spinal cord injury,” Musk said at a 2023 tech summit, adding that the chip could possibly “make up for whatever lost capacity somebody has.”

Maybe BCI can. But only in the careful hands of scientists who don’t have Musk squawking “go faster!” over their shoulders. His greedy frustration with the speed of BCI science is telling, as is the animal cruelty it reportedly prompted.

There have been other examples of Musk’s grandiosity. Notably, David Lee expressed skepticism about hyperloop in his August 13, 2013 article for BBC news online,

Is Elon Musk’s Hyperloop just a pipe dream?

Much like the pun in the headline, the bright idea of transporting people using some kind of vacuum-like tube is neither new nor imaginative.

There was Robert Goddard, considered the “father of modern rocket propulsion”, who claimed in 1909 that his vacuum system could suck passengers from Boston to New York at 1,200mph.

And then there were Soviet plans for an amphibious monorail – mooted in 1934 – in which two long pods would start their journey attached to a metal track before flying off the end and slipping into the water like a two-fingered Kit Kat dropped into some tea.

So ever since inventor and entrepreneur Elon Musk hit the world’s media with his plans for the Hyperloop, a healthy dose of scepticism has been in the air.

“This is by no means a new idea,” says Rod Muttram, formerly of Bombardier Transportation and Railtrack.

“It has been previously suggested as a possible transatlantic transport system. The only novel feature I see is the proposal to put the tubes above existing roads.”

Here’s the latest I’ve found on hyperloop, from the Hyperloop Wikipedia entry,

As of 2024, some companies continued to pursue technology development under the hyperloop moniker; however, one of the biggest, well-funded players, Hyperloop One, declared bankruptcy and ceased operations in 2023.[15]

Musk is impatient and impulsive as noted in a September 12, 2023 posting by Mike Masnick on Techdirt, Note: A link has been removed,

The Batshit Crazy Story Of The Day Elon Musk Decided To Personally Rip Servers Out Of A Sacramento Data Center

Back on Christmas Eve [December 24, 2022] of last year there were some reports that Elon Musk was in the process of shutting down Twitter’s Sacramento data center. In that article, a number of ex-Twitter employees were quoted about how much work it would be to do that cleanly, noting that there’s a ton of stuff hardcoded in Twitter code referring to that data center (hold that thought).

That same day, Elon tweeted out that he had “disconnected one of the more sensitive server racks.”

Masnick follows with a story of reckless behaviour from someone who should have known better.

Ethics of implants—where to look for more information

While Musk doesn’t use the term when he describes a “human/AI symbiosis” (presumably by way of a neural implant), he’s talking about a cyborg. Here’s a 2018 paper, which looks at some of the implications,

Do you want to be a cyborg? The moderating effect of ethics on neural implant acceptance by Eva Reinares-Lara, Cristina Olarte-Pascual, and Jorge Pelegrín-Borondo. Computers in Human Behavior, Volume 85, August 2018, Pages 43–53. DOI: https://doi.org/10.1016/j.chb.2018.03.032

This paper is open access.

Getting back to Neuralink, I have two blog posts that discuss the company and the ethics of brain implants from way back in 2021.

First, there’s Jazzy Benes’ March 1, 2021 posting on the Santa Clara University’s Markkula Center for Applied Ethics blog. It stands out as it includes a discussion of the disabled community’s issues, Note: Links have been removed,

In the heart of Silicon Valley we are constantly enticed by the newest technological advances. With the big influencers Grimes [a Canadian musician and the mother of three children with Elon Musk] and Lil Uzi Vert publicly announcing their willingness to become experimental subjects for Elon Musk’s Neuralink brain implantation device, we are left wondering if future technology will actually give us “the knowledge of the Gods.” Is it part of the natural order for humans to become omniscient beings? Who will have access to the devices? What other ethical considerations must be discussed before releasing such technology to the public?

A significant issue that arises from developing technologies for the disabled community is the assumption that disabled persons desire the abilities of what some abled individuals may define as “normal.” Individuals with disabilities may object to technologies intended to make them fit an able-bodied norm. “Normal” is relative to each individual, and it could be potentially harmful to use a deficit view of disability, which means judging a disability as a deficiency. However, this is not to say that all disabled individuals will reject a technology that may enhance their abilities. Instead, I believe it is a consideration that must be recognized when developing technologies for the disabled community, and it can only be addressed through communication with disabled persons. As a result, I believe this is a conversation that must be had with the community for whom the technology is developed–disabled persons.

With technologies that aim to address disabilities, we walk a fine line between therapeutics and enhancement. Though not the first neural implant medical device, the Link may have been the first BCI system openly discussed for its potential transhumanism uses, such as “enhanced cognitive abilities, memory storage and retrieval, gaming, telepathy, and even symbiosis with machines.” …

Benes also discusses transhumanism, privacy issues, and consent issues. It’s a thoughtful reading experience.

Second is a July 9, 2021 posting by anonymous on the University of California at Berkeley School of Information blog which provides more insight into privacy and other issues associated with data collection (and introduced me to the concept of decisional interference),

As the development of microchips furthers and advances in neuroscience occur, the possibility for seamless brain-machine interfaces, where a device decodes inputs from the user’s brain to perform functions, becomes more of a reality. Various forms of these technologies already exist. However, technological advances have made implantable and portable devices possible. Imagine a future where humans don’t need to talk to each other, but rather can transmit their thoughts directly to another person. This idea is the eventual goal of Elon Musk, the founder of Neuralink. Currently, Neuralink is one of the main companies involved in the advancement of this type of technology. Analysis of Neuralink’s technology and its overall mission statement provides an interesting insight into the future of this type of human-computer interface and the potential privacy and ethical concerns with this technology.

As this technology further develops, several privacy and ethical concerns come into question. To begin, using Solove’s Taxonomy as a privacy framework, many areas of potential harm are revealed. In the realm of information collection, there is much risk. Brain-computer interfaces, depending on where they are implanted, could have access to people’s most private thoughts and emotions. This information would need to be transmitted to another device for processing. The collection of this information by companies such as advertisers would represent a major breach of privacy. Additionally, there is risk to the user from information processing. These devices must work concurrently with other devices and often wirelessly. Given the widespread importance of cloud computing in much of today’s technology, offloading information from these devices to the cloud would be likely. Having the data stored in a database puts the user at the risk of secondary use if proper privacy policies are not implemented. The trove of information stored within the information collected from the brain is vast. These datasets could be combined with existing databases such as browsing history on Google to provide third parties with unimaginable context on individuals. There is also risk for information dissemination, more specifically, exposure. The information collected and processed by these devices would need to be stored digitally. Keeping such private information, even if anonymized, would be a huge potential for harm, as the contents of the information may in itself be re-identifiable to a specific individual. Lastly, there is risk for invasions such as decisional interference. Brain-machine interfaces would not only be able to read information in the brain but also write information. This would allow the device to make potential emotional changes in its users, which would be a major example of decisional interference. …

For the most recent Neuralink and brain implant ethics piece, there’s this February 14, 2024 essay on The Conversation, which, unusually for this publication, was solicited by the editors, Note: Links have been removed,

In January 2024, Musk announced that Neuralink implanted its first chip in a human subject’s brain. The Conversation reached out to two scholars at the University of Washington School of Medicine – Nancy Jecker, a bioethicst, and Andrew Ko, a neurosurgeon who implants brain chip devices – for their thoughts on the ethics of this new horizon in neuroscience.

Information about the implant, however, is scarce, aside from a brochure aimed at recruiting trial subjects. Neuralink did not register at ClinicalTrials.gov, as is customary, and required by some academic journals. [all emphases mine]

Some scientists are troubled by this lack of transparency. Sharing information about clinical trials is important because it helps other investigators learn about areas related to their research and can improve patient care. Academic journals can also be biased toward positive results, preventing researchers from learning from unsuccessful experiments.

Fellows at the Hastings Center, a bioethics think tank, have warned that Musk’s brand of “science by press release, while increasingly common, is not science. [emphases mine]” They advise against relying on someone with a huge financial stake in a research outcome to function as the sole source of information.

When scientific research is funded by government agencies or philanthropic groups, its aim is to promote the public good. Neuralink, on the other hand, embodies a private equity model [emphasis mine], which is becoming more common in science. Firms pooling funds from private investors to back science breakthroughs may strive to do good, but they also strive to maximize profits, which can conflict with patients’ best interests.

In 2022, the U.S. Department of Agriculture investigated animal cruelty at Neuralink, according to a Reuters report, after employees accused the company of rushing tests and botching procedures on test animals in a race for results. The agency’s inspection found no breaches, according to a letter from the USDA secretary to lawmakers, which Reuters reviewed. However, the secretary did note an “adverse surgical event” in 2019 that Neuralink had self-reported.

In a separate incident also reported by Reuters, the Department of Transportation fined Neuralink for violating rules about transporting hazardous materials, including a flammable liquid.

…the possibility that the device could be increasingly shown to be helpful for people with disabilities, but become unavailable due to loss of research funding. For patients whose access to a device is tied to a research study, the prospect of losing access after the study ends can be devastating. [emphasis mine] This raises thorny questions about whether it is ever ethical to provide early access to breakthrough medical interventions prior to their receiving full FDA approval.

Not registering a clinical trial would seem to suggest there won’t be much oversight. As for Musk’s “science by press release” activities, I hope those will be treated with more skepticism by mainstream media although that seems unlikely given the current situation with journalism (more about that in a future post).

As for the issues associated with private equity models for science research and the problem of losing access to devices after a clinical trial is ended, my April 5, 2022 posting, “Going blind when your neural implant company flirts with bankruptcy (long read)” offers some cautionary tales, in addition to being the most comprehensive piece I’ve published on ethics and brain implants.

My July 17, 2023 posting, “Unveiling the Neurotechnology Landscape: Scientific Advancements, Innovations and Major Trends—a UNESCO report” offers a brief overview of the international scene.

Communicating thoughts by means of brain implants?

The Australian military announced mind-controlled robots in Spring 2023 (see my June 13, 2023 posting) and, recently, scientists at Duke University (North Carolina, US) have announced research that may allow people who are unable to speak to communicate their thoughts, from a November 6, 2023 news item on ScienceDaily,

A speech prosthetic developed by a collaborative team of Duke neuroscientists, neurosurgeons, and engineers can translate a person’s brain signals into what they’re trying to say.

Appearing Nov. 6 [2023] in the journal Nature Communications, the new technology might one day help people unable to talk due to neurological disorders regain the ability to communicate through a brain-computer interface.

One more plastic brain for this blog,

Caption: A device no bigger than a postage stamp (dotted portion within white band) packs 128 microscopic sensors that can translate brain cell activity into what someone intends to say. Credit: Dan Vahaba/Duke University

A November 6, 2023 Duke University news release (also on EurekAlert), which originated the news item, provides more detail, Note: Links have been removed,

“There are many patients who suffer from debilitating motor disorders, like ALS (amyotrophic lateral sclerosis) or locked-in syndrome, that can impair their ability to speak,” said Gregory Cogan, Ph.D., a professor of neurology at Duke University’s School of Medicine and one of the lead researchers involved in the project. “But the current tools available to allow them to communicate are generally very slow and cumbersome.”

Imagine listening to an audiobook at half-speed. That’s the best speech decoding rate currently available, which clocks in at about 78 words per minute. People, however, speak around 150 words per minute.

The lag between spoken and decoded speech rates is partially due to the relatively few brain activity sensors that can be fused onto a paper-thin piece of material that lies atop the surface of the brain. Fewer sensors provide less decipherable information to decode.

To improve on past limitations, Cogan teamed up with fellow Duke Institute for Brain Sciences faculty member Jonathan Viventi, Ph.D., whose biomedical engineering lab specializes in making high-density, ultra-thin, and flexible brain sensors.

For this project, Viventi and his team packed an impressive 256 microscopic brain sensors onto a postage stamp-sized piece of flexible, medical-grade plastic. Neurons just a grain of sand apart can have wildly different activity patterns when coordinating speech, so it’s necessary to distinguish signals from neighboring brain cells to help make accurate predictions about intended speech.

After fabricating the new implant, Cogan and Viventi teamed up with several Duke University Hospital neurosurgeons, including Derek Southwell, M.D., Ph.D., Nandan Lad, M.D., Ph.D., and Allan Friedman, M.D., who helped recruit four patients to test the implants. The experiment required the researchers to place the device temporarily in patients who were undergoing brain surgery for some other condition, such as treating Parkinson’s disease or having a tumor removed. Time was limited for Cogan and his team to test drive their device in the OR.

“I like to compare it to a NASCAR pit crew,” Cogan said. “We don’t want to add any extra time to the operating procedure, so we had to be in and out within 15 minutes. As soon as the surgeon and the medical team said ‘Go!’ we rushed into action and the patient performed the task.”

The task was a simple listen-and-repeat activity. Participants heard a series of nonsense words, like “ava,” “kug,” or “vip,” and then spoke each one aloud. The device recorded activity from each patient’s speech motor cortex as it coordinated nearly 100 muscles that move the lips, tongue, jaw, and larynx.

Afterwards, Suseendrakumar Duraivel, the first author of the new report and a biomedical engineering graduate student at Duke, took the neural and speech data from the surgery suite and fed it into a machine learning algorithm to see how accurately it could predict what sound was being made, based only on the brain activity recordings.

For some sounds and participants, like /g/ in the word “gak,” the decoder got it right 84% of the time when it was the first sound in a string of three that made up a given nonsense word.

Accuracy dropped, though, as the decoder parsed out sounds in the middle or at the end of a nonsense word. It also struggled if two sounds were similar, like /p/ and /b/.

Overall, the decoder was accurate 40% of the time. That may seem like a humble test score, but it was quite impressive given that similar brain-to-speech technical feats require hours’ or days’ worth of data to draw from. The speech decoding algorithm Duraivel used, however, was working with only 90 seconds of spoken data from the 15-minute test.
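To make the decoding idea a little more concrete, here is a deliberately simple sketch, and only a sketch: this is not the Duke team’s actual pipeline or data, just a toy nearest-centroid classifier that labels a sound from a vector of simulated per-sensor activity values. The phoneme set, the eight pretend sensors, and every number below are invented for illustration; note how the made-up “p” and “b” patterns are kept similar, mirroring the confusable /p/ and /b/ sounds the article mentions.

```python
import math
import random

# Toy stand-in for a speech decoder (NOT the study's model): classify a
# feature vector (one activity value per pretend sensor) by finding the
# nearest phoneme centroid. All labels and numbers here are invented.

PHONEMES = ["g", "p", "b"]

def centroid(vectors):
    """Element-wise average of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(examples):
    """examples: dict of phoneme -> list of feature vectors -> centroids."""
    return {p: centroid(vs) for p, vs in examples.items()}

def predict(model, features):
    """Return the phoneme whose centroid is nearest (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(model, key=lambda p: dist(model[p], features))

random.seed(0)
# Synthetic training data: a distinct base pattern per phoneme across
# 8 pretend sensors, plus noise. "p" and "b" differ in only one sensor,
# echoing how similar sounds were hardest for the real decoder.
base = {"g": [1, 0, 0, 1, 0, 0, 1, 0],
        "p": [0, 1, 0, 0, 1, 0, 0, 1],
        "b": [0, 1, 1, 0, 1, 0, 0, 1]}
examples = {p: [[x + random.gauss(0, 0.2) for x in base[p]]
                for _ in range(20)] for p in PHONEMES}
model = train(examples)
print(predict(model, base["g"]))  # prints: g
```

The real system works from 256 sensors and a trained machine learning model, but the shape of the problem is the same: many noisy activity channels in, one intended sound out.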

Duraivel and his mentors are excited about making a cordless version of the device with a recent $2.4M grant from the National Institutes of Health.

“We’re now developing the same kind of recording devices, but without any wires,” Cogan said. “You’d be able to move around, and you wouldn’t have to be tied to an electrical outlet, which is really exciting.”

While their work is encouraging, there’s still a long way to go before Viventi and Cogan’s speech prosthetic hits the shelves.

“We’re at the point where it’s still much slower than natural speech,” Viventi said in a recent Duke Magazine piece about the technology, “but you can see the trajectory where you might be able to get there.”

Here’s a link to and a citation for the paper,

High-resolution neural recordings improve the accuracy of speech decoding by Suseendrakumar Duraivel, Shervin Rahimpour, Chia-Han Chiang, Michael Trumpis, Charles Wang, Katrina Barth, Stephen C. Harward, Shivanand P. Lad, Allan H. Friedman, Derek G. Southwell, Saurabh R. Sinha, Jonathan Viventi & Gregory B. Cogan. Nature Communications volume 14, Article number: 6938 (2023) DOI: https://doi.org/10.1038/s41467-023-42555-1 Published: 06 November 2023

This paper is open access.

“Injectable tissue prosthesis coupled with closed-loop bioelectronic system” for damaged muscle/nerve regeneration and robot-assisted rehabilitation

A fascinating new use for hyaluronic acid (usually discussed in relation to cosmetic wrinkle-reduction) has been found, according to a November 1, 2023 news item on ScienceDaily.

In a recent publication in the journal Nature, researchers from the Institute of Basic Science (IBS) in South Korea have made significant strides in biomaterial technology and rehabilitation medicine. They’ve developed a novel approach to healing muscle injury by employing “injectable tissue prosthesis” in the form of conductive hydrogels and combining it with a robot-assisted rehabilitation system.

Let’s imagine you are swimming in the ocean. A giant shark approaches and bites a huge chunk of meat out of your thigh, resulting in a complete loss of motor/sensor function in your leg.

If left untreated, such severe muscle damage would result in permanent loss of function and disability.

How on Earth will you be able to recover from this kind of injury?

Traditional rehabilitation methods for these kinds of muscle injuries have long sought an efficient closed-loop gait rehabilitation system that merges lightweight exoskeletons and wearable/implantable devices.

Such an assistive prosthetic system is required to aid the patients through the process of recovering sensory and motor functions linked to nerve and muscle damage.

Unfortunately, the mechanical properties and rigid nature of existing electronic materials render them incompatible with soft tissues.

This leads to friction and potential inflammation, stalling patient rehabilitation.

To overcome these limitations, the IBS researchers turned to a material commonly used as a wrinkle-smoothing filler, called hyaluronic acid.

A November 2, 2023 Institute of Basic Science (IBS) press release (also on EurekAlert but published November 1, 2023), which originated the news item, explains how hyaluronic acid helps in tissue rehabilitation and regeneration,

Using this substance [hyaluronic acid], an injectable hydrogel was developed for “tissue prostheses”, which can temporarily fill the gap of the missing muscle/nerve tissues while it regenerates. The injectable nature of this material gives it a significant advantage over traditional bioelectronic devices, which are unsuitable for narrow, deep, or small areas, and necessitate invasive surgeries.

Thanks to its highly “tissue-like” properties, this hydrogel seamlessly interfaces with biological tissues and can be easily administered to hard-to-reach body areas without surgery. The reversible and irreversible crosslinks within the hydrogel adapt to high shear stress during injection, ensuring excellent mechanical stability. This hydrogel also incorporates gold nanoparticles, which give it decent electrical properties. Its conductive nature allows for the effective transmission of electrophysiological signals between the two ends of injured tissues. In addition, the hydrogel is biodegradable, meaning that patients do not need a second surgery to remove it.

With mechanical properties akin to natural tissues, exceptional tissue adhesion, and injectable characteristics, researchers believe this material offers a novel approach to rehabilitation.

Next, the researchers put this novel idea to the test in rodent models. To simulate volumetric muscle loss injury, a large chunk of muscle was removed from the hind legs of these animals. By injecting the hydrogel and implanting two kinds of stretchable tissue-interfacing devices for electrical sensing and stimulation, the researchers were able to improve the gait of the “injured” rodents. The hydrogel prosthetics were combined with robot assistance, guided by muscle electromyography signals. Together, the two helped enhance the animals’ gait without nerve stimulation. Furthermore, muscle tissue regeneration was effectively improved over the long term after the conductive hydrogel was used to fill the muscle damage.
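The closed-loop logic described above — record muscle signals through the conductive interface, and trigger robotic assistance when intended movement is detected — can be sketched in a few lines. To be clear, this is a toy illustration and not the IBS team’s actual control software; the threshold value, the sample numbers, and the function name are all hypothetical.

```python
# Toy closed-loop rehabilitation controller: when the rectified EMG signal
# recorded across the hydrogel interface crosses a threshold (a sign of
# intended movement), the robotic exoskeleton assists one step.
# All numbers and names here are hypothetical, for illustration only.

EMG_THRESHOLD = 0.5  # arbitrary units; a real system would calibrate this

def assist_steps(emg_samples, threshold=EMG_THRESHOLD):
    """Return the sample indices at which assistance would be triggered."""
    triggered = []
    above = False
    for i, s in enumerate(emg_samples):
        level = abs(s)  # rectify: muscle activity regardless of polarity
        if level >= threshold and not above:
            triggered.append(i)  # rising edge -> start one assisted step
        above = level >= threshold
    return triggered

# Simulated recording: quiet baseline, then two bursts of intended movement.
signal = [0.05, 0.1, 0.8, 0.9, 0.7, 0.1, 0.05, -0.6, -0.8, -0.1]
print(assist_steps(signal))  # prints: [2, 7]
```

The rising-edge check matters: without it, every above-threshold sample during a single muscle burst would fire the exoskeleton again, instead of one assist per intended step.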

The injectable conductive hydrogel developed in this study excels in electrophysiological signal recording and stimulation performance, offering the potential to expand its applications. It presents a fresh approach to the field of bioelectronic devices and holds promise as a soft tissue prosthesis for rehabilitation support.

Emphasizing the significance of the research, Professor SHIN Mikyung notes, “We’ve created an injectable, mechanically tough, and electrically conductive soft tissue prosthesis ideal for addressing severe muscle damage requiring neuromusculoskeletal rehabilitation. The development of this injectable hydrogel, utilizing a novel cross-linking method, is a notable achievement. We believe it will be applicable not only in muscles and peripheral nerves but also in various organs like the brain and heart.”

Professor SON Donghee added, “In this study, the closed-loop gait rehabilitation system entailing tough injectable hydrogel and stretchable and self-healing sensors could significantly enhance the rehabilitation prospects for patients with neurological and musculoskeletal challenges. It could also play a vital role in precise diagnosis and treatment across various organs in the human body.”

The research team is currently pursuing further studies to develop new materials for nerve and muscle tissue regeneration that can be implanted in a minimally invasive manner. They are also exploring the potential for recovery in various tissue damages through the injection of the conductive hydrogel, eliminating the need for open surgery.

Here’s a link to and a citation for the paper,

Injectable tissue prosthesis for instantaneous closed-loop rehabilitation by Subin Jin, Heewon Choi, Duhwan Seong, Chang-Lim You, Jong-Sun Kang, Seunghyok Rho, Won Bo Lee, Donghee Son & Mikyung Shin. Nature volume 623, pages 58–65 (2023) DOI: https://doi.org/10.1038/s41586-023-06628-x Published: 01 November 2023 Issue Date: 02 November 2023

This paper is behind a paywall.