Category Archives: human enhancement

Restoring words with a neuroprosthesis

There seems to have been an update to the script for the voiceover. You’ll find it at the 1 min. 30 secs. mark (spoken: “with up to 93% accuracy at 18 words per minute” vs. written: “with median 74% accuracy at 15 words per minute”).

A July 14, 2021 news item on ScienceDaily announces the latest work on a neuroprosthesis from the University of California at San Francisco (UCSF),

Researchers at UC San Francisco have successfully developed a “speech neuroprosthesis” that has enabled a man with severe paralysis to communicate in sentences, translating signals from his brain to the vocal tract directly into words that appear as text on a screen.

The achievement, which was developed in collaboration with the first participant of a clinical research trial, builds on more than a decade of effort by UCSF neurosurgeon Edward Chang, MD, to develop a technology that allows people with paralysis to communicate even if they are unable to speak on their own. The study appears July 15 [2021] in the New England Journal of Medicine.

A July 14, 2021 UCSF news release (also on EurekAlert), which originated the news item, delves further into the topic,

“To our knowledge, this is the first successful demonstration of direct decoding of full words from the brain activity of someone who is paralyzed and cannot speak,” said Chang, the Joan and Sanford Weill Chair of Neurological Surgery at UCSF, Jeanne Robertson Distinguished Professor, and senior author on the study. “It shows strong promise to restore communication by tapping into the brain’s natural speech machinery.”

Each year, thousands of people lose the ability to speak due to stroke, accident, or disease. With further development, the approach described in this study could one day enable these people to fully communicate.

Translating Brain Signals into Speech

Previously, work in the field of communication neuroprosthetics has focused on restoring communication through spelling-based approaches to type out letters one-by-one in text. Chang’s study differs from these efforts in a critical way: his team is translating signals intended to control muscles of the vocal system for speaking words, rather than signals to move the arm or hand to enable typing. Chang said this approach taps into the natural and fluid aspects of speech and promises more rapid and organic communication.

“With speech, we normally communicate information at a very high rate, up to 150 or 200 words per minute,” he said, noting that spelling-based approaches using typing, writing, and controlling a cursor are considerably slower and more laborious. “Going straight to words, as we’re doing here, has great advantages because it’s closer to how we normally speak.”

Over the past decade, Chang’s progress toward this goal was facilitated by patients at the UCSF Epilepsy Center who were undergoing neurosurgery to pinpoint the origins of their seizures using electrode arrays placed on the surface of their brains. These patients, all of whom had normal speech, volunteered to have their brain recordings analyzed for speech-related activity. Early success with these patient volunteers paved the way for the current trial in people with paralysis.

Previously, Chang and colleagues in the UCSF Weill Institute for Neurosciences mapped the cortical activity patterns associated with vocal tract movements that produce each consonant and vowel. To translate those findings into speech recognition of full words, David Moses, PhD, a postdoctoral engineer in the Chang lab and lead author of the new study, developed new methods for real-time decoding of those patterns, as well as incorporating statistical language models to improve accuracy.

But their success in decoding speech in participants who were able to speak didn’t guarantee that the technology would work in a person whose vocal tract is paralyzed. “Our models needed to learn the mapping between complex brain activity patterns and intended speech,” said Moses. “That poses a major challenge when the participant can’t speak.”

In addition, the team didn’t know whether brain signals controlling the vocal tract would still be intact for people who haven’t been able to move their vocal muscles for many years. “The best way to find out whether this could work was to try it,” said Moses.

The First 50 Words

To investigate the potential of this technology in patients with paralysis, Chang partnered with colleague Karunesh Ganguly, MD, PhD, an associate professor of neurology, to launch a study known as “BRAVO” (Brain-Computer Interface Restoration of Arm and Voice). The first participant in the trial is a man in his late 30s who suffered a devastating brainstem stroke more than 15 years ago that severely damaged the connection between his brain and his vocal tract and limbs. Since his injury, he has had extremely limited head, neck, and limb movements, and communicates by using a pointer attached to a baseball cap to poke letters on a screen.

The participant, who asked to be referred to as BRAVO1, worked with the researchers to create a 50-word vocabulary that Chang’s team could recognize from brain activity using advanced computer algorithms. The vocabulary – which includes words such as “water,” “family,” and “good” – was sufficient to create hundreds of sentences expressing concepts applicable to BRAVO1’s daily life.

For the study, Chang surgically implanted a high-density electrode array over BRAVO1’s speech motor cortex. After the participant’s full recovery, his team recorded 22 hours of neural activity in this brain region over 48 sessions and several months. In each session, BRAVO1 attempted to say each of the 50 vocabulary words many times while the electrodes recorded brain signals from his speech cortex.

Translating Attempted Speech into Text

To translate the patterns of recorded neural activity into specific intended words, Moses’s two co-lead authors, Sean Metzger and Jessie Liu, both bioengineering graduate students in the Chang Lab, used custom neural network models, which are forms of artificial intelligence. When the participant attempted to speak, these networks distinguished subtle patterns in brain activity to detect speech attempts and identify which words he was trying to say.

To test their approach, the team first presented BRAVO1 with short sentences constructed from the 50 vocabulary words and asked him to try saying them several times. As he made his attempts, the words were decoded from his brain activity, one by one, on a screen.

Then the team switched to prompting him with questions such as “How are you today?” and “Would you like some water?” As before, BRAVO1’s attempted speech appeared on the screen: “I am very good,” and “No, I am not thirsty.”

Chang and Moses found that the system was able to decode words from brain activity at a rate of up to 18 words per minute with up to 93 percent accuracy (75 percent median). Contributing to the success was a language model Moses applied that implemented an “auto-correct” function, similar to what is used by consumer texting and speech recognition software.
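To make the “auto-correct” idea concrete, here is a minimal sketch with an invented five-word vocabulary and made-up probabilities (not the UCSF models) of how a bigram language model can overrule a noisy per-word classifier during decoding:

```python
# A minimal sketch, not the UCSF system: all probabilities are invented.
# A Viterbi search scores classifier probability times bigram probability,
# so the language model can "auto-correct" a shaky classifier output.

VOCAB = ["i", "am", "good", "thirsty", "not"]

# Hypothetical classifier output: P(word | brain activity) at each attempt.
# At the third step the classifier narrowly favours "thirsty" over "good".
emissions = [
    {"i": 0.7, "am": 0.1, "good": 0.1, "thirsty": 0.05, "not": 0.05},
    {"i": 0.2, "am": 0.5, "good": 0.1, "thirsty": 0.1, "not": 0.1},
    {"i": 0.1, "am": 0.1, "good": 0.35, "thirsty": 0.4, "not": 0.05},
]

# Hypothetical bigram language model: P(word | previous word).
bigram = {
    ("<s>", "i"): 0.9,
    ("i", "am"): 0.8,
    ("am", "good"): 0.6,
    ("am", "thirsty"): 0.05,
}

def decode(emissions, bigram, vocab, floor=0.01):
    """Viterbi decoding: best word sequence under classifier x language model."""
    paths = {"<s>": (1.0, [])}  # last word -> (probability, sentence so far)
    for em in emissions:
        paths = {
            w: max(
                ((p * bigram.get((prev, w), floor) * em[w], hist + [w])
                 for prev, (p, hist) in paths.items()),
                key=lambda t: t[0],
            )
            for w in vocab
        }
    return max(paths.values(), key=lambda t: t[0])[1]

print(decode(emissions, bigram, VOCAB))  # -> ['i', 'am', 'good']
```

Although the classifier slightly prefers “thirsty” at the third step, the language model’s high probability for “good” after “am” flips the decoded sentence to “i am good”, mimicking the auto-correct behaviour described above.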

Moses characterized the early trial results as a proof of principle. “We were thrilled to see the accurate decoding of a variety of meaningful sentences,” he said. “We’ve shown that it is actually possible to facilitate communication in this way and that it has potential for use in conversational settings.”

Looking forward, Chang and Moses said they will expand the trial to include more participants affected by severe paralysis and communication deficits. The team is currently working to increase the number of words in the available vocabulary, as well as improve the rate of speech.

Both said that while the study focused on a single participant and a limited vocabulary, those limitations don’t diminish the accomplishment. “This is an important technological milestone for a person who cannot communicate naturally,” said Moses, “and it demonstrates the potential for this approach to give a voice to people with severe paralysis and speech loss.”

… all of UCSF. Funding sources [emphasis mine] included National Institutes of Health (U01 NS098971-01), philanthropy, and a sponsored research agreement with Facebook Reality Labs (FRL), [emphasis mine] which was completed in early 2021.

UCSF researchers conducted all clinical trial design, execution, data analysis and reporting. Research participant data were collected solely by UCSF, are held confidentially, and are not shared with third parties. FRL provided high-level feedback and machine learning advice.

Here’s a link to and a citation for the paper,

Neuroprosthesis for Decoding Speech in a Paralyzed Person with Anarthria by David A. Moses, Ph.D., Sean L. Metzger, M.S., Jessie R. Liu, B.S., Gopala K. Anumanchipalli, Ph.D., Joseph G. Makin, Ph.D., Pengfei F. Sun, Ph.D., Josh Chartier, Ph.D., Maximilian E. Dougherty, B.A., Patricia M. Liu, M.A., Gary M. Abrams, M.D., Adelyn Tu-Chan, D.O., Karunesh Ganguly, M.D., Ph.D., and Edward F. Chang, M.D. N Engl J Med 2021; 385:217-227 DOI: 10.1056/NEJMoa2027540 Published July 15, 2021

This paper is mostly behind a paywall but you do have this option: “Create your account to get 2 free subscriber-only articles each month.”

Your cyborg future (brain-computer interface) is closer than you think

Researchers at the Imperial College London (ICL) are warning that brain-computer interfaces (BCIs) may pose a number of quandaries. (At the end of this post, I have a little look into some of the BCI ethical issues previously explored on this blog.)

Here’s more from a July 20, 2021 American Institute of Physics (AIP) news release (also on EurekAlert),

Surpassing the biological limitations of the brain and using one’s mind to interact with and control external electronic devices may sound like the distant cyborg future, but it could come sooner than we think.

Researchers from Imperial College London conducted a review of modern commercial brain-computer interface (BCI) devices, and they discuss the primary technological limitations and humanitarian concerns of these devices in APL Bioengineering, from AIP Publishing.

The most promising method to achieve real-world BCI applications is through electroencephalography (EEG), a method of monitoring the brain noninvasively through its electrical activity. EEG-based BCIs, or eBCIs, will require a number of technological advances prior to widespread use, but more importantly, they will raise a variety of social, ethical, and legal concerns.
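As a rough illustration of what “monitoring the brain noninvasively through its electrical activity” involves, here is a toy sketch of the kind of feature an eBCI might extract. Everything is synthetic and the numbers (sample rate, band edges, amplitudes) are invented for the sketch, not taken from the review:

```python
# Illustrative only: a synthetic "EEG" trace and the sort of band-power
# feature an EEG-based BCI might compute. All parameters are invented.
import numpy as np

FS = 250  # assumed sample rate (Hz)

def band_power(signal, fs, lo, hi):
    """Mean spectral power of `signal` between lo and hi Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

# Two seconds of fake EEG: a 10 Hz alpha rhythm buried in noise.
rng = np.random.default_rng(0)
t = np.arange(0, 2, 1 / FS)
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)

alpha = band_power(eeg, FS, 8, 12)   # band containing the rhythm
beta = band_power(eeg, FS, 13, 30)   # comparison band (noise only)
print(alpha > beta)  # the alpha feature stands out against the noise
```

A real eBCI would map features like this, tracked over time and across electrodes, onto commands for an external device.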

Though it is difficult to understand exactly what a user experiences when operating an external device with an eBCI, a few things are certain. For one, eBCIs can communicate both ways. This allows a person to control electronics, which is particularly useful for medical patients who need help controlling wheelchairs, for example, but it also potentially changes the way the brain functions.

“For some of these patients, these devices become such an integrated part of themselves that they refuse to have them removed at the end of the clinical trial,” said Rylie Green, one of the authors. “It has become increasingly evident that neurotechnologies have the potential to profoundly shape our own human experience and sense of self.”

Aside from these potentially bleak mental and physiological side effects, intellectual property concerns are also an issue and may allow private companies that develop eBCI technologies to own users’ neural data.

“This is particularly worrisome, since neural data is often considered to be the most intimate and private information that could be associated with any given user,” said Roberto Portillo-Lara, another author. “This is mainly because, apart from its diagnostic value, EEG data could be used to infer emotional and cognitive states, which would provide unparalleled insight into user intentions, preferences, and emotions.”

As the availability of these platforms increases past medical treatment, disparities in access to these technologies may exacerbate existing social inequalities. For example, eBCIs can be used for cognitive enhancement and cause extreme imbalances in academic or professional successes and educational advancements.

“This bleak panorama brings forth an interesting dilemma about the role of policymakers in BCI commercialization,” Green said. “Should regulatory bodies intervene to prevent misuse and unequal access to neurotech? Should society follow instead the path taken by previous innovations, such as the internet or the smartphone, which originally targeted niche markets but are now commercialized on a global scale?”

She calls on global policymakers, neuroscientists, manufacturers, and potential users of these technologies to begin having these conversations early and collaborate to produce answers to these difficult moral questions.

“Despite the potential risks, the ability to integrate the sophistication of the human mind with the capabilities of modern technology constitutes an unprecedented scientific achievement, which is beginning to challenge our own preconceptions of what it is to be human,” [emphasis mine] Green said.

Caption: A schematic demonstrates the steps required for eBCI operation. EEG sensors acquire electrical signals from the brain, which are processed and outputted to control external devices. Credit: Portillo-Lara et al.

Here’s a link to and a citation for the paper,

Mind the gap: State-of-the-art technologies and applications for EEG-based brain-computer interfaces by Roberto Portillo-Lara, Bogachan Tahirbegi, Christopher A.R. Chapman, Josef A. Goding, and Rylie A. Green. APL Bioengineering, Volume 5, Issue 3, 031507 (2021) DOI: https://doi.org/10.1063/5.0047237 Published Online: 20 July 2021

This paper appears to be open access.

Back on September 17, 2020, I published a post about a brain implant that included some material I’d dug up on ethics and brain-computer interfaces, and I was most struck by one of the stories. Here’s the excerpt (which can be found under the “Brain-computer interfaces, symbiosis, and ethical issues” subhead), from a July 24, 2019 article by Liam Drew for Nature Outlook: The brain,

“It becomes part of you,” Patient 6 said, describing the technology that enabled her, after 45 years of severe epilepsy, to halt her disabling seizures. Electrodes had been implanted on the surface of her brain that would send a signal to a hand-held device when they detected signs of impending epileptic activity. On hearing a warning from the device, Patient 6 knew to take a dose of medication to halt the coming seizure.

“You grow gradually into it and get used to it, so it then becomes a part of every day,” she told Frederic Gilbert, an ethicist who studies brain–computer interfaces (BCIs) at the University of Tasmania in Hobart, Australia. “It became me,” she said. [emphasis mine]

Gilbert was interviewing six people who had participated in the first clinical trial of a predictive BCI to help understand how living with a computer that monitors brain activity directly affects individuals psychologically. Patient 6’s experience was extreme: Gilbert describes her relationship with her BCI as a “radical symbiosis”.

This is from another part of the September 17, 2020 posting,

… He [Gilbert] is now preparing a follow-up report on Patient 6. The company that implanted the device in her brain to help free her from seizures went bankrupt. The device had to be removed.

… Patient 6 cried as she told Gilbert about losing the device. … “I lost myself,” she said.

“It was more than a device,” Gilbert says. “The company owned the existence of this new person.”

It wasn’t my first thought when the topic of ethics and BCIs came up but as Gilbert’s research highlights: what happens if the company that made your implant and monitors it goes bankrupt?

If you have the time, do take a look at the entire entry under the “Brain-computer interfaces, symbiosis, and ethical issues” subhead of the September 17, 2020 posting or read the July 24, 2019 article by Liam Drew.

Should you have a problem finding the July 20, 2021 American Institute of Physics news release at either of the two links I have previously supplied, there’s a July 20, 2021 copy at SciTechDaily.com.

University of Alberta researchers 3D print nose cartilage

A May 4, 2021 news item on ScienceDaily announced work that may result in the restoration of nasal cartilage for skin cancer patients,

A team of University of Alberta researchers has discovered a way to use 3-D bioprinting technology to create custom-shaped cartilage for use in surgical procedures. The work aims to make it easier for surgeons to safely restore the features of skin cancer patients living with nasal cartilage defects after surgery.

The researchers used a specially designed hydrogel — a material similar to Jell-O — that could be mixed with cells harvested from a patient and then printed in a specific shape captured through 3-D imaging. Over a matter of weeks, the material is cultured in a lab to become functional cartilage.

“It takes a lifetime to make cartilage in an individual, while this method takes about four weeks. So you still expect that there will be some degree of maturity that it has to go through, especially when implanted in the body. But functionally it’s able to do the things that cartilage does,” said Adetola Adesida, a professor of surgery in the Faculty of Medicine & Dentistry.

“It has to have certain mechanical properties and it has to have strength. This meets those requirements with a material that (at the outset) is 92 per cent water,” added Yaman Boluk, a professor in the Faculty of Engineering.

Who would have thought that nose cartilage would look like a worm?

Caption: 3-D printed cartilage is shaped into a curve suitable for use in surgery to rebuild a nose. The technology could eventually replace the traditional method of taking cartilage from the patient’s rib, a procedure that comes with complications. Credit: University of Alberta

A May 4, 2021 University of Alberta news release (also on EurekAlert) by Ross Neitz, which originated the news item, details why this research is important,

Adesida, Boluk and graduate student Xiaoyi Lan led the project to create the 3-D printed cartilage in hopes of providing a better solution for a clinical problem facing many patients with skin cancer.

Each year upwards of three million people in North America are diagnosed with non-melanoma skin cancer. Of those, 40 per cent will have lesions on their noses, with many requiring surgery to remove them. As part of the procedure, many patients may have cartilage removed, leaving facial disfiguration.

Traditionally, surgeons would take cartilage from one of the patient’s ribs and reshape it to fit the needed size and shape for reconstructive surgery. But the procedure comes with complications.

“When the surgeons restructure the nose, it is straight. But when it adapts to its new environment, it goes through a period of remodelling where it warps, almost like the curvature of the rib,” said Adesida. “Visually on the face, that’s a problem.

“The other issue is that you’re opening the rib compartment, which protects the lungs, just to restructure the nose. It’s a very vital anatomical location. The patient could have a collapsed lung and has a much higher risk of dying,” he added.

The researchers say their work is an example of both precision medicine and regenerative medicine. Lab-grown cartilage printed specifically for the patient can remove the risk of lung collapse, infection in the lungs and severe scarring at the site of a patient’s ribs.

“This is to the benefit of the patient. They can go on the operating table, have a small biopsy taken from their nose in about 30 minutes, and from there we can build different shapes of cartilage specifically for them,” said Adesida. “We can even bank the cells and use them later to build everything needed for the surgery. This is what this technology allows you to do.”

The team is continuing its research and is now testing whether the lab-grown cartilage retains its properties after transplantation in animal models. The team hopes to move the work to a clinical trial within the next two to three years.

Here’s a link to and a citation for the paper,

Bioprinting of human nasoseptal chondrocytes-laden collagen hydrogel for cartilage tissue engineering by Xiaoyi Lan, Yan Liang, Esra J. N. Erkut, Melanie Kunze, Aillette Mulet-Sierra, Tianxing Gong, Martin Osswald, Khalid Ansari, Hadi Seikaly, Yaman Boluk, Adetola B. Adesida. The FASEB Journal Volume 35, Issue 3 March 2021 e21191 DOI: https://doi.org/10.1096/fj.202002081R First published online: 17 February 2021

This paper is open access.

Synaptic transistor better than memristor when it comes to brainlike learning for computers

An April 30, 2021 news item on Nanowerk announced research in the field of neuromorphic (brainlike) computing from a joint team of researchers at Northwestern University (Evanston, Illinois, US) and the University of Hong Kong,

Researchers have developed a brain-like computing device that is capable of learning by association.

Similar to how famed physiologist Ivan Pavlov conditioned dogs to associate a bell with food, researchers at Northwestern University and the University of Hong Kong successfully conditioned their circuit to associate light with pressure.

The device’s secret lies within its novel organic, electrochemical “synaptic transistors,” which simultaneously process and store information just like the human brain. The researchers demonstrated that the transistor can mimic the short-term and long-term plasticity of synapses in the human brain, building on memories to learn over time.

With its brain-like ability, the novel transistor and circuit could potentially overcome the limitations of traditional computing, including their energy-sapping hardware and limited ability to perform multiple tasks at the same time. The brain-like device also has higher fault tolerance, continuing to operate smoothly even when some components fail.

“Although the modern computer is outstanding, the human brain can easily outperform it in some complex and unstructured tasks, such as pattern recognition, motor control and multisensory integration,” said Northwestern’s Jonathan Rivnay, a senior author of the study. “This is thanks to the plasticity of the synapse, which is the basic building block of the brain’s computational power. These synapses enable the brain to work in a highly parallel, fault tolerant and energy-efficient manner. In our work, we demonstrate an organic, plastic transistor that mimics key functions of a biological synapse.”

Rivnay is an assistant professor of biomedical engineering at Northwestern’s McCormick School of Engineering. He co-led the study with Paddy Chan, an associate professor of mechanical engineering at the University of Hong Kong. Xudong Ji, a postdoctoral researcher in Rivnay’s group, is the paper’s first author.

Caption: By connecting single synaptic transistors into a neuromorphic circuit, researchers demonstrated that their device could simulate associative learning. Credit: Northwestern University

An April 30, 2021 Northwestern University news release (also on EurekAlert), which originated the news item, includes a good explanation about brainlike computing and information about how synaptic transistors work along with some suggestions for future applications,

Conventional, digital computing systems have separate processing and storage units, causing data-intensive tasks to consume large amounts of energy. Inspired by the combined computing and storage process in the human brain, researchers, in recent years, have sought to develop computers that operate more like the human brain, with arrays of devices that function like a network of neurons.

“The way our current computer systems work is that memory and logic are physically separated,” Ji said. “You perform computation and send that information to a memory unit. Then every time you want to retrieve that information, you have to recall it. If we can bring those two separate functions together, we can save space and save on energy costs.”

Currently, the memory resistor, or “memristor,” is the most well-developed technology that can perform combined processing and memory function, but memristors suffer from energy-costly switching and less biocompatibility. These drawbacks led researchers to the synaptic transistor — especially the organic electrochemical synaptic transistor, which operates with low voltages, continuously tunable memory and high compatibility for biological applications. Still, challenges exist.

“Even high-performing organic electrochemical synaptic transistors require the write operation to be decoupled from the read operation,” Rivnay said. “So if you want to retain memory, you have to disconnect it from the write process, which can further complicate integration into circuits or systems.”

How the synaptic transistor works

To overcome these challenges, the Northwestern and University of Hong Kong team optimized a conductive, plastic material within the organic, electrochemical transistor that can trap ions. In the brain, a synapse is a structure through which a neuron can transmit signals to another neuron, using small molecules called neurotransmitters. In the synaptic transistor, ions behave similarly to neurotransmitters, sending signals between terminals to form an artificial synapse. By retaining stored data from trapped ions, the transistor remembers previous activities, developing long-term plasticity.

The researchers demonstrated their device’s synaptic behavior by connecting single synaptic transistors into a neuromorphic circuit to simulate associative learning. They integrated pressure and light sensors into the circuit and trained the circuit to associate the two unrelated physical inputs (pressure and light) with one another.

Perhaps the most famous example of associative learning is Pavlov’s dog, which naturally drooled when it encountered food. After conditioning the dog to associate a bell ring with food, the dog also began drooling when it heard the sound of a bell. For the neuromorphic circuit, the researchers activated a voltage by applying pressure with a finger press. To condition the circuit to associate light with pressure, the researchers first applied pulsed light from an LED lightbulb and then immediately applied pressure. In this scenario, the pressure is the food and the light is the bell. The device’s corresponding sensors detected both inputs.

After one training cycle, the circuit made an initial connection between light and pressure. After five training cycles, the circuit significantly associated light with pressure. Light, alone, was able to trigger a signal, or “conditioned response.”
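The conditioning sequence above can be caricatured in a few lines of code. This is a toy model with invented numbers (the threshold and the per-cycle weight increment are arbitrary), not the device physics from the paper; it only illustrates the logic: paired light-plus-pressure cycles strengthen a retained weight until light alone crosses the output threshold.

```python
# Toy model of the associative learning described above. All numbers are
# invented; only the Pavlovian logic mirrors the experiment.

THRESHOLD = 1.0  # arbitrary output threshold for a "response"

class ToySynapticCircuit:
    def __init__(self):
        self.weight = 0.2  # initial light-to-output coupling (weak)

    def train_pair(self, increment=0.25):
        """One conditioning cycle: light immediately followed by pressure.
        Trapped ions are retained, so the gain persists (long-term plasticity)."""
        self.weight += increment

    def light_only_response(self):
        """Does light alone now trigger the output?"""
        return self.weight >= THRESHOLD

circuit = ToySynapticCircuit()
print(circuit.light_only_response())  # -> False before conditioning
for _ in range(5):
    circuit.train_pair()
print(circuit.light_only_response())  # -> True after five training cycles
```

The persistent `weight` stands in for the trapped-ion state of the organic transistor: it survives between cycles, which is what lets the circuit "remember" the association.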

Future applications

Because the synaptic circuit is made of soft polymers, like a plastic, it can be readily fabricated on flexible sheets and easily integrated into soft, wearable electronics, smart robotics and implantable devices that directly interface with living tissue and even the brain [emphasis mine].

“While our application is a proof of concept, our proposed circuit can be further extended to include more sensory inputs and integrated with other electronics to enable on-site, low-power computation,” Rivnay said. “Because it is compatible with biological environments, the device can directly interface with living tissue, which is critical for next-generation bioelectronics.”

Here’s a link to and a citation for the paper,

Mimicking associative learning using an ion-trapping non-volatile synaptic organic electrochemical transistor by Xudong Ji, Bryan D. Paulsen, Gary K. K. Chik, Ruiheng Wu, Yuyang Yin, Paddy K. L. Chan & Jonathan Rivnay. Nature Communications volume 12, Article number: 2480 (2021) DOI: https://doi.org/10.1038/s41467-021-22680-5 Published: 30 April 2021

This paper is open access.

“… devices that directly interface with living tissue and even the brain,” would I be the only one thinking about cyborgs?

The Internet of Bodies and Ghislaine Boddington

I stumbled across this event on my Twitter feed (h/t @katepullinger; Note: Kate Pullinger is a novelist and Professor of Creative Writing and Digital Media, Director of the Centre for Cultural and Creative Industries [CCCI] at Bath Spa University in the UK).

Anyone who visits here with any frequency will have noticed I have a number of articles on technology and the body (you can find them in the ‘human enhancement’ category and/or search for the machine/flesh tag). Boddington’s view is more expansive than the one I’ve taken and I welcome it. First, here’s the event information and, then, a link to her open access paper from February 2021.

From the CCCI’s Annual Public Lecture with Ghislaine Boddington eventbrite page,

This year’s CCCI Public Lecture will be given by Ghislaine Boddington. Ghislaine is Creative Director of body>data>space and Reader in Digital Immersion at University of Greenwich. Ghislaine has worked at the intersection of the body, the digital, and spatial research for many years. This will be her first in-person appearance since the start of the pandemic, and she will share with us the many insights she has gathered during this extraordinary pivot to online interfaces much of the world has been forced to undertake.

With a background in performing arts and body technologies, Ghislaine is recognised as a pioneer in the exploration of digital intimacy, telepresence and virtual physical blending since the early 90s. As a curator, keynote speaker and radio presenter she has shared her outlook on the future human with the cultural, academic, creative industries and corporate sectors worldwide, examining topical issues with regards to personal data usage, connected bodies and collective embodiment. Her research-led practice, examining the evolution of the body as the interface, is presented under the heading ‘The Internet of Bodies’. Recent direction and curation outputs include “me and my shadow” (Royal National Theatre 2012), FutureFest 2015-18 and Collective Reality (Nesta’s FutureFest / SAT Montreal 2016/17). In 2017 Ghislaine was awarded the international IX Immersion Experience Visionary Pioneer Award. She recently co-founded the University of Greenwich Strategic Research Group ‘CLEI – Co-creating Liveness in Embodied Immersion’ and is an Associate Editor for AI & Society (Springer). Ghislaine is a long-term advocate for diversity and inclusion, working as a Trustee for Stemette Futures and Spokesperson for the Deutsche Bank ‘We in Social Tech’ initiative. She is a team member and presenter with the BBC World Service flagship radio show/podcast Digital Planet.

Date and time

Thu, 24 June 2021
08:00 – 09:00 [am] PDT

@GBoddington

@bodydataspace

@ConnectedBodies

Boddington’s paper is what ignited my interest; here’s a link to and a citation for it,

The Internet of Bodies—alive, connected and collective: the virtual physical future of our bodies and our senses by Ghislaine Boddington. AI Soc. 2021 Feb 8:1–17. DOI: 10.1007/s00146-020-01137-1 PMCID: PMC7868903 PMID: 33584018

Some excerpts from this open access paper,

The Weave—virtual physical presence design—blending processes for the future

Coming from a performing arts background, dance led, in 1989, I became obsessed with the idea that there must be a way for us to be able to create and collaborate in our groups, across time and space, whenever we were not able to be together physically. The focus of my work, as a director, curator and presenter across the last 30 years, has been on our physical bodies and our data selves and how they have, through the extended use of our bodies into digitally created environments, started to merge and converge, shifting our relationship and understanding of our identity and our selfhood.

One of the key methodologies that I have been using since the mid-1990s is inter-authored group creation, a process we called The Weave (Boddington 2013a, b). It uses the simple and universal metaphor of braiding, plaiting or weaving three strands of action and intent, these three strands being:

1. The live body—whether that of the performer, the participant, or the public;

2. The technologies of today—our tools of virtually physical reflection;

3. The content—the theme in exploration.

As with a braid or a plait, the three strands must be weaved simultaneously. What is key to this weave is that in any co-creation between the body and technology, the technology cannot work without the body; hence, there will always be virtual/physical blending. [emphasis mine]

Cyborgs

Cyborg culture is also moving forward at a pace with most countries having four or five cyborgs who have reached out into media status. Manel Munoz is the weather man as such, fascinated and affected by cyclones and anticyclones, his back of the head implant sent vibrations to different sides of his head linked to weather changes around him.

Neil Harbisson from Northern Ireland calls himself a trans-species rather than a cyborg, because his implant is permanently fused into the crown of his head. He is the first trans-species/cyborg to have his passport photo accepted as he exists with his fixed antenna. Neil has, from birth, an eye condition called greyscale, which means he only sees the world in grey and white. He uses his antennae camera to detect colour, and it sends a vibration with a different frequency for each colour viewed. He is learning what colours are within his viewpoint at any given time through the vibrations in his head, a synaesthetic method of transference of one sense for another. Moon Ribas, a Spanish choreographer and a dancer, had two implants placed into the top of her feet, set to sense seismic activity as it occurs worldwide. When a small earthquake occurs somewhere, she received small vibrations; a bigger eruption gives her body a more intense vibration. She dances as she receives and reacts to these transferred data. She feels a need to be closer to our earth, a part of nature (Harbisson et al. 2018).
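The paper doesn’t describe the exact encoding Harbisson’s antenna uses, but the general idea of sensory substitution (one sense standing in for another) is easy to sketch. The sketch below simply maps a colour’s hue angle onto a band of vibration frequencies; the 40–120 Hz range and the linear mapping are assumptions for illustration only.

```python
# Toy sketch of a sensory-substitution mapping: a colour's hue
# (0-360 degrees) is mapped linearly onto a hypothetical band of
# vibration frequencies, one distinct frequency per hue.

LOW_HZ, HIGH_HZ = 40.0, 120.0  # assumed perceptible vibration band

def hue_to_vibration_hz(hue_degrees: float) -> float:
    """Linearly map a hue angle to a vibration frequency."""
    hue = hue_degrees % 360.0  # wrap around the colour wheel
    return LOW_HZ + (hue / 360.0) * (HIGH_HZ - LOW_HZ)

# Red (0 degrees) sits at the bottom of the band; cyan (180) at the midpoint.
print(hue_to_vibration_hz(0))    # 40.0
print(hue_to_vibration_hz(180))  # 80.0
```

Ribas’s seismic implants work the same way in principle, with earthquake magnitude in place of hue and vibration intensity in place of frequency.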

Medical, non medical and sub-dermal implants

Medical implants, embedded into the body or subdermally (nearer the surface), have rapidly advanced in the last 30 years with extensive use of cardiac pacemakers, hip implants, implantable drug pumps and cochlear implants helping partial deaf people to hear.

Deep body and subdermal implants can be personalised to your own needs. They can be set to transmit chosen aspects of your body data outwards, but they also can receive and control data in return. There are about 200 medical implants in use today. Some are complex, like deep brain stimulation for motor neurone disease, and others we are more familiar with, for example, pacemakers. Most medical implants are not digitally linked to the outside world at present, but this is in rapid evolution.

Kevin Warwick, a pioneer in this area, has interconnected himself and his partner with implants for joint use of their personal and home computer systems through their BrainGate (Warwick 2008) implant, an interface between the nervous system and the technology. They are connected bodies. He works onwards with his experiments to feel the shape of distant objects and heat through fingertip implants.

‘Smart’ implants into the brain for deep brain stimulation are in use and in rapid advancement. The ethics of these developments is under constant debate in 2020 and will be onwards, as is proved by the mass coverage of the Neuralink, Elon Musk’s innovation which connects to the brain via wires, with the initial aim to cure human diseases such as dementia, depression and insomnia and onwards plans for potential treatment of paraplegia (Musk 2016).

Given how many times I’ve featured art/sci (also known as art/science and/or sciart), cyborgs, and medical implants here, my excitement was a given.

*ETA December 28,2021: Boddington’s lecture was posted here on July 27, 2021.*

For anyone who wants to pursue Boddington’s work further, her eponymous website is here, the body>data>space is here, and her University of Greenwich profile page is here.

For anyone interested in the Centre for Creative and Cultural Industries (CCCI), their site is here.

Finally, here’s one of my earliest pieces about cyborgs titled ‘My mother is a cyborg‘ from April 20, 2012 and my September 17, 2020 posting titled, ‘Turning brain-controlled wireless electronic prostheses into reality plus some ethical points‘. If you scroll down to the ‘Brain-computer interfaces, symbiosis, and ethical issues’ subhead, you’ll find some article excerpts about a fascinating qualitative study on implants and ethics.

Deus Ex, a video game developer, his art, and reality

The topics of human enhancement and human augmentation have been featured here a number of times from a number of vantage points, including that of a video game series with some thoughtful story lines known under the Deus Ex banner. (My August 18, 2011 posting, August 30, 2011 posting, and Sept. 1, 2016 posting are three which mention Deus Ex in the title, but there may be others where the game is noted in the posting.)

A March 19, 2021 posting by Timothy Geigner for Techdirt offers a fuller but still brief description of the games along with a surprising declaration (it’s too real) by the game’s creator (Note: Links have been removed),

The Deus Ex franchise has found its way onto Techdirt’s pages a couple of times in the past. If you’re not familiar with the series, it’s a cyberpunk-ish take on the near future with broad themes around human augmentation, and the weaving of broad and famous conspiracy theories. That perhaps makes it somewhat ironic that several of our posts dealing with the franchise have to do with mass media outlets getting confused into thinking its augmentation stories were real life, or the conspiracy theories that centered around leaks for the original game’s sequel were true. The conspiracy theories woven into the original Deus Ex storyline were of the grand variety: takeover of government by biomedical companies pushing a vaccine for a sickness it created, the illuminati, FEMA [US Federal Emergency Management Agency] takeovers, AI-driven surveillance of the public, etc.

And it’s the fact that such conspiracy-driven thinking today led Warren Spector, the creator of the series, to recently state that he probably wouldn’t have created the game today if given the chance. [See pull quote below]

Deus Ex was originally released in 2000 but took place in an alternate 2052 where many of the real world conspiracy theories have come true. The plot included references to vaccinations, black helicopters, FEMA, and ECHELON amongst others, some of which have connotations to real-life events. Spector said, “Interestingly, I’m not sure I’d make Deus Ex today. The conspiracy theories we wrote about are now part of the real world. I don’t want to support that.”

… I’d like to focus on how clearly this illustrates the artistic nature of video games. The desire, or not, to create certain kinds of art due to the reflection such art receives from the broader society is exactly the kind of thing artists operating in other artforms have to deal with. Art imitates life, yes, but in the case of speculative fiction like this, it appears that life can also imitate art. Spector notes that seeing what has happened in the world since Deus Ex was first released in 2000 has had a profound effect on him as an artist. [See pull quote below]

Earlier, Spector had commented on how he was “constantly amazed at how accurate our view of the world ended up being. Frankly it freaks me out a bit.” Some of the conspiracy theories that didn’t end up in the game were those surrounding Denver Airport because they were considered “too silly to include in the game.” These include theories about secret tunnels, connections to aliens and Nazi secret societies, and hidden messages within the airport’s artwork. Spector is now incredulous that they’re “something people actually believe.”

Geigner was writing about confusion between Deus Ex and reality as far back as an Oct. 18, 2013 posting, about a UK newspaper that mistook the game for real life,

… I bring you the British tabloid, The Sun, and their amazing story about an augmented mechanical eyeball that, if associated material is to be believed, allows you to see through walls, color-codes friends and enemies, and permits telescopic zoom. Here’s the reference from The Sun.

Oops. See, part of the reason that Sarif Industries’ cybernetic implants are still in their infancy is that the company doesn’t exist. Sarif Industries is a fictitious company from a cyberpunk video game, Deus Ex, set in a future Detroit. …

There’s more about Spector’s latest comments at a 2021 Game Developers Conference in a March 15, 2021 article by Riley MacLeod for Kotaku. There’s more about Warren Spector here. I always thought Deus Ex was developed by the Canadian company Eidos Montréal but, after reading the company’s Wikipedia entry, it seems I may have been only partially correct.

Getting back to Deus Ex being ‘too real’, it seems to me that the line between science fiction and reality is increasingly frayed.

BrainGate demonstrates a high-bandwidth wireless brain-computer interface (BCI)

I wrote about some brain-computer interface (BCI) work out of Stanford University (California, US) in a Sept. 17, 2020 posting (Turning brain-controlled wireless electronic prostheses into reality plus some ethical points); that work may have contributed to what is now the first demonstration of a wireless brain-computer interface for people with tetraplegia (also known as quadriplegia).

From an April 1, 2021 news item on ScienceDaily,

In an important step toward a fully implantable intracortical brain-computer interface system, BrainGate researchers demonstrated human use of a wireless transmitter capable of delivering high-bandwidth neural signals.

Brain-computer interfaces (BCIs) are an emerging assistive technology, enabling people with paralysis to type on computer screens or manipulate robotic prostheses just by thinking about moving their own bodies. For years, investigational BCIs used in clinical trials have required cables to connect the sensing array in the brain to computers that decode the signals and use them to drive external devices.

Now, for the first time, BrainGate clinical trial participants with tetraplegia have demonstrated use of an intracortical wireless BCI with an external wireless transmitter. The system is capable of transmitting brain signals at single-neuron resolution and in full broadband fidelity without physically tethering the user to a decoding system. The traditional cables are replaced by a small transmitter about 2 inches in its largest dimension and weighing a little over 1.5 ounces. The unit sits on top of a user’s head and connects to an electrode array within the brain’s motor cortex using the same port used by wired systems.

For a study published in IEEE Transactions on Biomedical Engineering, two clinical trial participants with paralysis used the BrainGate system with a wireless transmitter to point, click and type on a standard tablet computer. The study showed that the wireless system transmitted signals with virtually the same fidelity as wired systems, and participants achieved similar point-and-click accuracy and typing speeds.

A March 31, 2021 Brown University news release (also on EurekAlert but published April 1, 2021), which originated the news item, provides more detail,

“We’ve demonstrated that this wireless system is functionally equivalent to the wired systems that have been the gold standard in BCI performance for years,” said John Simeral, an assistant professor of engineering (research) at Brown University, a member of the BrainGate research consortium and the study’s lead author. “The signals are recorded and transmitted with appropriately similar fidelity, which means we can use the same decoding algorithms we used with wired equipment. The only difference is that people no longer need to be physically tethered to our equipment, which opens up new possibilities in terms of how the system can be used.”

The researchers say the study represents an early but important step toward a major objective in BCI research: a fully implantable intracortical system that aids in restoring independence for people who have lost the ability to move. While wireless devices with lower bandwidth have been reported previously, this is the first device to transmit the full spectrum of signals recorded by an intracortical sensor. That high-broadband wireless signal enables clinical research and basic human neuroscience that is much more difficult to perform with wired BCIs.

The new study demonstrated some of those new possibilities. The trial participants — a 35-year-old man and a 63-year-old man, both paralyzed by spinal cord injuries — were able to use the system in their homes, as opposed to the lab setting where most BCI research takes place. Unencumbered by cables, the participants were able to use the BCI continuously for up to 24 hours, giving the researchers long-duration data including while participants slept.

“We want to understand how neural signals evolve over time,” said Leigh Hochberg, an engineering professor at Brown, a researcher at Brown’s Carney Institute for Brain Science and leader of the BrainGate clinical trial. “With this system, we’re able to look at brain activity, at home, over long periods in a way that was nearly impossible before. This will help us to design decoding algorithms that provide for the seamless, intuitive, reliable restoration of communication and mobility for people with paralysis.”
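The news release doesn’t spell out the decoding algorithms, but decoders in this line of work commonly map recorded firing rates to cursor kinematics. The following is a minimal, purely illustrative linear decoder; the electrode counts, rates and weights are invented for the sketch, and real BrainGate decoders are far more sophisticated.

```python
# Minimal illustrative linear decoder: cursor velocity computed as a
# weighted sum of electrode firing rates. All numbers are hypothetical.

def decode_velocity(rates, weights_x, weights_y):
    """Map a vector of firing rates (spikes/s) to an (x, y) cursor velocity."""
    vx = sum(r * w for r, w in zip(rates, weights_x))
    vy = sum(r * w for r, w in zip(rates, weights_y))
    return vx, vy

# Two hypothetical electrodes: the first drives x, the second drives y.
rates = [10.0, 5.0]              # firing rates in spikes/second
wx, wy = [0.2, 0.0], [0.0, 0.4]  # decoder weights, fitted in practice
print(decode_velocity(rates, wx, wy))  # (2.0, 2.0)
```

The long-duration home recordings mentioned above matter precisely because weights like these drift over time and must be refitted or adapted.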

The device used in the study was first developed at Brown in the lab of Arto Nurmikko, a professor in Brown’s School of Engineering. Dubbed the Brown Wireless Device (BWD), it was designed to transmit high-fidelity signals while drawing minimal power. In the current study, two devices used together recorded neural signals at 48 megabits per second from 200 electrodes with a battery life of over 36 hours.
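The reported figures invite a back-of-the-envelope check: 48 megabits per second spread across 200 electrodes. The per-sample bit depth below is an assumption (12 bits is common for neural ADCs), used only to illustrate the per-electrode sampling rate such a link implies.

```python
# Back-of-the-envelope arithmetic on the reported wireless figures.
TOTAL_MBPS = 48        # total wireless bandwidth, from the news release
ELECTRODES = 200       # electrode count, from the news release
BITS_PER_SAMPLE = 12   # assumed ADC resolution (illustrative)

kbps_per_electrode = TOTAL_MBPS * 1_000 / ELECTRODES
samples_per_sec = kbps_per_electrode * 1_000 / BITS_PER_SAMPLE

print(kbps_per_electrode)  # 240.0 kilobits/s per electrode
print(samples_per_sec)     # 20000.0 samples/s per electrode
```

Tens of kilosamples per second per electrode is what “full broadband fidelity” amounts to; the lower-bandwidth wireless devices mentioned earlier transmit far less per channel.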

While the BWD has been used successfully for several years in basic neuroscience research, additional testing and regulatory permission were required prior to using the system in the BrainGate trial. Nurmikko says the step to human use marks a key moment in the development of BCI technology.

“I am privileged to be part of a team pushing the frontiers of brain-machine interfaces for human use,” Nurmikko said. “Importantly, the wireless technology described in our paper has helped us to gain crucial insight for the road ahead in pursuit of next generation of neurotechnologies, such as fully implanted high-density wireless electronic interfaces for the brain.”

The new study marks another significant advance by researchers with the BrainGate consortium, an interdisciplinary group of researchers from Brown, Stanford and Case Western Reserve universities, as well as the Providence Veterans Affairs Medical Center and Massachusetts General Hospital. In 2012, the team published landmark research in which clinical trial participants were able, for the first time, to operate multidimensional robotic prosthetics using a BCI. That work has been followed by a steady stream of refinements to the system, as well as new clinical breakthroughs that have enabled people to type on computers, use tablet apps and even move their own paralyzed limbs.

“The evolution of intracortical BCIs from requiring a wire cable to instead using a miniature wireless transmitter is a major step toward functional use of fully implanted, high-performance neural interfaces,” said study co-author Sharlene Flesher, who was a postdoctoral fellow at Stanford and is now a hardware engineer at Apple. “As the field heads toward reducing transmitted bandwidth while preserving the accuracy of assistive device control, this study may be one of few that captures the full breadth of cortical signals for extended periods of time, including during practical BCI use.”

The new wireless technology is already paying dividends in unexpected ways, the researchers say. Because participants are able to use the wireless device in their homes without a technician on hand to maintain the wired connection, the BrainGate team has been able to continue their work during the COVID-19 pandemic.

“In March 2020, it became clear that we would not be able to visit our research participants’ homes,” said Hochberg, who is also a critical care neurologist at Massachusetts General Hospital and director of the V.A. Rehabilitation Research and Development Center for Neurorestoration and Neurotechnology. “But by training caregivers how to establish the wireless connection, a trial participant was able to use the BCI without members of our team physically being there. So not only were we able to continue our research, this technology allowed us to continue with the full bandwidth and fidelity that we had before.”

Simeral noted that, “Multiple companies have wonderfully entered the BCI field, and some have already demonstrated human use of low-bandwidth wireless systems, including some that are fully implanted. In this report, we’re excited to have used a high-bandwidth wireless system that advances the scientific and clinical capabilities for future systems.”

Brown has a licensing agreement with Blackrock Microsystems to make the device available to neuroscience researchers around the world. The BrainGate team plans to continue to use the device in ongoing clinical trials.

Here’s a link to and a citation for the paper,

Home Use of a Percutaneous Wireless Intracortical Brain-Computer Interface by Individuals With Tetraplegia by John D Simeral, Thomas Hosman, Jad Saab, Sharlene N Flesher, Marco Vilela, Brian Franco, Jessica Kelemen, David M Brandman, John G Ciancibello, Paymon G Rezaii, Emad N. Eskandar, David M Rosler, Krishna V Shenoy, Jaimie M. Henderson, Arto V Nurmikko, Leigh R. Hochberg. IEEE Transactions on Biomedical Engineering, 2021; 1 DOI: 10.1109/TBME.2021.3069119 Date of Publication: 30 March 2021

This paper is open access.

If you don’t happen to be familiar with the IEEE, it’s the Institute of Electrical and Electronics Engineers. BrainGate can be found here, and Blackrock Microsystems can be found here.

The first story here to feature BrainGate was in a May 17, 2012 posting. (Unfortunately, the video featuring a participant picking up a cup of coffee is no longer embedded in the post.) There’s also an October 31, 2016 posting and an April 24, 2017 posting, both of which mention BrainGate. As for my Sept. 17, 2020 posting (Turning brain-controlled wireless electronic prostheses into reality plus some ethical points), you may want to look at those ethical points.

Toronto’s ArtSci Salon and The Mutant Project March 15, 2021

The Mutant Project is both a book (The Mutant Project: Inside the Global Race to Genetically Modify Humans) and an event about gene editing with special reference to the CRISPR (clustered regularly interspaced short palindromic repeats) twins, Lulu and Nana. The event is being held by Toronto’s ArtSci Salon. Here’s more from their March 3, 2021 announcement (received via email),

The Mutant Project

A talk and discussion with Eben Kirksey

Discussants:

Dr. Elizabeth Koester, Postdoctoral fellow, Department of History, UofT [University of Toronto]

Vincent Auffrey, PhD student, IHPST [Institute for the History and Philosophy of Science and Technology], UofT

Fan Zhang, PhD student, IHPST, UofT

This event will be streamed on Zoom and on Youtube

Here is the link to register to Zoom on the 15th:

https://utoronto.zoom.us/meeting/registe/tZErcemoqzwrG9foNF5Ud86uJXdNeIzQSfDw

Event on FB: https://www.facebook.com/events/4033393163381012/

YouTube: https://www.youtube.com/watch?v=pEFfj3Ovsfk&feature=youtu.be

At a conference in Hong Kong in November 2018, Dr. He Jiankui announced that he had created the first genetically modified babies—twin girls named Lulu and Nana—sending shockwaves around the world. A year later, a Chinese court sentenced Dr. He to three years in prison for “illegal medical practice.”

As scientists elsewhere start to catch up with China’s vast genetic research program, gene editing is fueling an innovation economy that threatens to widen racial and economic inequality. Fundamental questions about science, health, and social justice are at stake: Who gets access to gene editing technologies? As countries loosen regulations around the globe, from the U.S. to Indonesia, can we shape research agendas to promote an ethical and fair society?

Join us to welcome Dr. Kirksey, who will discuss key topics from his book “The Mutant Project”.

The talk will be followed by a Q&A

EBEN KIRKSEY is an American anthropologist who finished his latest book as a Member of the Institute for Advanced Study in Princeton, New Jersey. He has been published in Wired, The Atlantic, The Guardian and The Sunday Times. He is sought out as an expert on science in society by the Associated Press, The Wall Street Journal, The New York Times, Democracy Now, Time and the BBC, among other media outlets. He speaks widely at the world’s leading academic institutions including Oxford, Yale, Columbia, UCLA, and the International Summit of Human Genome Editing, plus music festivals, art exhibits, and community events. Professor Kirksey holds a long-term position at Deakin University in Melbourne, Australia. For more information, please visit https://eben-kirksey.space/.

Elizabeth Koester currently holds a SSHRC [Social Sciences and Humanities Research Council of Canada] Postdoctoral Fellowship in the Department of History at the University of Toronto. After practising law for many years, she undertook graduate studies in the history of medicine at the Institute for the History and Philosophy of Science and Technology at the University of Toronto and was awarded a PhD in 2018. A book based on her dissertation, In the Public Good: Eugenics and Law in Ontario, will be published by McGill-Queen’s University Press and is anticipated for Fall 2021.

Vincent Auffrey is pursuing his PhD at the Institute for the History and Philosophy of Science and Technology (IHPST) at the University of Toronto. His focus is set primarily on the social history of medicine and the history of eugenics in Canada. Secondary interests include the histories of scientific racism and of anatomy, and the interplay between knowledge and power.

Fan Zhang is a PhD student at the Institute for the History and Philosophy of Science and Technology (IHPST) at the University of Toronto.

Kirksey’s eponymous website can be found at https://eben-kirksey.space/.

Water and minerals have a nanoscale effect on bones

Courtesy: University of Arkansas

What a great image of bones! This December 3, 2020 University of Arkansas news release (also on EurekAlert) by Matt McGowan features exciting research focused on bone material. The date on the second study citation and link I have listed (at the end of this posting) suggests the more recent study may have been initially overlooked in the deluge of COVID-19 research we are experiencing,

University of Arkansas researchers Marco Fielder and Arun Nair have conducted the first study of the combined nanoscale effects of water and mineral content on the deformation mechanisms and thermal properties of collagen, the essence of bone material.

The researchers also compared the results to the same properties of non-mineralized collagen reinforced with carbon nanotubes, which have shown promise as a reinforcing material for bio-composites. This research aids in the development of synthetic materials to mimic bone.

Using molecular dynamics — in this case a computer simulation of the physical movements of atoms and molecules — Nair and Fielder examined the mechanics and thermal properties of collagen-based bio-composites containing different weight percentages of minerals, water and carbon nanotubes when subjected to external loads.
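The study’s molecular dynamics runs are vastly more complex than anything shown here, but the core of any MD code is a simple integration loop. A toy velocity-Verlet integrator for a single particle on a harmonic “bond” (unit mass, unit stiffness, values chosen only for illustration) shows the “physical movements” being stepped forward in time.

```python
# Toy velocity-Verlet integrator, the standard MD time-stepping scheme.
def velocity_verlet(x, v, force, dt, steps):
    """Integrate one particle's motion under a force law force(x)."""
    a = force(x)
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt   # position update
        a_new = force(x)
        v += 0.5 * (a + a_new) * dt       # velocity update (averaged force)
        a = a_new
    return x, v

# Harmonic "bond": F = -x. Starting at x = 1 with zero velocity, the exact
# trajectory is x = cos(t), so after t ~ pi the particle should sit near -1.
x, v = velocity_verlet(1.0, 0.0, lambda x: -x, dt=0.01, steps=314)
print(round(x, 3))  # close to -1.0
```

Scale this loop up to millions of atoms with collagen, apatite and water force fields, and you have the kind of simulation Nair and Fielder ran.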

They found that variations of water and mineral content had a strong impact on the mechanical behavior and properties of the bio-composites, the structure of which mimics nanoscale bone composition. With increased hydration, the bio-composites became more vulnerable to stress. Additionally, Nair and Fielder found that the presence of carbon nanotubes in non-mineralized collagen reduced the deformation of the gap regions.

The researchers also tested stiffness, which is the standard measurement of a material’s resistance to deformation. Both mineralized and non-mineralized collagen bio-composites demonstrated less stability with greater water content. Composites with 40% mineralization were twice as strong as those without minerals, regardless of the amount of water content. Stiffness of composites with carbon nanotubes was comparable to that of the mineralized collagen.
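Stiffness in tests like these is the slope of the stress-strain curve (Young’s modulus). A least-squares slope through a handful of stress-strain points (the data values below are invented for illustration, not taken from the study) shows how that single number is extracted from a simulated loading run.

```python
# Stiffness as the least-squares slope of stress vs. strain.
def stiffness(strains, stresses):
    """Return the least-squares slope (Young's modulus) through the data."""
    n = len(strains)
    mean_e = sum(strains) / n
    mean_s = sum(stresses) / n
    num = sum((e - mean_e) * (s - mean_s) for e, s in zip(strains, stresses))
    den = sum((e - mean_e) ** 2 for e in strains)
    return num / den

# A perfectly linear, hypothetical response with modulus 2.0 GPa:
strains = [0.00, 0.01, 0.02, 0.03]   # dimensionless strain
stresses = [0.00, 0.02, 0.04, 0.06]  # GPa
print(round(stiffness(strains, stresses), 6))  # 2.0
```

The study’s comparisons (mineralized vs. non-mineralized, wet vs. dry, with and without nanotubes) come down to how this slope shifts between compositions.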

“As the degree of mineralization or carbon nanotube content of the collagenous bio-composites increased, the effect of water to change the magnitude of deformation decreased,” Fielder said.

The bio-composites made of collagen and carbon nanotubes were also found to have a higher specific heat than the studied mineralized collagen bio-composites, making them more likely to be resistant to thermal damage that could occur during implantation or functional use of the composite. Like most biological materials, bone is hierarchical, with different structures at different length scales. At the microscale level, bone is made of collagen fibers, composed of smaller nanofibers called fibrils, which are a composite of collagen proteins, mineralized crystals called apatite and water. Collagen fibrils overlap each other in some areas and are separated by gaps in other areas.

“Though several studies have characterized the mechanics of fibrils, the effects of variation and distribution of water and mineral content in fibril gap and overlap regions are unexplored,” said Nair, who is an associate professor of mechanical engineering. “Exploring these regions builds an understanding of the structure of bone, which is important for uncovering its material properties. If we understand these properties, we can design and build better bio-inspired materials and bio-composites.”

Here are links and citations for both papers mentioned in the news release,

Effects of hydration and mineralization on the deformation mechanisms of collagen fibrils in bone at the nanoscale by Marco Fielder & Arun K. Nair. Biomechanics and Modeling in Mechanobiology volume 18, pages 57–68 (2019). DOI: https://doi.org/10.1007/s10237-018-1067-y First published: 07 August 2018 Issue Date: 15 February 2019

This paper is behind a paywall.

A computational study of mechanical properties of collagen-based bio-composites by Marco Fielder & Arun K. Nair. International Biomechanics Volume 7, 2020 – Issue 1 Pages 76-87 DOI: https://doi.org/10.1080/23335432.2020.1812428 Published online: 02 Sep 2020

This paper is open access.

A fully implantable wireless medical device for patients with severe paralysis

Only two people in Australia have tested the device so far, but the research raises hope. From an Oct. 28, 2020 news item on ScienceDaily,

A tiny device the size of a small paperclip has been shown to help patients with upper limb paralysis to text, email and even shop online in the first human trial.

The device, Stentrode™, has been implanted successfully in two patients, who both suffer from severe paralysis due to amyotrophic lateral sclerosis (ALS) — also known as motor neuron disease (MND) — and neither had the ability to move their upper limbs.

Published in the Journal of NeuroInterventional Surgery, the results found the Stentrode™ was able to wirelessly restore the transmission of brain impulses out of the body. This enabled the patients to successfully complete daily tasks such as online banking, shopping and texting, which previously had not been available to them.

An Oct. 28, 2020 University of Melbourne press release (also on EurekAlert), which originated the news item, fills in some of the detail,

The Royal Melbourne Hospital’s Professor Peter Mitchell, Neurointervention Service Director and principal investigator on the trial, said the findings were promising and demonstrate the device can be safely implanted and used within the patients.

“This is the first time an operation of this kind has been done, so we couldn’t guarantee there wouldn’t be problems, but in both cases the surgery has gone better than we had hoped,” Professor Mitchell said.

Professor Mitchell implanted the device on the study participants through their blood vessels, next to the brain’s motor cortex, in a procedure involving a small ‘keyhole’ incision in the neck.

“The procedure isn’t easy, in each surgery there were differences depending on the patient’s anatomy, however in both cases the patients were able to leave the hospital only a few days later, which also demonstrates the quick recovery from the surgery,” Professor Mitchell said.

Neurointerventionalist and CEO of Synchron – the research commercial partner – Associate Professor Thomas Oxley, said this was a breakthrough moment for the field of brain-computer interfaces.

“We are excited to report that we have delivered a fully implantable, take home, wireless technology that does not require open brain surgery, which functions to restore freedoms for people with severe disability,” Associate Professor Oxley, who is also co-head of the Vascular Bionics Laboratory at the University of Melbourne, said.

The two patients used the Stentrode™ to control the computer-based operating system, in combination with an eye-tracker for cursor navigation. This meant they did not need a mouse or keyboard.

They also undertook machine learning-assisted training to control multiple mouse click actions, including zoom and left click. The first two patients achieved an average click accuracy of 92 per cent and 93 per cent, respectively, and typing speeds of 14 and 20 characters per minute with predictive text disabled.
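A standard way to benchmark selection figures like these is the Wolpaw information-transfer rate, which converts selection accuracy into bits per selection. The press release doesn’t report ITR; the sketch below reuses its 93 per cent accuracy figure with an assumed two-choice (click/no-click) task purely for illustration.

```python
import math

# Wolpaw ITR: bits conveyed per selection, given N choices and accuracy P.
def bits_per_selection(n_choices: int, accuracy: float) -> float:
    p = accuracy
    b = math.log2(n_choices) + p * math.log2(p)
    if p < 1.0:  # guard against log(0) at perfect accuracy
        b += (1 - p) * math.log2((1 - p) / (n_choices - 1))
    return b

# Hypothetical two-choice task at the reported 93% accuracy:
print(round(bits_per_selection(2, 0.93), 3))  # ~0.634 bits per selection
```

Multiplying bits per selection by selections per minute gives a throughput figure that lets systems with different interfaces, such as this one and the BrainGate tablet work above, be compared on a common scale.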

University of Melbourne Associate Professor Nicholas Opie, co-head of the Vascular Bionics Laboratory at the University and founding chief technology officer of Synchron said the developments were exciting and the patients involved had a level of freedom restored in their lives.

“Observing the participants use the system to communicate and control a computer with their minds, independently and at home, is truly amazing,” Associate Professor Opie said.

“We are thankful to work with such fantastic participants, and my colleagues and I are honoured to make a difference in their lives. I hope others are inspired by their success.

“Over the last eight years we have drawn on some of the world’s leading medical and engineering minds to create an implant that enables people with paralysis to control external equipment with the power of thought. We are pleased to report that we have achieved this.”

The researchers caution that while it is some years before the technology, capable of returning independence to complete everyday tasks, is publicly available, the global, multidisciplinary team is working tirelessly to make this a reality.

The trial recently received a $AU1.48 million grant from the Australian commonwealth government to expand the trial to hospitals in New South Wales and Queensland, with hopes to enrol more patients.

###

About Stentrode™

Stentrode™ was developed by researchers from the University of Melbourne, the Royal Melbourne Hospital, the Florey Institute of Neuroscience and Mental Health, Monash University and the company Synchron Australia – the corporate vehicle established by Associate Professors Thomas Oxley (CEO) and Nicholas Opie (CTO) that aims to develop and commercialise neural bionics technology and products. It draws on some of the world’s leading medical and engineering minds.

There’s a little more detail and information in an Oct. 28, 2020 Society of NeuroInterventional Surgery news release on EurekAlert,

Researchers demonstrated the success of a fully implantable wireless medical device, the Stentrode™ brain-computer interface (BCI), designed to allow patients with severe paralysis to resume daily tasks — including texting, emailing, shopping and banking online — without the need for open brain surgery. The first-in-human study was published in the Journal of NeuroInterventional Surgery™, the leading international peer-reviewed journal for the clinical field of neurointerventional surgery.

The patients enrolled in the study utilized the Stentrode neuroprosthesis to control the Microsoft Windows 10 operating system in combination with an eye-tracker for cursor navigation, without a mouse or keyboard. The subjects undertook machine learning-assisted training to control multiple mouse-click actions, including zoom and left click.
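Neither press release explains how the “machine learning-assisted training” actually maps brain signals to click actions. As a loose, purely illustrative sketch (not the Stentrode team’s method; the feature values, action labels, and helper names below are all hypothetical), one of the simplest approaches is a nearest-centroid classifier: average the feature vectors recorded while a patient attempts each action during training, then label a new feature vector with whichever action’s average it sits closest to.

```python
# Toy sketch of ML-assisted click decoding (hypothetical; the actual
# Stentrode signal processing is not described in this post).
# A nearest-centroid classifier maps a feature vector extracted from
# neural activity to one of several mouse-click actions.

from statistics import mean

def train_centroids(samples):
    """samples: dict mapping action name -> list of feature vectors.
    Returns dict mapping action name -> centroid (component-wise mean)."""
    return {action: [mean(dim) for dim in zip(*vectors)]
            for action, vectors in samples.items()}

def classify(centroids, features):
    """Return the action whose centroid is closest (squared Euclidean
    distance) to the given feature vector."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda action: dist2(centroids[action], features))

# Simulated training data: 2-D "band power" features per attempted action.
training = {
    "left_click": [[1.0, 0.1], [0.9, 0.2], [1.1, 0.0]],
    "zoom":       [[0.1, 1.0], [0.2, 0.9], [0.0, 1.1]],
    "no_action":  [[0.1, 0.1], [0.0, 0.2], [0.2, 0.0]],
}

centroids = train_centroids(training)
print(classify(centroids, [0.95, 0.15]))  # left_click
```

In this setup the eye-tracker handles cursor position, so the decoder only has to distinguish a small, discrete set of intents, which is part of why even very simple classifiers can reach the reported accuracy levels with per-patient training.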

“This is a breakthrough moment for the field of brain-computer interfaces. We are excited to report that we have delivered a fully implantable, take home, wireless technology that does not require open brain surgery, which functions to restore freedoms for people with severe disability,” said Thomas Oxley, MD, PhD, and CEO of Synchron, a neurovascular bioelectronics medicine company that conducted the research. “Seeing these first heroic patients resume important daily tasks that had become impossible, such as using personal devices to connect with loved ones, confirms our belief that the Stentrode will one day be able to help millions of people with paralysis.”

Graham Felstead, a 75-year-old man living at home with his wife, has experienced severe paralysis due to amyotrophic lateral sclerosis (ALS). He was the first patient enrolled in the first Stentrode clinical study and the first person to have any BCI implanted via the blood vessels. He received the Stentrode implant in August 2019. With the Stentrode, Felstead was able to remotely contact his spouse, increasing his autonomy and reducing her burden of care. Philip O’Keefe, a 60-year-old man with ALS who works part time, was able to control computer devices to conduct work-related tasks and other independent activities after receiving the Stentrode in April 2020. Functional impairment to his fingers, elbows and shoulders had previously inhibited his ability to engage in these efforts.

The Stentrode device is small and flexible enough to safely pass through curving blood vessels, so the implantation procedure is similar to that of a pacemaker and does not require open brain surgery. Entry through the blood vessels may reduce risk of brain tissue inflammation and rejection of the device, which has been an issue for techniques that require direct brain penetration. Implantation is conducted using well-established neurointerventional techniques that do not require any novel automated robotic assistance.

Here’s a link to and a citation for the paper,

Motor neuroprosthesis implanted with neurointerventional surgery improves capacity for activities of daily living tasks in severe paralysis: first in-human experience by Thomas J Oxley, Peter E Yoo, Gil S Rind, Stephen M Ronayne, C M Sarah Lee, Christin Bird, Victoria Hampshire, Rahul P Sharma, Andrew Morokoff, Daryl L Williams, Christopher MacIsaac, Mark E Howard, Lou Irving, Ivan Vrljic, Cameron Williams, Sam E John, Frank Weissenborn, Madeleine Dazenko, Anna H Balabanski, David Friedenberg, Anthony N Burkitt, Yan T Wong, Katharine J Drummond, Patricia Desmond, Douglas Weber, Timothy Denison, Leigh R Hochberg, Susan Mathers, Terence J O’Brien, Clive N May, J Mocco, David B Grayden, Bruce C V Campbell, Peter Mitchell, Nicholas L Opie. Journal of Neurointerventional Surgery, DOI: http://dx.doi.org/10.1136/neurintsurg-2020-016862 Published Online First: 28 October 2020

This paper is open access.