
Restoring words with a neuroprosthesis

There seems to have been an update to the script for the voiceover. You’ll find it at the 1 min. 30 secs. mark (spoken: “with up to 93% accuracy at 18 words per minute” vs. written: “with median 74% accuracy at 15 words per minute”).

A July 14, 2021 news item on ScienceDaily announces the latest work on a neuroprosthetic from the University of California at San Francisco (UCSF),

Researchers at UC San Francisco have successfully developed a “speech neuroprosthesis” that has enabled a man with severe paralysis to communicate in sentences, translating signals from his brain to the vocal tract directly into words that appear as text on a screen.

The achievement, which was developed in collaboration with the first participant of a clinical research trial, builds on more than a decade of effort by UCSF neurosurgeon Edward Chang, MD, to develop a technology that allows people with paralysis to communicate even if they are unable to speak on their own. The study appears July 15 [2021] in the New England Journal of Medicine.

A July 14, 2021 UCSF news release (also on EurekAlert), which originated the news item, delves further into the topic,

“To our knowledge, this is the first successful demonstration of direct decoding of full words from the brain activity of someone who is paralyzed and cannot speak,” said Chang, the Joan and Sanford Weill Chair of Neurological Surgery at UCSF, Jeanne Robertson Distinguished Professor, and senior author on the study. “It shows strong promise to restore communication by tapping into the brain’s natural speech machinery.”

Each year, thousands of people lose the ability to speak due to stroke, accident, or disease. With further development, the approach described in this study could one day enable these people to fully communicate.

Translating Brain Signals into Speech

Previously, work in the field of communication neuroprosthetics has focused on restoring communication through spelling-based approaches to type out letters one-by-one in text. Chang’s study differs from these efforts in a critical way: his team is translating signals intended to control muscles of the vocal system for speaking words, rather than signals to move the arm or hand to enable typing. Chang said this approach taps into the natural and fluid aspects of speech and promises more rapid and organic communication.

“With speech, we normally communicate information at a very high rate, up to 150 or 200 words per minute,” he said, noting that spelling-based approaches using typing, writing, and controlling a cursor are considerably slower and more laborious. “Going straight to words, as we’re doing here, has great advantages because it’s closer to how we normally speak.”

Over the past decade, Chang’s progress toward this goal was facilitated by patients at the UCSF Epilepsy Center who were undergoing neurosurgery to pinpoint the origins of their seizures using electrode arrays placed on the surface of their brains. These patients, all of whom had normal speech, volunteered to have their brain recordings analyzed for speech-related activity. Early success with these patient volunteers paved the way for the current trial in people with paralysis.

Previously, Chang and colleagues in the UCSF Weill Institute for Neurosciences mapped the cortical activity patterns associated with vocal tract movements that produce each consonant and vowel. To translate those findings into speech recognition of full words, David Moses, PhD, a postdoctoral engineer in the Chang lab and lead author of the new study, developed new methods for real-time decoding of those patterns, as well as incorporating statistical language models to improve accuracy.

But their success in decoding speech in participants who were able to speak didn’t guarantee that the technology would work in a person whose vocal tract is paralyzed. “Our models needed to learn the mapping between complex brain activity patterns and intended speech,” said Moses. “That poses a major challenge when the participant can’t speak.”

In addition, the team didn’t know whether brain signals controlling the vocal tract would still be intact for people who haven’t been able to move their vocal muscles for many years. “The best way to find out whether this could work was to try it,” said Moses.

The First 50 Words

To investigate the potential of this technology in patients with paralysis, Chang partnered with colleague Karunesh Ganguly, MD, PhD, an associate professor of neurology, to launch a study known as “BRAVO” (Brain-Computer Interface Restoration of Arm and Voice). The first participant in the trial is a man in his late 30s who suffered a devastating brainstem stroke more than 15 years ago that severely damaged the connection between his brain and his vocal tract and limbs. Since his injury, he has had extremely limited head, neck, and limb movements, and communicates by using a pointer attached to a baseball cap to poke letters on a screen.

The participant, who asked to be referred to as BRAVO1, worked with the researchers to create a 50-word vocabulary that Chang’s team could recognize from brain activity using advanced computer algorithms. The vocabulary – which includes words such as “water,” “family,” and “good” – was sufficient to create hundreds of sentences expressing concepts applicable to BRAVO1’s daily life.

For the study, Chang surgically implanted a high-density electrode array over BRAVO1’s speech motor cortex. After the participant’s full recovery, his team recorded 22 hours of neural activity in this brain region over 48 sessions spanning several months. In each session, BRAVO1 attempted to say each of the 50 vocabulary words many times while the electrodes recorded brain signals from his speech cortex.

Translating Attempted Speech into Text

To translate the patterns of recorded neural activity into specific intended words, Moses’s two co-lead authors, Sean Metzger and Jessie Liu, both bioengineering graduate students in the Chang Lab, used custom neural network models, which are forms of artificial intelligence. When the participant attempted to speak, these networks distinguished subtle patterns in brain activity to detect speech attempts and identify which words he was trying to say.
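As a loose sketch of the detection half of that task, a speech attempt might be flagged wherever averaged neural activity stays above a threshold for a sustained run. Everything below is invented for illustration (the threshold, the run length, and the idea of working from a single averaged activity trace); it is not the study’s actual model:

```python
def detect_attempts(activity, threshold=2.0, min_len=3):
    """Return (start, end) index pairs of sustained above-threshold runs.

    activity: per-timestep mean neural-activity values (e.g. z-scores);
    a run of at least `min_len` consecutive samples at or above
    `threshold` is treated as one speech attempt. Illustrative only.
    """
    attempts, start = [], None
    for i, v in enumerate(activity):
        if v >= threshold and start is None:
            start = i  # a candidate attempt begins
        elif v < threshold and start is not None:
            if i - start >= min_len:  # keep only sustained runs
                attempts.append((start, i))
            start = None
    # close out a run that reaches the end of the recording
    if start is not None and len(activity) - start >= min_len:
        attempts.append((start, len(activity)))
    return attempts
```

In a real system the detected windows would then be passed to a word classifier; here the function only marks where attempts occur.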

To test their approach, the team first presented BRAVO1 with short sentences constructed from the 50 vocabulary words and asked him to try saying them several times. As he made his attempts, the words were decoded from his brain activity, one by one, on a screen.

Then the team switched to prompting him with questions such as “How are you today?” and “Would you like some water?” As before, BRAVO1’s attempted speech appeared on the screen: “I am very good,” and “No, I am not thirsty.”

Chang and Moses found that the system was able to decode words from brain activity at a rate of up to 18 words per minute with up to 93 percent accuracy (75 percent median). Contributing to the success was a language model Moses applied that implemented an “auto-correct” function, similar to what is used by consumer texting and speech recognition software.
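The idea of a classifier plus a language-model “auto-correct” can be sketched as a Viterbi decode that combines per-word classifier probabilities with a bigram prior. The vocabulary, probabilities, and bigram table here are all made up for the example and bear no relation to the study’s actual system:

```python
import math

def viterbi_decode(emissions, bigram):
    """Choose the word sequence maximizing classifier probability times
    a bigram language-model prior (illustrative toy model only).

    emissions: one dict per attempted word, mapping candidate words to
    classifier probabilities. bigram: dict mapping (prev, word) pairs
    to transition probabilities; unseen pairs get a tiny floor value.
    """
    # best maps each word to (log-prob of best path ending there, path)
    best = {w: (math.log(p), [w]) for w, p in emissions[0].items()}
    for frame in emissions[1:]:
        new_best = {}
        for w, p in frame.items():
            new_best[w] = max(
                (lp + math.log(bigram.get((prev, w), 1e-6)) + math.log(p),
                 path + [w])
                for prev, (lp, path) in best.items()
            )
        best = new_best
    return max(best.values())[1]
```

This is how an “auto-correct” prior can override a noisy classifier: even if two candidate words score similarly, the sequence the language model considers more plausible wins.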

Moses characterized the early trial results as a proof of principle. “We were thrilled to see the accurate decoding of a variety of meaningful sentences,” he said. “We’ve shown that it is actually possible to facilitate communication in this way and that it has potential for use in conversational settings.”

Looking forward, Chang and Moses said they will expand the trial to include more participants affected by severe paralysis and communication deficits. The team is currently working to increase the number of words in the available vocabulary, as well as improve the rate of speech.

Both said that while the study focused on a single participant and a limited vocabulary, those limitations don’t diminish the accomplishment. “This is an important technological milestone for a person who cannot communicate naturally,” said Moses, “and it demonstrates the potential for this approach to give a voice to people with severe paralysis and speech loss.”

… all of UCSF. Funding sources [emphasis mine] included the National Institutes of Health (U01 NS098971-01), philanthropy, and a sponsored research agreement with Facebook Reality Labs (FRL), [emphasis mine] which was completed in early 2021.

UCSF researchers conducted all clinical trial design, execution, data analysis and reporting. Research participant data were collected solely by UCSF, are held confidentially, and are not shared with third parties. FRL provided high-level feedback and machine learning advice.

Here’s a link to and a citation for the paper,

Neuroprosthesis for Decoding Speech in a Paralyzed Person with Anarthria by David A. Moses, Ph.D., Sean L. Metzger, M.S., Jessie R. Liu, B.S., Gopala K. Anumanchipalli, Ph.D., Joseph G. Makin, Ph.D., Pengfei F. Sun, Ph.D., Josh Chartier, Ph.D., Maximilian E. Dougherty, B.A., Patricia M. Liu, M.A., Gary M. Abrams, M.D., Adelyn Tu-Chan, D.O., Karunesh Ganguly, M.D., Ph.D., and Edward F. Chang, M.D. N Engl J Med 2021; 385:217-227 DOI: 10.1056/NEJMoa2027540 Published July 15, 2021

This paper is mostly behind a paywall but you do have this option: “Create your account to get 2 free subscriber-only articles each month.”

*Sept. 4, 2023 I have made a few minor corrections (a) removed an extra space (b) removed an extra ‘a’.

CRISPR-Cas12a as a new diagnostic tool

Similar to Cas9, Cas12a has an added feature, as noted in this February 15, 2018 news item on ScienceDaily,

Utilizing an unsuspected activity of the CRISPR-Cas12a protein, researchers created a simple diagnostic system called DETECTR to analyze cells, blood, saliva, urine and stool to detect genetic mutations, cancer and antibiotic resistance and also diagnose bacterial and viral infections. The scientists discovered that when Cas12a binds its double-stranded DNA target, it indiscriminately chews up all single-stranded DNA. They then created reporter molecules attached to single-stranded DNA to signal when Cas12a finds its target.

A February 15, 2018 University of California at Berkeley (UC Berkeley) news release by Robert Sanders, which originated the news item, provides more detail and history,

CRISPR-Cas12a, one of the DNA-cutting proteins revolutionizing biology today, has an unexpected side effect that makes it an ideal enzyme for simple, rapid and accurate disease diagnostics.

[Image: blood in a test tube (iStock)]

Cas12a, discovered in 2015 and originally called Cpf1, is like the well-known Cas9 protein that UC Berkeley’s Jennifer Doudna and colleague Emmanuelle Charpentier turned into a powerful gene-editing tool in 2012.

CRISPR-Cas9 has supercharged biological research in a mere six years, speeding up exploration of the causes of disease and sparking many potential new therapies. Cas12a was a major addition to the gene-cutting toolbox, able to cut double-stranded DNA at places that Cas9 can’t, and, because it leaves ragged edges, perhaps easier to use when inserting a new gene at the DNA cut.

But co-first authors Janice Chen, Enbo Ma and Lucas Harrington in Doudna’s lab discovered that when Cas12a binds and cuts a targeted double-stranded DNA sequence, it unexpectedly unleashes indiscriminate cutting of all single-stranded DNA in a test tube.

Most of the DNA in a cell is in the form of a double-stranded helix, so this is not necessarily a problem for gene-editing applications. But it does allow researchers to use a single-stranded “reporter” molecule with the CRISPR-Cas12a protein, which produces an unambiguous fluorescent signal when Cas12a has found its target.

“We continue to be fascinated by the functions of bacterial CRISPR systems and how mechanistic understanding leads to opportunities for new technologies,” said Doudna, a professor of molecular and cell biology and of chemistry and a Howard Hughes Medical Institute investigator.

DETECTR diagnostics

The new DETECTR system based on CRISPR-Cas12a can analyze cells, blood, saliva, urine and stool to detect genetic mutations, cancer and antibiotic resistance as well as diagnose bacterial and viral infections. Target DNA is amplified by recombinase polymerase amplification (RPA) to make it easier for Cas12a to find it and bind, unleashing indiscriminate cutting of single-stranded DNA, including DNA attached to a fluorescent marker (gold star) that tells researchers that Cas12a has found its target.

The UC Berkeley researchers, along with their colleagues at UC San Francisco, will publish their findings Feb. 15 [2018] via the journal Science’s fast-track service, First Release.

The researchers developed a diagnostic system they dubbed the DNA Endonuclease Targeted CRISPR Trans Reporter, or DETECTR, for quick and easy point-of-care detection of even small amounts of DNA in clinical samples. It involves adding all reagents in a single reaction: CRISPR-Cas12a and its RNA targeting sequence (guide RNA), fluorescent reporter molecule and an isothermal amplification system called recombinase polymerase amplification (RPA), which is similar to polymerase chain reaction (PCR). When warmed to body temperature, RPA rapidly multiplies the number of copies of the target DNA, boosting the chances Cas12a will find one of them, bind and unleash single-strand DNA cutting, resulting in a fluorescent readout.
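The readout step of an assay like this can be roughly illustrated as a threshold call on a fluorescence time series: a sample is positive when its signal rises well above its own baseline. The baseline window and fold threshold below are invented for the example and are not the paper’s actual analysis:

```python
def classify_fluorescence(readings, baseline_window=5, fold_threshold=3.0):
    """Call a sample positive if fluorescence rises well above baseline.

    readings: fluorescence values sampled over time (arbitrary units).
    The baseline is the mean of the first `baseline_window` readings;
    any reading at least `fold_threshold` times the baseline is treated
    as the reporter having been cleaved, i.e. target detected.
    """
    baseline = sum(readings[:baseline_window]) / baseline_window
    return max(readings) >= fold_threshold * max(baseline, 1e-9)
```

A sample whose trace stays flat never clears the fold threshold and is called negative; one whose reporter is cleaved shows a rising trace and is called positive.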

The UC Berkeley researchers tested this strategy using patient samples containing human papilloma virus (HPV), in collaboration with Joel Palefsky’s lab at UC San Francisco. Using DETECTR, they were able to demonstrate accurate detection of the “high-risk” HPV types 16 and 18 in samples infected with many different HPV types.

“This protein works as a robust tool to detect DNA from a variety of sources,” Chen said. “We want to push the limits of the technology, which is potentially applicable in any point-of-care diagnostic situation where there is a DNA component, including cancer and infectious disease.”

The indiscriminate cutting of all single-stranded DNA, which the researchers discovered holds true for all related Cas12 molecules but not for Cas9, may have unwanted effects in genome-editing applications, but more research is needed on this topic, Chen said. During the transcription of genes, for example, the cell briefly creates single strands of DNA that could accidentally be cut by Cas12a.

The activity of the Cas12 proteins is similar to that of another family of CRISPR enzymes, Cas13a, which chew up RNA after binding to a target RNA sequence. Various teams, including Doudna’s, are developing diagnostic tests using Cas13a that could, for example, detect the RNA genome of HIV.

[Infographic about the DETECTR system (Howard Hughes Medical Institute)]

These new tools have been repurposed from their original role in microbes where they serve as adaptive immune systems to fend off viral infections. In these bacteria, Cas proteins store records of past infections and use these “memories” to identify harmful DNA during infections. Cas12a, the protein used in this study, then cuts the invading DNA, saving the bacteria from being taken over by the virus.

The chance discovery of Cas12a’s unusual behavior highlights the importance of basic research, Chen said, since it came from a basic curiosity about the mechanism Cas12a uses to cleave double-stranded DNA.

“It’s cool that, by going after the question of the cleavage mechanism of this protein, we uncovered what we think is a very powerful technology useful in an array of applications,” Chen said.

Here’s a link to and a citation for the paper,

CRISPR-Cas12a target binding unleashes indiscriminate single-stranded DNase activity by Janice S. Chen, Enbo Ma, Lucas B. Harrington, Maria Da Costa, Xinran Tian, Joel M. Palefsky, Jennifer A. Doudna. Science 15 Feb 2018: eaar6245 DOI: 10.1126/science.aar6245

This paper is behind a paywall.