Communicating thoughts by means of brain implants?

The Australian military announced mind-controlled robots in Spring 2023 (see my June 13, 2023 posting) and, recently, scientists at Duke University (North Carolina, US) have announced research that may allow people who are unable to speak to communicate their thoughts, from a November 6, 2023 news item on ScienceDaily,

A speech prosthetic developed by a collaborative team of Duke neuroscientists, neurosurgeons, and engineers can translate a person’s brain signals into what they’re trying to say.

Appearing Nov. 6 [2023] in the journal Nature Communications, the new technology might one day help people unable to talk due to neurological disorders regain the ability to communicate through a brain-computer interface.

One more plastic brain for this blog,

Caption: A device no bigger than a postage stamp (dotted portion within white band) packs 256 microscopic sensors that can translate brain cell activity into what someone intends to say. Credit: Dan Vahaba/Duke University

A November 6, 2023 Duke University news release (also on EurekAlert), which originated the news item, provides more detail, Note: Links have been removed,

“There are many patients who suffer from debilitating motor disorders, like ALS (amyotrophic lateral sclerosis) or locked-in syndrome, that can impair their ability to speak,” said Gregory Cogan, Ph.D., a professor of neurology at Duke University’s School of Medicine and one of the lead researchers involved in the project. “But the current tools available to allow them to communicate are generally very slow and cumbersome.”

Imagine listening to an audiobook at half-speed. That’s the best speech decoding rate currently available, which clocks in at about 78 words per minute. People, however, speak around 150 words per minute.

The lag between spoken and decoded speech rates is partially due to the relatively few brain activity sensors that can be fused onto a paper-thin piece of material that lies atop the surface of the brain. Fewer sensors provide less decipherable information to decode.

To improve on past limitations, Cogan teamed up with fellow Duke Institute for Brain Sciences faculty member Jonathan Viventi, Ph.D., whose biomedical engineering lab specializes in making high-density, ultra-thin, and flexible brain sensors.

For this project, Viventi and his team packed an impressive 256 microscopic brain sensors onto a postage stamp-sized piece of flexible, medical-grade plastic. Neurons just a grain of sand apart can have wildly different activity patterns when coordinating speech, so it’s necessary to distinguish signals from neighboring brain cells to help make accurate predictions about intended speech.

After fabricating the new implant, Cogan and Viventi teamed up with several Duke University Hospital neurosurgeons, including Derek Southwell, M.D., Ph.D., Nandan Lad, M.D., Ph.D., and Allan Friedman, M.D., who helped recruit four patients to test the implants. The experiment required the researchers to place the device temporarily in patients who were undergoing brain surgery for some other condition, such as treating Parkinson's disease or having a tumor removed. Time was limited for Cogan and his team to test-drive their device in the OR.

“I like to compare it to a NASCAR pit crew,” Cogan said. “We don’t want to add any extra time to the operating procedure, so we had to be in and out within 15 minutes. As soon as the surgeon and the medical team said ‘Go!’ we rushed into action and the patient performed the task.”

The task was a simple listen-and-repeat activity. Participants heard a series of nonsense words, like “ava,” “kug,” or “vip,” and then spoke each one aloud. The device recorded activity from each patient’s speech motor cortex as it coordinated nearly 100 muscles that move the lips, tongue, jaw, and larynx.

Afterwards, Suseendrakumar Duraivel, the first author of the new report and a biomedical engineering graduate student at Duke, took the neural and speech data from the surgery suite and fed it into a machine learning algorithm to see how accurately it could predict what sound was being made, based only on the brain activity recordings.
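The release doesn't say which algorithm Duraivel used, but the general shape of this kind of decoder is straightforward to sketch: treat each trial's channels-by-time window of brain activity as a feature vector and train a supervised classifier to predict the phoneme label. Here's a minimal, purely illustrative Python sketch with synthetic data standing in for the recordings; the array shapes, phoneme set, and linear classifier are my assumptions, not the team's method,

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical stand-in for the real recordings:
# 300 trials, 256 channels, 20 time samples per phoneme window.
n_trials, n_channels, n_samples = 300, 256, 20
phonemes = np.array(["a", "v", "k", "u", "g", "i", "p"])  # assumed label set

X = rng.normal(size=(n_trials, n_channels, n_samples))  # neural activity
y = rng.choice(phonemes, size=n_trials)                 # phoneme labels

# Flatten each trial's channel-by-time window into one feature vector.
X_flat = X.reshape(n_trials, -1)

# A simple linear decoder; the study's actual model is not specified here.
decoder = LogisticRegression(max_iter=1000)

# Cross-validated accuracy; with real data, this is where a figure like
# the reported 40% overall accuracy would come from.
scores = cross_val_score(decoder, X_flat, y, cv=5)
print(f"Mean decoding accuracy: {scores.mean():.2f}")
```

With random data the score hovers around chance (about 1 in 7 here); the point is the pipeline's shape, not the number.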

For some sounds and participants, like /g/ in the word “gak,” the decoder got it right 84% of the time when it was the first sound in a string of three that made up a given nonsense word.

Accuracy dropped, though, as the decoder parsed out sounds in the middle or at the end of a nonsense word. It also struggled if two sounds were similar, like /p/ and /b/.

Overall, the decoder was accurate 40% of the time. That may seem like a humble test score, but it was quite impressive given that similar brain-to-speech technical feats require hours' or days' worth of data to draw from. The speech decoding algorithm Duraivel used, however, was working with only 90 seconds of spoken data from the 15-minute test.
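The position effect described above is the sort of analysis that's easy to reproduce once you have trial-level predictions: group each prediction by where the sound fell in the three-phoneme nonsense word and compute accuracy per position. A small sketch with invented example data (the study's trial-level results aren't in the release),

```python
import pandas as pd

# Invented example predictions; only the analysis pattern is implied
# by the news release, not these values.
results = pd.DataFrame({
    "position": [1, 1, 1, 2, 2, 2, 3, 3, 3],  # place in the nonsense word
    "true":     ["g", "a", "k", "a", "u", "i", "k", "g", "p"],
    "pred":     ["g", "a", "g", "a", "i", "i", "p", "g", "b"],
})

results["correct"] = results["true"] == results["pred"]

# Accuracy by phoneme position -- the release reports it dropping
# for sounds in the middle or at the end of a word.
print(results.groupby("position")["correct"].mean())
```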

Duraivel and his mentors are excited about making a cordless version of the device with a recent $2.4M grant from the National Institutes of Health.

“We’re now developing the same kind of recording devices, but without any wires,” Cogan said. “You’d be able to move around, and you wouldn’t have to be tied to an electrical outlet, which is really exciting.”

While their work is encouraging, Viventi and Cogan's speech prosthetic still has a long way to go before it hits the shelves.

“We’re at the point where it’s still much slower than natural speech,” Viventi said in a recent Duke Magazine piece about the technology, “but you can see the trajectory where you might be able to get there.”

Here’s a link to and a citation for the paper,

High-resolution neural recordings improve the accuracy of speech decoding by Suseendrakumar Duraivel, Shervin Rahimpour, Chia-Han Chiang, Michael Trumpis, Charles Wang, Katrina Barth, Stephen C. Harward, Shivanand P. Lad, Allan H. Friedman, Derek G. Southwell, Saurabh R. Sinha, Jonathan Viventi & Gregory B. Cogan. Nature Communications volume 14, Article number: 6938 (2023) DOI: https://doi.org/10.1038/s41467-023-42555-1 Published: 06 November 2023

This paper is open access.

After pretending to be Marie Curie, girls stick with science

Researchers have found that pretending to be Marie Curie can lead girls to persist longer at a challenging science game. From a September 27, 2022 Duke University news release (also on EurekAlert but published on September 29, 2022) by Dan Vahaba,

Fake it ‘til you make it is true for children too, it turns out: Young girls embracing the role of a successful female scientist, like Marie Curie, persist longer at a challenging science game.

A new study, appearing Sept. 28 [2022] in the journal Psychological Science, suggests that science role-playing may help narrow the gender gap in science, technology, engineering, and math (STEM) education and careers for women simply by improving their identity as scientists.

Frustrated by the gender gap in STEM, in which some fields employ at least three times more men than women, Cornell graduate student Reut Shachnai wanted to do something about it. Shachnai, who is now continuing her studies at Yale, said the idea to help foster young girls’ interest in science came to her during a lecture in a class she was taking on “Psychology of Imagination.”

“We read a paper on how children pretending to be a superhero did better at self-control tasks (the so-called ‘Batman effect’),” said Tamar Kushnir, Ph.D., who taught the class and is now a Duke professor of psychology & neuroscience as well as a fellow author on the new paper. “Reut wondered if this would also work to encourage girls to persist in science.”

Along with Lin Bian, Ph.D., an assistant professor of psychology at the University of Chicago, Shachnai and Kushnir devised an experiment to test if assuming the role of a successful scientist would improve girls’ persistence in a “sink or float” science game.

The game itself was simple yet challenging: a computer screen projected a slide with an object in the center hovering above a pool of water. Kids then had to predict whether that object — be it an anchor, basketball, balloon, or some other object — would sink or float. After making their prediction, they learned whether they were right as they watched the object either plunge or stay afloat.
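For the curious, the physics behind the game fits in a few lines: an object floats when its average density is below that of water (about 1,000 kg/m³). Here's a short, illustrative Python sketch of a single trial; the density values are rough textbook figures and the scoring logic is my own reconstruction, not the researchers' code,

```python
WATER_DENSITY = 1000.0  # kg/m^3

# Rough average densities for a few of the game's objects (kg/m^3);
# these are assumed, approximate values.
objects = {
    "anchor": 7800.0,    # solid steel
    "basketball": 80.0,  # mostly air
    "balloon": 1.2,      # air-filled
}

def floats(density: float) -> bool:
    """An object floats when it is less dense than water."""
    return density < WATER_DENSITY

def play_trial(name: str, prediction: str) -> bool:
    """Return True if the child's 'sink'/'float' prediction was right."""
    outcome = "float" if floats(objects[name]) else "sink"
    print(f"The {name} will... {outcome}!")
    return prediction == outcome

# Example trial: a child correctly predicts the anchor sinks.
print(play_trial("anchor", "sink"))  # True
```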

The researchers recruited 240 four- to seven-year-olds for the experiment, because this is around the time kids first develop their sense of identity and capabilities.

“Children as early as age 6 start to think boys are smarter and better at science than girls,” said Bian, whose previous work identified this critical period.

Boys and girls were assigned to one of three groups: children in the baseline group were told they would be scientists for the day and then got to play the game.

Children in the “story” group received the same information, but also learned about the successes and struggles of a gender-matched scientist before playing the game. Boys heard about Isaac Newton, and girls were told about Marie Curie. They also had to take a two-question pop quiz after the story to make sure they were paying attention (they were).

Finally, children in the “pretend” group did all the same things as the “story” group, with one important twist: these children were told to assume the identity of the scientist they just learned about, and were referred to as such during the game (“What’s your prediction, Dr. Marie?”).

All kids played at least one round of the game, after which they were asked if they wanted to play more or do something else. Once the kids tapped out, they were asked to rate how good they thought they were at the game and as a scientist.

No matter what group they were in, girls got the answers right just as often as boys — nearly 70% of the time. Boys, however, didn't really benefit from the stories or make-believe.

“Boys were kind of maxed out,” Kushnir said. “They were about at ceiling performance no matter what we did.”

Girls, on the other hand, benefited immensely from playing pretend.

Without being exposed to Marie Curie, girls called it quits after six trials. However, girls pretending to be Dr. Marie persisted twice as long at the sink-or-float game, playing just as much as the boys did (about 12 trials on average).

While just hearing a story about Marie Curie didn't do much to extend game play, it did boost girls' ratings of themselves as science gamers.

Kushnir and her colleagues' work raises many new questions for researchers, such as whether children might also benefit from assuming the role of successful scientists matched by race and ethnicity (the participants in this study were mostly white).

“Our findings suggest that we may want to take representation one step further,” Shachnai said. “Rather than merely hearing about role models, children may benefit from actively performing the type of actions they see role models perform. In other words, taking a few steps in the role model’s shoes, instead of merely observing her walk.”

A screen grab from the game,

Caption: Participants played a sink-or-float game on the computer during the study. Credit: Reut Shachnai, Tamar Kushnir, and Lin Bian https://osf.io/qfjk9

Here’s a link to and a citation for the paper,

Walking In Her Shoes: Pretending To Be a Female Role Model Increases Young Girls’ Persistence in Science by Reut Shachnai, Tamar Kushnir, and Lin Bian. Psychological Science DOI: 10.1177/09567976221119393 First published online: Sept. 28, 2022

This paper is behind a paywall.