Wearable, noninvasive brain-computer interface system with AI co-pilot

A September 1, 2025 news item on Scienmag announces an advance for noninvasive brain-computer interfaces (BCIs)

UCLA engineers have achieved a remarkable breakthrough in the field of brain-computer interface (BCI) technology by developing a wearable, noninvasive system that employs artificial intelligence (AI) as a co-pilot. This innovative approach aims to decode user intentions and facilitate the operation of devices such as robotic arms or computer cursors, thereby enhancing the quality of life for individuals with limited physical capabilities. Preliminary results indicate that this novel AI-BCI system not only offers significant improvements in task completion speed but also has the potential to enable greater independence for people suffering from paralysis and other neurological conditions.

The study, which is set to be published in the highly esteemed journal Nature Machine Intelligence, offers insights into the unprecedented performance levels of noninvasive BCI systems. This marks a substantial advancement in a field that has historically relied on invasive surgical procedures to translate brain signals into actionable commands. UCLA’s approach aims to mitigate the risks and costs associated with such surgeries, providing a more accessible option for individuals with disabilities. In the long run, the researchers envision a future where AI-BCI systems are commonplace, allowing those with movement disorders to regain autonomy in their daily lives.

A September 1, 2025 University of California – Los Angeles news release (also on EurekAlert), which originated the news item and has a less exuberant tone, provides more detail. Note: A link has been removed,

The team developed custom algorithms to decode electroencephalography, or EEG — a method of recording the brain’s electrical activity — and extract signals that reflect movement intentions. They paired the decoded signals with a camera-based artificial intelligence platform that interprets user direction and intent in real time. The system allows individuals to complete tasks significantly faster than without AI assistance.
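The paper itself details the decoding and shared-autonomy algorithms; as a rough illustration of the general idea of pairing a decoded movement signal with an AI copilot, the sketch below blends an EEG-decoded cursor velocity with a correction that steers toward the target the copilot has inferred. The function name, the linear blending rule, and the `alpha` mixing weight are illustrative assumptions, not the study's actual method.

```python
import numpy as np

def copilot_blend(decoded_velocity, inferred_target, cursor_pos, alpha=0.5):
    """Blend an EEG-decoded velocity with a copilot correction.

    The copilot term points toward the target the AI believes the user
    intends; alpha in [0, 1] sets how much assistance is mixed in
    (0 = pure brain decoding, 1 = full copilot control).
    """
    decoded_velocity = np.asarray(decoded_velocity, dtype=float)
    to_target = np.asarray(inferred_target, dtype=float) - np.asarray(cursor_pos, dtype=float)
    dist = np.linalg.norm(to_target)
    if dist > 0:
        # Copilot moves at the user's decoded speed, but toward the inferred target.
        copilot_velocity = (to_target / dist) * np.linalg.norm(decoded_velocity)
    else:
        copilot_velocity = np.zeros_like(decoded_velocity)
    return (1.0 - alpha) * decoded_velocity + alpha * copilot_velocity
```

In this toy formulation, "shared autonomy" is just the choice of `alpha`: the user's decoded intent always contributes, while the copilot nudges the cursor along the shortest path to its best guess of the goal.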

“By using artificial intelligence to complement brain-computer interface systems, we’re aiming for much less risky and invasive avenues,” said study leader Jonathan Kao, an associate professor of electrical and computer engineering at the UCLA Samueli School of Engineering. “Ultimately, we want to develop AI-BCI systems that offer shared autonomy, allowing people with movement disorders, such as paralysis or ALS, to regain some independence for everyday tasks.”

State-of-the-art, surgically implanted BCI devices can translate brain signals into commands, but the benefits they currently offer are outweighed by the risks and costs associated with neurosurgery to implant them. More than two decades after they were first demonstrated, such devices are still limited to small pilot clinical trials. Meanwhile, wearable and other external BCIs have demonstrated a lower level of performance in detecting brain signals reliably. 

To address these limitations, the researchers tested their new noninvasive AI-assisted BCI with four participants — three without motor impairments and a fourth who was paralyzed from the waist down. Participants wore a head cap to record EEG, and the researchers used custom decoder algorithms to translate these brain signals into movements of a computer cursor and robotic arm. Simultaneously, an AI system with a built-in camera observed the decoded movements and helped participants complete two tasks.

In the first task, they were instructed to move a cursor on a computer screen to hit eight targets, holding the cursor in place at each for at least half a second. In the second challenge, participants were asked to activate a robotic arm to move four blocks on a table from their original spots to designated positions. 

All participants completed both tasks significantly faster with AI assistance. Notably, the paralyzed participant completed the robotic arm task in about six-and-a-half minutes with AI assistance, whereas without it, he was unable to complete the task.

The BCI deciphered electrical brain signals that encoded the participants’ intended actions. Using a computer vision system, the custom-built AI inferred the users’ intent — not their eye movements — to guide the cursor and position the blocks.
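The release doesn't specify how the vision system infers intent, but one common shared-autonomy heuristic is to compare the cursor's heading against the direction to each candidate target and pick the best match. The sketch below is a minimal, assumed version of that idea (cosine-similarity scoring over known target positions), not the authors' actual model.

```python
import numpy as np

def infer_intended_target(cursor_pos, cursor_velocity, targets):
    """Guess which candidate target the user intends.

    Scores each target by the cosine similarity between the cursor's
    current heading and the direction from the cursor to that target,
    returning the index of the best-aligned target (or None if the
    cursor is not moving).
    """
    cursor_pos = np.asarray(cursor_pos, dtype=float)
    v = np.asarray(cursor_velocity, dtype=float)
    speed = np.linalg.norm(v)
    if speed == 0:
        return None  # no movement yet, nothing to infer
    heading = v / speed
    best_idx, best_score = None, -np.inf
    for i, t in enumerate(targets):
        d = np.asarray(t, dtype=float) - cursor_pos
        dist = np.linalg.norm(d)
        if dist == 0:
            return i  # cursor is already on this target
        score = float(heading @ (d / dist))
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx
```

A real system would accumulate evidence over the whole trajectory rather than a single velocity sample, but even this one-step version shows how intent can be read from movement alone, without tracking the eyes.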

“Next steps for AI-BCI systems could include the development of more advanced co-pilots that move robotic arms with more speed and precision, and offer a deft touch that adapts to the object the user wants to grasp,” said co-lead author Johannes Lee, a UCLA electrical and computer engineering doctoral candidate advised by Kao. “And adding in larger-scale training data could also help the AI collaborate on more complex tasks, as well as improve EEG decoding itself.”

The paper’s authors are all members of Kao’s Neural Engineering and Computation Lab, including Sangjoon Lee, Abhishek Mishra, Xu Yan, Brandon McMahan, Brent Gaisford, Charles Kobashigawa, Mike Qu and Chang Xie. A member of the UCLA Brain Research Institute, Kao also holds faculty appointments in the Computer Science Department and the Interdepartmental Ph.D. Program in Neuroscience.

The research was funded by the National Institutes of Health and the Science Hub for Humanity and Artificial Intelligence, which is a collaboration between UCLA and Amazon. The UCLA Technology Development Group has applied for a patent related to the AI-BCI technology. 

Here’s a link to and a citation for the paper,

Brain–computer interface control with artificial intelligence copilots by Johannes Y. Lee, Sangjoon Lee, Abhishek Mishra, Xu Yan, Brandon McMahan, Brent Gaisford, Charles Kobashigawa, Mike Qu, Chang Xie & Jonathan C. Kao. Nature Machine Intelligence volume 7, pages 1510–1523 (2025). DOI: https://doi.org/10.1038/s42256-025-01090-y Published: 01 September 2025 Issue date: September 2025

This paper is behind a paywall.