
Memristor-based brain-computer interfaces (BCIs)

Brief digression: For anyone unfamiliar with memristors, they are, for want of a better term, devices or circuit elements that retain a memory of the current that has previously passed through them, in addition to their resistive properties. (For more see: R Jagan Mohan Rao’s undated article ‘What is a Memristor? Principle, Advantages, Applications’ on InstrumentationTools.com)
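Since that definition is abstract, here is a toy illustration (my own, not from any of the research discussed below): the well-known ‘HP’ linear ion-drift model of a memristor, in which the device’s resistance depends on how much current has flowed through it before. All parameter values are illustrative.

```python
# Toy sketch of the HP linear ion-drift memristor model (illustrative only):
# the device's resistance depends on the charge that has previously flowed
# through it, i.e. it "remembers" its electrical history.

def simulate(voltage_fn, steps=10000, dt=1e-4):
    R_on, R_off = 100.0, 16e3   # fully doped / undoped resistances (ohms)
    D = 10e-9                   # device thickness (metres)
    mu = 1e-14                  # dopant mobility (m^2 s^-1 V^-1)
    w = 0.1 * D                 # width of the doped region (state variable)
    history = []
    for k in range(steps):
        M = R_on * (w / D) + R_off * (1 - w / D)  # instantaneous resistance
        v = voltage_fn(k * dt)
        i = v / M
        w += mu * (R_on / D) * i * dt             # state drifts with current
        w = min(max(w, 0.0), D)                   # keep within physical bounds
        history.append((v, i, M))
    return history

# A sustained positive bias drives the resistance down; the change would
# persist at zero bias, which is the "memory" in memristor.
trace = simulate(lambda t: 1.0)
print(trace[0][2], trace[-1][2])  # resistance falls as charge accumulates
```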

A March 27, 2025 news item on ScienceDaily announces a memristor-enhanced brain-computer interface (BCI),

Summary: Researchers have conducted groundbreaking research on memristor-based brain-computer interfaces (BCIs). This research presents an innovative approach for implementing energy-efficient adaptive neuromorphic decoders in BCIs that can effectively co-evolve [emphasis mine] with changing brain signals.

So, the decoder in the BCI will ‘co-evolve’ with your brain? Hmmm. Also, where is this ‘memristor chip’? The video demo (https://assets-eu.researchsquare.com/files/rs-3966063/v1/7a84dc7037b11bad96ae0378.mp4) shows a volunteer wearing a cap attached by cable to an intermediary device (an enlarged chip with a brain on it?), which is in turn attached to a screen. I believe some artistic licence has been taken with regard to the brain on the chip.

Caption: Researchers propose an adaptive neuromorphic decoder supporting brain-machine co-evolution. Credit: The University of Hong Kong

A March 25, 2025 University of Hong Kong (HKU) press release (also on EurekAlert but published on March 26, 2025), which originated the news item, explains more about memristors, BCIs, and co-evolution,

Professor Ngai Wong and Dr Zhengwu Liu from the Department of Electrical and Electronic Engineering at the Faculty of Engineering at the University of Hong Kong (HKU), in collaboration with research teams at Tsinghua University and Tianjin University, have conducted groundbreaking research on memristor-based brain-computer interfaces (BCIs). Published in Nature Electronics, this research presents an innovative approach for implementing energy-efficient adaptive neuromorphic decoders in BCIs that can effectively co-evolve with changing brain signals.

A brain-computer interface (BCI) is a computer-based system that creates a direct communication pathway between the brain and external devices, such as computers, allowing individuals to control these devices or applications purely through brain activity, bypassing the need for traditional muscle movements or the nervous system. This technology holds immense potential across a wide range of fields, from assistive technologies to neurological rehabilitation. However, traditional BCIs still face challenges.

“The brain is a complex dynamic system with signals that constantly evolve and fluctuate. This poses significant challenges for BCIs to maintain stable performance over time,” said Professor Wong and Dr Liu. “Additionally, as brain-machine links grow in complexity, traditional computing architectures struggle with real-time processing demands.”

The collaborative research addressed these challenges by developing a 128K-cell memristor chip that serves as an adaptive brain signal decoder. The team introduced a hardware-efficient one-step memristor decoding strategy that significantly reduces computational complexity while maintaining high accuracy. Dr Liu, a Research Assistant Professor in the Department of Electrical and Electronic Engineering at HKU, contributed as a co-first author to this groundbreaking work.

In real-world testing, the system demonstrated impressive capabilities in a four-degree-of-freedom drone flight control task, achieving 85.17% decoding accuracy—equivalent to software-based methods—while consuming 1,643 times less energy and offering 216 times higher normalised speed than conventional CPU-based systems.

Most significantly, the researchers developed an interactive update framework that enables the memristor decoder and brain signals to adapt to each other naturally. This co-evolution, demonstrated in experiments involving ten participants over six-hour sessions, resulted in approximately 20% higher accuracy compared to systems without co-evolution capability.

“Our work on optimising the computational models and error mitigation techniques was crucial to ensure that the theoretical advantages of memristor technology could be realised in practical BCI applications,” explained Dr Liu. “The one-step decoding approach we developed together significantly reduces both computational complexity and hardware costs, making the technology more accessible for a wide range of practical scenarios.”

Professor Wong further emphasised, “More importantly, our interactive updating framework enables co-evolution between the memristor decoder and brain signals, addressing the long-term stability issues faced by traditional BCIs. This co-evolution mechanism allows the system to adapt to natural changes in brain signals over time, greatly enhancing decoding stability and accuracy during prolonged use.”

Building on the success of this research, the team is now expanding their work through a new collaboration with HKU Li Ka Shing Faculty of Medicine and Queen Mary Hospital to develop a multimodal large language model for epilepsy data analysis.

“This new collaboration aims to extend our work on brain signal processing to the critical area of epilepsy diagnosis and treatment,” said Professor Wong and Dr Liu. “By combining our expertise in advanced algorithms and neuromorphic computing with clinical data and expertise, we hope to develop more accurate and efficient models to assist epilepsy patients.”

The research represents a significant step forward in human-centred hybrid intelligence, which combines biological brains with neuromorphic computing systems, opening new possibilities for medical applications, rehabilitation technologies, and human-machine interaction.

The project received support from the RGC Theme-based Research Scheme (TRS) project T45-701/22-R, the STI 2030-Major Projects, the National Natural Science Foundation of China, and the XPLORER Prize.

Here’s a link to and a citation for the paper,

A memristor-based adaptive neuromorphic decoder for brain–computer interfaces by Zhengwu Liu, Jie Mei, Jianshi Tang, Minpeng Xu, Bin Gao, Kun Wang, Sanchuang Ding, Qi Liu, Qi Qin, Weize Chen, Yue Xi, Yijun Li, Peng Yao, Han Zhao, Ngai Wong, He Qian, Bo Hong, Tzyy-Ping Jung, Dong Ming & Huaqiang Wu. Nature Electronics volume 8, pages 362–372 (2025). DOI: https://doi.org/10.1038/s41928-025-01340-2 Published online: 17 February 2025. Issue Date: April 2025

This paper is behind a paywall.

Words from the press release like “… human-centred hybrid intelligence, which combines biological brains with neuromorphic computing systems …” put me in mind of cyborgs.

IBM, the Cognitive Era, and carbon nanotube electronics

IBM has a storied position in the field of nanotechnology due to the scanning tunneling microscope developed in the company’s laboratories. It was a Nobel Prize-winning breakthrough that provided the impetus for applied nanotechnology research. Now, an Oct. 1, 2015 news item on Nanowerk trumpets another IBM breakthrough,

IBM Research today [Oct. 1, 2015] announced a major engineering breakthrough that could accelerate carbon nanotubes replacing silicon transistors to power future computing technologies.

IBM scientists demonstrated a new way to shrink transistor contacts without reducing performance of carbon nanotube devices, opening a pathway to dramatically faster, smaller and more powerful computer chips beyond the capabilities of traditional semiconductors.

While the Oct. 1, 2015 IBM news release, which originated the news item, does go on at length, there’s not much technical detail (see the second-to-last paragraph in the excerpt for the little they do include) about the research breakthrough (Note: Links have been removed),

IBM’s breakthrough overcomes a major hurdle that silicon and any semiconductor transistor technologies face when scaling down. In any transistor, two things scale: the channel and its two contacts. As devices become smaller, increased contact resistance for carbon nanotubes has hindered performance gains until now. These results could overcome contact resistance challenges all the way to the 1.8 nanometer node – four technology generations away. [emphasis mine]

Carbon nanotube chips could greatly improve the capabilities of high performance computers, enabling Big Data to be analyzed faster, increasing the power and battery life of mobile devices and the Internet of Things, and allowing cloud data centers to deliver services more efficiently and economically.

Silicon transistors, tiny switches that carry information on a chip, have been made smaller year after year, but they are approaching a point of physical limitation. With Moore’s Law running out of steam, shrinking the size of the transistor – including the channels and contacts – without compromising performance has been a vexing challenge troubling researchers for decades.

IBM has previously shown that carbon nanotube transistors can operate as excellent switches at channel dimensions of less than ten nanometers – the equivalent to 10,000 times thinner than a strand of human hair and less than half the size of today’s leading silicon technology. IBM’s new contact approach overcomes the other major hurdle in incorporating carbon nanotubes into semiconductor devices, which could result in smaller chips with greater performance and lower power consumption.

Earlier this summer, IBM unveiled the first 7 nanometer node silicon test chip [emphasis mine], pushing the limits of silicon technologies and ensuring further innovations for IBM Systems and the IT industry. By advancing research of carbon nanotubes to replace traditional silicon devices, IBM is paving the way for a post-silicon future and delivering on its $3 billion chip R&D investment announced in July 2014.

“These chip innovations are necessary to meet the emerging demands of cloud computing, Internet of Things and Big Data systems,” said Dario Gil, vice president of Science & Technology at IBM Research. “As silicon technology nears its physical limits, new materials, devices and circuit architectures must be ready to deliver the advanced technologies that will be required by the Cognitive Computing era. This breakthrough shows that computer chips made of carbon nanotubes will be able to power systems of the future sooner than the industry expected.”

A New Contact for Carbon Nanotubes

Carbon nanotubes represent a new class of semiconductor materials that consist of single atomic sheets of carbon rolled up into a tube. The carbon nanotubes form the core of a transistor device whose superior electrical properties promise several generations of technology scaling beyond the physical limits of silicon.

Electrons in carbon transistors can move more easily than in silicon-based devices, and the ultra-thin body of carbon nanotubes provides additional advantages at the atomic scale. Inside a chip, contacts are the valves that control the flow of electrons from metal into the channels of a semiconductor. As transistors shrink in size, electrical resistance increases within the contacts, which impedes performance. Until now, decreasing the size of the contacts on a device caused a commensurate drop in performance – a challenge facing both silicon and carbon nanotube transistor technologies.

IBM researchers had to forego traditional contact schemes and invented a metallurgical process akin to microscopic welding that chemically binds the metal atoms to the carbon atoms at the ends of nanotubes. This ‘end-bonded contact scheme’ allows the contacts to be shrunken down to below 10 nanometers without deteriorating performance of the carbon nanotube devices.

“For any advanced transistor technology, the increase in contact resistance due to the decrease in the size of transistors becomes a major performance bottleneck,” Gil added. “Our novel approach is to make the contact from the end of the carbon nanotube, which we show does not degrade device performance. This brings us a step closer to the goal of a carbon nanotube technology within the decade.”

Every once in a while, the size gets to me, and a 1.8 nm node is amazing. As for IBM’s 7 nm chip, which was previewed this summer, there’s more about that in my July 15, 2015 posting.

Here’s a link to and a citation for the IBM paper,

End-bonded contacts for carbon nanotube transistors with low, size-independent resistance by Qing Cao, Shu-Jen Han, Jerry Tersoff, Aaron D. Franklin, Yu Zhu, Zhen Zhang, George S. Tulevski, Jianshi Tang, and Wilfried Haensch. Science, 2 October 2015: Vol. 350, no. 6256, pp. 68–72. DOI: 10.1126/science.aac8006

This paper is behind a paywall.