Tag Archives: University of California at Los Angeles (UCLA)

Wearable, noninvasive brain-computer interface system with AI co-pilot

A September 1, 2025 news item on Scienmag announces an advance for noninvasive brain-computer interfaces (BCIs)

UCLA engineers have achieved a remarkable breakthrough in the field of brain-computer interface (BCI) technology by developing a wearable, noninvasive system that employs artificial intelligence (AI) as a co-pilot. This innovative approach aims to decode user intentions and facilitate the operation of devices such as robotic arms or computer cursors, thereby enhancing the quality of life for individuals with limited physical capabilities. Preliminary results indicate that this novel AI-BCI system not only offers significant improvements in task completion speed but also has the potential to enable greater independence for people suffering from paralysis and other neurological conditions.

The study, which is set to be published in the highly esteemed journal Nature Machine Intelligence, offers insights into the unprecedented performance levels of noninvasive BCI systems. This marks a substantial advancement in a field that has historically relied on invasive surgical procedures to translate brain signals into actionable commands. UCLA’s approach aims to mitigate the risks and costs associated with such surgeries, providing a more accessible option for individuals with disabilities. In the long run, the researchers envision a future where AI-BCI systems are commonplace, allowing those with movement disorders to regain autonomy in their daily lives.

A September 1, 2025 University of California – Los Angeles news release (also on EurekAlert), which has a less exuberant tone but originated the news item, provides more detail, Note: A link has been removed,

The team developed custom algorithms to decode electroencephalography, or EEG — a method of recording the brain’s electrical activity — and extract signals that reflect movement intentions. They paired the decoded signals with a camera-based artificial intelligence platform that interprets user direction and intent in real time. The system allows individuals to complete tasks significantly faster than without AI assistance.

“By using artificial intelligence to complement brain-computer interface systems, we’re aiming for much less risky and invasive avenues,” said study leader Jonathan Kao, an associate professor of electrical and computer engineering at the UCLA Samueli School of Engineering. “Ultimately, we want to develop AI-BCI systems that offer shared autonomy, allowing people with movement disorders, such as paralysis or ALS, to regain some independence for everyday tasks.”

State-of-the-art, surgically implanted BCI devices can translate brain signals into commands, but the benefits they currently offer are outweighed by the risks and costs associated with neurosurgery to implant them. More than two decades after they were first demonstrated, such devices are still limited to small pilot clinical trials. Meanwhile, wearable and other external BCIs have demonstrated a lower level of performance in detecting brain signals reliably. 

To address these limitations, the researchers tested their new noninvasive AI-assisted BCI with four participants — three without motor impairments and a fourth who was paralyzed from the waist down. Participants wore a head cap to record EEG, and the researchers used custom decoder algorithms to translate these brain signals into movements of a computer cursor and robotic arm. Simultaneously, an AI system with a built-in camera observed the decoded movements and helped participants complete two tasks.

In the first task, they were instructed to move a cursor on a computer screen to hit eight targets, holding the cursor in place at each for at least half a second. In the second challenge, participants were asked to activate a robotic arm to move four blocks on a table from their original spots to designated positions. 

All participants completed both tasks significantly faster with AI assistance. Notably, the paralyzed participant completed the robotic arm task in about six-and-a-half minutes with AI assistance, whereas without it, he was unable to complete the task.

The BCI deciphered electrical brain signals that encoded the participants’ intended actions. Using a computer vision system, the custom-built AI inferred the users’ intent — not their eye movements — to guide the cursor and position the blocks.
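For readers curious how "shared autonomy" might work mechanically, here is a minimal sketch of the general idea, not the paper's actual algorithm: blend the velocity decoded from EEG with a velocity aimed at the AI-inferred target. The function name `copilot_velocity`, the `alpha` blending weight and the toy numbers are all illustrative assumptions.

```python
import numpy as np

def copilot_velocity(decoded_vel, cursor_pos, inferred_target, alpha=0.5, gain=1.0):
    """Blend the (noisy) EEG-decoded velocity with a velocity aimed at the
    AI-inferred target. alpha = 0 is pure user control; alpha = 1 is pure
    AI assistance."""
    to_target = inferred_target - cursor_pos
    dist = np.linalg.norm(to_target)
    ai_vel = gain * to_target / dist if dist > 1e-9 else np.zeros_like(to_target)
    return (1.0 - alpha) * decoded_vel + alpha * ai_vel

# One control step: the decoder says "up and right", the co-pilot believes
# the target is directly to the right, and the blend splits the difference.
vel = copilot_velocity(np.array([0.5, 0.4]), np.array([0.0, 0.0]),
                       np.array([1.0, 0.0]), alpha=0.5)
# vel -> [0.75, 0.2]
```

Raising `alpha` hands more control to the co-pilot, which is roughly why task times drop with AI assistance while the user's decoded intent still steers the motion.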

“Next steps for AI-BCI systems could include the development of more advanced co-pilots that move robotic arms with more speed and precision, and offer a deft touch that adapts to the object the user wants to grasp,” said co-lead author Johannes Lee, a UCLA electrical and computer engineering doctoral candidate advised by Kao. “And adding in larger-scale training data could also help the AI collaborate on more complex tasks, as well as improve EEG decoding itself.”

The paper’s authors are all members of Kao’s Neural Engineering and Computation Lab, including Sangjoon Lee, Abhishek Mishra, Xu Yan, Brandon McMahan, Brent Gaisford, Charles Kobashigawa, Mike Qu and Chang Xie. A member of the UCLA Brain Research Institute, Kao also holds faculty appointments in the Computer Science Department and the Interdepartmental Ph.D. Program in Neuroscience.

The research was funded by the National Institutes of Health and the Science Hub for Humanity and Artificial Intelligence, which is a collaboration between UCLA and Amazon. The UCLA Technology Development Group has applied for a patent related to the AI-BCI technology. 

Here’s a link to and a citation for the paper,

Brain–computer interface control with artificial intelligence copilots by Johannes Y. Lee, Sangjoon Lee, Abhishek Mishra, Xu Yan, Brandon McMahan, Brent Gaisford, Charles Kobashigawa, Mike Qu, Chang Xie & Jonathan C. Kao. Nature Machine Intelligence, volume 7, pages 1510–1523 (2025). Published: 01 September 2025. DOI: https://doi.org/10.1038/s42256-025-01090-y

This paper is behind a paywall.

15th-century Inca building constructed for sound

Carpa uasi. The carpa uasi was the bottom level of this building; it originally ended to the left of the arch (near the right side of the floor level). The 15th-century structure survived because the church built over and around it lent stability. Credit: Stella Nair Courtesy: University of California at Los Angeles (UCLA)

This October 21, 2025 University of California at Los Angeles (UCLA) news release by Sean Brenner tells a fascinating story about sound and architecture, Note: Links have been removed,

Key takeaways

  • UCLA art history professor Stella Nair is collaborating with an interdisciplinary team analyzing a unique Inca building that dates to the mid-15th century.
  • The building, in the remote town of Huaytará, Peru, appears to have been constructed specifically for the purpose of amplifying music and sound, with three walls and an opening at one end.
  • The study is important in part because scholars tend to focus on visual evidence when analyzing cultures of the past, but understanding the role of sound can create a more three-dimensional picture.

The Inca empire is renowned for its architecture; its buildings were intricately designed and extraordinarily durable.

But this summer, it was another aspect of Inca construction that captured the attention of Stella Nair, a UCLA associate professor of art history whose expertise is Indigenous arts and architecture of the Americas.

Nair spent three weeks in the remote town of Huaytará, Peru, studying a single Inca building that appears to have been created primarily to amplify sound and music. Known as a carpa uasi, the structure was likely built in the mid-15th century.

“We’re learning that sound was incredibly important from the earliest cities on, dating back several thousand years B.C.,” said Nair, who is working on her third book about Andean (in and around the Andes mountains) architecture. “Builders were incredibly sophisticated with their aural architecture, and the Incas are one part of this long, sophisticated tradition of sonic engineering.”

One of a kind

Nair said the structure is the only known carpa uasi in existence, and although scholars have known about it for many years, the building hasn’t been extensively researched — and no previous studies had identified its potential for amplifying sound.

One of its distinctive characteristics is that, because of its intended use, the carpa uasi was built with only three walls, with an opening at one of the gable ends. (The phrase carpa uasi means “tent house,” a reference to that open-ended structure.) Nair and her colleagues theorize that the design would have made it possible for sound — such as drums being used to announce the beginning or end of a battle — to be focused toward the building’s open end and then out to the surrounding environment.

“Many people look at Inca architecture and are impressed with the stonework, but that’s just the tip of the iceberg,” Nair said. “They were also concerned with the ephemeral, temporary and impermanent, and sound was one of those things.

Sound was deeply valued and an incredibly important part of Andean and Inca architecture — so much so that the builders allowed some instability in this structure just because of its acoustic potential.” [emphasis mine]

The partially open structure would have made such buildings significantly less stable than most other Inca buildings. Ironically, Nair said, this carpa uasi has survived for centuries because, perhaps at the direction of Spanish settlers, a church was later built on top of it, stabilizing the structure below.

Nair is collaborating on the project with a team of acoustic experts led by Stanford University music professor Jonathan Berger. Nair primarily studied the carpa uasi’s architecture, taking measurements and making drawings and photographs. Next, she will use hand drawings and 3-D modeling to determine what the roof may have looked like and how the building’s overall form influenced its function. Together, the researchers expect to produce a model for how sound would have traveled through and outside the building.

Toward a more complete understanding

“We’re exploring the possibility that the carpa uasi may have amplified low-frequency sounds, such as drumming, with minimal reverberation,” Nair said. “With this research, for the first time, we’ll be able to tell what the Incas valued sonically in this building.”

Investigating the sonic properties of a 600-year-old building in the Andes is much more than an academic exercise for Nair and her collaborators — and not only because it is the only surviving example of its kind.

“Sound studies are really critical, because we tend to emphasize the visual in how we understand the world around us, including our past,” Nair said. “But that’s not how we experience life — all of our senses are critical. So how we understand ourselves and our history changes if you put sound back into the conversation.”

Nair said the project reflects the importance of collaboration across disciplines, institutions and borders. The American scholars also benefited from the cooperation of partners in Peru, including the priest who oversees the Church of San Juan Bautista, the building whose architecture incorporates the carpa uasi, and a local archaeologist.

Nair’s work was funded in part by a grant from the UCLA College Division of Humanities; Berger received funding from the Templeton Religion Trust.

Ella Feldman’s October 30, 2025 article for the Smithsonian magazine enhances the ‘sound’ story with a few more details about the Inca empire. There’s also more about Stella Nair and her work on her UCLA bio webpage.

A couple of proposed solutions to AI’s insatiable need for power?

I have two stories about research into making artificial intelligence (AI) less wasteful of power. One is from the International Society for Optics and Photonics (SPIE) and the other from the Politecnico di Milano (Polytechnic of Milan).

International Society for Optics and Photonics (SPIE)

A September 9, 2025 news item on ScienceDaily announced a more energy efficient AI chip,

Artificial intelligence (AI) systems are increasingly central to technology, powering everything from facial recognition to language translation. But as AI models grow more complex, they consume vast amounts of electricity — posing challenges for energy efficiency and sustainability. A new chip developed by researchers at the University of Florida could help address this issue by using light, rather than just electricity, to perform one of AI’s most power-hungry tasks. Their research is reported in Advanced Photonics.

A September 8, 2025 SPIE (International Society for Optics and Photonics) press release, which originated the news item, provides more detail about the work, Note: Links have been removed,

The chip is designed to carry out convolution operations, a core function in machine learning that enables AI systems to detect patterns in images, video, and text. These operations typically require significant computing power. By integrating optical components directly onto a silicon chip, the researchers have created a system that performs convolutions using laser light and microscopic lenses—dramatically reducing energy consumption and speeding up processing.

“Performing a key machine learning computation at near zero energy is a leap forward for future AI systems,” said study leader Volker J. Sorger, the Rhines Endowed Professor in Semiconductor Photonics at the University of Florida. “This is critical to keep scaling up AI capabilities in years to come.”

In tests, the prototype chip classified handwritten digits with about 98 percent accuracy, comparable to traditional electronic chips. The system uses two sets of miniature Fresnel lenses—flat, ultrathin versions of the lenses found in lighthouses—fabricated using standard semiconductor manufacturing techniques. These lenses are narrower than a human hair and are etched directly onto the chip.

To perform a convolution, machine learning data is first converted into laser light on the chip. The light passes through the Fresnel lenses, which carry out the mathematical transformation. The result is then converted back into a digital signal to complete the AI task.
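The principle at work here is the convolution theorem: a lens performs a Fourier transform on light passing through it, and in the Fourier domain convolution becomes simple elementwise multiplication. The NumPy sketch below mirrors that math digitally; it is an illustration of the theorem, not the chip's actual pipeline, and `fft_convolve2d` and the toy arrays are assumptions.

```python
import numpy as np

def fft_convolve2d(image, kernel):
    """2-D circular convolution via the convolution theorem:
    conv(image, kernel) = IFFT( FFT(image) * FFT(kernel) ).
    On the chip, the Fresnel lenses perform the Fourier transforms with
    light; here NumPy's FFT stands in for the optics."""
    K = np.fft.fft2(kernel, s=image.shape)   # zero-pad kernel to image size
    return np.real(np.fft.ifft2(np.fft.fft2(image) * K))

rng = np.random.default_rng(0)
img = rng.random((8, 8))
ker = rng.random((3, 3))

# Cross-check against a direct circular convolution of the same arrays.
direct = np.zeros_like(img)
for i in range(8):
    for j in range(8):
        for a in range(3):
            for b in range(3):
                direct[i, j] += ker[a, b] * img[(i - a) % 8, (j - b) % 8]
assert np.allclose(fft_convolve2d(img, ker), direct)
```

The appeal of doing this optically is that the Fourier transform step, which dominates the cost digitally, happens "for free" as light propagates through the lens.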

“This is the first time anyone has put this type of optical computation on a chip and applied it to an AI neural network,” said Hangbo Yang, a research associate professor in Sorger’s group at UF and co-author of the study.

The team also demonstrated that the chip could process multiple data streams simultaneously by using lasers of different colors—a technique known as wavelength multiplexing. “We can have multiple wavelengths, or colors, of light passing through the lens at the same time,” Yang said. “That’s a key advantage of photonics.”

The research was conducted in collaboration with the Florida Semiconductor Institute, UCLA [University of California at Los Angeles], and George Washington University. Sorger noted that chip manufacturers such as NVIDIA already use optical elements in some parts of their AI systems, which could make it easier to integrate this new technology.

“In the near future, chip-based optics will become a key part of every AI chip we use daily,” Sorger said. “And optical AI computing is next.”

There’s also a September 8, 2025 University of Florida news release (also on EurekAlert), which is similar to the one issued by SPIE.

The paper has been published on two different sites; the citation remains the same, with links to both sites hosting the paper,

Near-energy-free photonic Fourier transformation for convolution operation acceleration by Hangbo Yang, Nicola Peserico, Shurui Li, Xiaoxuan Ma, Russell L. T. Schwartz, Mostafa Hosseini, Aydin Babakhani, Chee Wei Wong, Puneet Gupta and Volker J. Sorger. SPIE Digital Library or Advanced Photonics, Vol. 7, Issue 5, 056007 (2025). DOI: 10.1117/1.AP.7.5.056007

Both sites offer open access to the paper.

Politecnico di Milano (Polytechnic of Milan)

Caption: The photonic microchip (below) developed for the study on physical neural networks, along with the electronic chip (above, the yellow one) of control. Credit: Politecnico di Milano, DEIB – Department of Electronics, Information and Bioengineering

A September 12, 2025 Politecnico di Milano (Polytechnic of Milan) press release (also on EurekAlert but published September 9, 2025) announces work into a more energy efficient way to train artificial intelligence, specifically physical neural networks,

Artificial intelligence is now part of our daily lives, bringing with it a pressing need for larger, more complex models. However, the demand for ever-increasing power and computing capacity is rising faster than the performance traditional computers can provide.

To overcome these limitations, research is moving towards innovative technologies such as physical neural networks, analogue circuits that directly exploit the laws of physics (properties of light beams, quantum phenomena) to process information. Their potential is at the heart of the study published by the prestigious journal Nature. It is the outcome of collaboration between several international institutes, including the Politecnico di Milano, the École Polytechnique Fédérale de Lausanne, Stanford University, the University of Cambridge, and the Max Planck Institute.

The article entitled “Training of Physical Neural Networks” discusses the steps of research on training physical neural networks, carried out with the collaboration of Francesco Morichetti, professor at DEIB – Department of Electronics, Information and Bioengineering, and head of the university’s Photonic Devices Lab.

Politecnico di Milano contributed to this study by developing photonic chips for the creation of neural networks, exploiting integrated photonic technologies. Mathematical operations, such as sums and multiplications, can now be performed through light interference mechanisms on silicon microchips barely a few square millimetres in size.

“By eliminating the operations required for the digitisation of information, our photonic chips allow calculations to be carried out with a significant reduction in both energy consumption and processing time,” says Francesco Morichetti. It is a step forward in making artificial intelligence (which relies on extremely energy-intensive data centres) more sustainable.

The study published in Nature addresses the theme of training, namely the phase in which the network learns to perform certain tasks. “With our research within the Department of Electronics, Information and Bioengineering, we have helped develop an ‘in-situ’ training technique for photonic neural networks, i.e. without going through digital models. The procedure is carried out entirely using light signals. Hence, network training will not only be faster, but also more robust and efficient,” adds Morichetti.
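The release does not spell out the in-situ technique, but one family of model-free training methods in this spirit estimates gradients directly from physical measurements by perturbing the hardware's parameters and reading out the loss. The SPSA-style sketch below is a hedged illustration of that idea only; `physical_forward` is a simulated stand-in for the analogue circuit, and none of these names come from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def physical_forward(params, x):
    """Stand-in for the analogue photonic circuit; in real hardware this
    would be a physical measurement, not a differentiable function."""
    return np.tanh(params @ x)

def spsa_step(params, x, target, lr=0.02, eps=1e-3):
    """One SPSA update: probe the 'hardware' twice with opposite random
    perturbations, estimate the gradient, and descend."""
    delta = rng.choice([-1.0, 1.0], size=params.shape)
    loss = lambda p: np.sum((physical_forward(p, x) - target) ** 2)
    g_hat = (loss(params + eps * delta) - loss(params - eps * delta)) / (2 * eps) * delta
    return params - lr * g_hat

params = rng.normal(size=(2, 3))
x = rng.normal(size=3)
target = np.array([0.3, -0.2])
before = np.sum((physical_forward(params, x) - target) ** 2)
for _ in range(300):
    params = spsa_step(params, x, target)
after = np.sum((physical_forward(params, x) - target) ** 2)
```

The point of such schemes is that no digital twin of the device is ever needed: two measurements per step are enough to drive the loss down, which is what makes training "entirely using light signals" conceivable.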

The use of photonic chips will allow the development of more sophisticated models for artificial intelligence, or devices capable of processing real-time data directly on site – such as autonomous cars or intelligent sensors integrated into portable devices – without requiring remote processing.

Here’s a link to and a citation for the paper,

Training of physical neural networks by Ali Momeni, Babak Rahmani, Benjamin Scellier, Logan G. Wright, Peter L. McMahon, Clara C. Wanjura, Yuhang Li, Anas Skalli, Natalia G. Berloff, Tatsuhiro Onodera, Ilker Oguz, Francesco Morichetti, Philipp del Hougne, Manuel Le Gallo, Abu Sebastian, Azalia Mirhoseini, Cheng Zhang, Danijela Marković, Daniel Brunner, Christophe Moser, Sylvain Gigan, Florian Marquardt, Aydogan Ozcan, Julie Grollier, Andrea J. Liu, Demetri Psaltis, Andrea Alù and Romain Fleury. Nature, volume 645, pages 53–61 (2025). Published: 03 September 2025. DOI: https://doi.org/10.1038/s41586-025-09384-2

This paper is behind a paywall.

34th International Joint Conference on Artificial Intelligence (IJCAI): AI at the service of society (August 16 – 22, 2025) in Montréal (Canada)

The International Joint Conference on Artificial Intelligence (IJCAI) has been running since 1969, and this year it is being held in Montréal. Here’s more from an August 15, 2025 International Joint Conferences on Artificial Intelligence news release on EurekAlert,

“AI at the service of society” is the guiding theme of the 34th International Joint Conference on Artificial Intelligence (IJCAI), taking place from August 16 to 22, 2025 in Montreal, Canada. Since its inception in 1969, IJCAI has played a pivotal role as a forum to showcase the frontiers of artificial intelligence research and applications and thus represents the oldest continuously running conference on artificial intelligence.

In 2025, the conference, with more than 2,000 attendees, has been brought to Canada by Gilles Pesant, the Local Arrangements Committee Chair, Professor in the Department of Computer and Software Engineering at Polytechnique Montréal and IVADO [Institut de valorisation des données] researcher. “What makes IJCAI special is that it brings together the latest research from many different areas of artificial intelligence. It’s a great opportunity for the Canadian AI community to showcase its world-class contributions and outstanding talent,” says the founder of the Quosséça research lab (QUebec Optimization and Satisfaction Strategies Exploiting Constraint Algorithms) and current President of the Association for Constraint Programming. Prof. Pesant is known for developing advanced algorithms for complex scheduling and planning problems. Among his current research interests are neuro-symbolic AI systems, which combine machine learning and constraint programming.

Canada’s AI Leadership

This year marks the 30th anniversary of a breakthrough that transformed artificial intelligence by giving machines the ability to learn from and remember sequences such as speech, language, and time-series data – Long Short-Term Memory (LSTM) architecture. While not developed in Canada, the story of LSTM is intertwined with Canada’s leadership in artificial intelligence. During the “AI winter,” when much of the world abandoned neural networks, Canada became a refuge for pioneering AI research. Visionaries like Geoffrey Hinton, now a Nobel Prize winner, and Yoshua Bengio, among others, continued to advance deep learning despite widespread skepticism. Their perseverance and the resilience of the Canadian research community laid the foundation for the AI revolution that is transforming the world today. Canada continues to lead through such institutions as Mila, the Vector Institute, AMII, IVADO, and the Canadian AI Safety Institute.

The IJCAI 2025 program features a lineup of internationally recognised keynote speakers, covering the full spectrum of AI research, including:

Yoshua Bengio, a pioneer in representation learning and one of the godfathers of deep learning. He is a recipient of the 2018 Turing Award—often called the “Nobel Prize of Computing”—which he shares with Geoffrey Hinton and Yann LeCun for demonstrating how deep learning models can scale effectively with large datasets and computational power. Bengio is a professor at the Université de Montréal and the founder of Mila – Quebec AI Institute, one of the world’s largest academic labs dedicated to deep learning, which has helped establish Montreal as a global hub for AI research.

Every time someone uses a search engine or an AI-powered chatbot, they benefit from technologies that bridge the gap between human language and machine understanding — a challenge directly addressed by Heng Ji’s research. An invited IJCAI speaker, Ji is a professor at the University of Illinois Urbana-Champaign, renowned for her pioneering work on how AI systems extract and distill knowledge from vast amounts of unstructured data. Far from being confined to academia, she is also an active voice in AI policy, contributing her expertise to discussions on the ethical and responsible development of AI.

Luc De Raedt, professor of computer science at KU Leuven and director of Leuven.AI, is widely recognized for his pioneering contributions to integrating machine learning with symbolic reasoning. Beyond his research, he has played a significant leadership role in fostering public dialogue on responsible AI, spearheading initiatives and organizing debates on the societal impacts of AI to help shape conversations around ethical and trustworthy AI development. In his IJCAI 2025 keynote address, he will talk about ‘Neurosymbolic AI: Combining Data and Knowledge’.

In this effort, he is not alone. Bernhard Schölkopf, director at the Max Planck Institute for Intelligent Systems and co-founder of ELLIS (European Laboratory for Learning and Intelligent Systems), is another leading figure giving an invited talk on ‘From ML for science to causal digital twins’. In addition to his scientific contributions — particularly in kernel methods and causal inference — Schölkopf is a prominent advocate for ethical and trustworthy AI in Europe. He plays a key role in shaping AI research agendas and informing policy discussions around responsible AI.

The Montreal program also features invited talks by IJCAI 2025 awardees: Aditya Grover (UCLA and Inception Labs), recipient of the IJCAI-25 Computers and Thought Award; Rina Dechter (University of California, Irvine), recipient of the IJCAI-25 Award for Research Excellence; and Cynthia Rudin (Duke University), recipient of the IJCAI-25 John McCarthy Award.

The IJCAI 2025 scientific program highlights how AI is shaping both cutting-edge research and real-world impact. The AI, Arts & Creativity track explores AI’s growing role in generating and supporting creative work—from music and design to storytelling and architecture. The Human-Centred AI track addresses the challenges of building AI systems aligned with human values, integrating technical, cognitive, ethical, and societal perspectives. The AI for Social Good track focuses on AI-driven solutions for pressing global challenges, encouraging collaborations with governments, NGOs, and researchers to support initiatives like the UN Sustainable Development Goals. Meanwhile, the AI4Tech track showcases how AI is driving breakthroughs in critical technologies across sectors such as health, finance, mobility, and smart cities.

Complementing these thematic tracks, IJCAI 2025 also includes a set of impactful competitions and challenges to push the boundaries of applied AI, including the Challenge on Deepfake Detection and Localization, the AI for Drinking Water Chlorination Challenge, and the Pulmonary Fibrosis Segmentation Challenge. Together, these elements reflect the pulse of AI today—advancing science while addressing the needs of society.

IJCAI 2025 also presents an AI Art Gallery featuring works that examine how machines balance agency and vulnerability, and how their interactions with humans and the environment shape future possibilities. These artworks engage with these questions through AI, robotics, AR, VR, and other emerging technologies.

The program also includes the AI Lounge: Between Wonder and Caution – Insights from Three Experts, an admission-free public discussion featuring a science communication journalist in debate with three community representatives: Heng Ji (University of Illinois Urbana-Champaign), Kate Larson (University of Waterloo), and Cynthia Rudin (Duke University).

To support authors who may experience difficulties obtaining Canadian visas, a satellite event will be hosted in Guangzhou, China, from August 29 to August 31, 2025. 

The IJCAI 2025 conference is supported by its sponsors, including the Artificial Intelligence Journal (AIJ) and Palais des Congrès de Montréal (Diamond Sponsor), GMI Cloud, FinVolution Group, and Baidu and Ant Research as Silver Sponsors. 

Full Program

See full program at https://2025.ijcai.org/ 

Organizers and Institutional Support

Conference Chair: Shlomo Zilberstein, University of Massachusetts Amherst / USA

Program Chair: James Kwok, Hong Kong University of Science and Technology / China

Local Arrangements Committee Chair: Gilles Pesant, Polytechnique Montréal / Canada

Local Publicity Chair: Lina Marsso, Assistant Professor, Polytechnique Montréal / Mila / Canada

Sponsorship / Exhibit / Industry Day Chair: Nancy Laramée, IVADO, Canada

Lead student journalist on social media: Liliane-Caroline Demers, Polytechnique Montreal

Webmaster: Mehil Shah, Dalhousie University, Canada

More information on the IJCAI’s website: https://2025.ijcai.org

Should you be interested in the parent organization, which began life in California, US, you can find out more here.

CHI (computer-human interface) 2025 spotlights KAIST’s pioneering VR/AR precision technology tool & a VR choreography tool

The Korea Advanced Institute of Science and Technology (KAIST) presented two research projects at the CHI (Conference on Human Factors in Computing Systems) April 25 to May 1, 2025. A May 13, 2025 KAIST press release (also on EurekAlert but published May 15, 2025) describes the accomplishments, Note 1: VR is virtual reality and AR is augmented reality; Note 2: Embedded images have been omitted; Note 3: Additional citation information has been added (set off by square brackets) to the press release; Note 4: Both papers (as cited in the press release) are open access on the ACM website,

Accurate pointing in virtual spaces is essential for seamless interaction. If pointing is not precise, selecting the desired object becomes challenging, breaking user immersion and reducing overall experience quality. KAIST researchers have developed a technology that offers a vivid, lifelike experience in virtual space, alongside a new tool that assists choreographers throughout the creative process.

KAIST (President Kwang-Hyung Lee) announced on May 13th that a research team led by Professor Sang Ho Yoon of the Graduate School of Culture Technology, in collaboration with Professor Yang Zhang of the University of California, Los Angeles (UCLA), has developed the ‘T2IRay’ technology and the ‘ChoreoCraft’ platform, which enables choreographers to work more freely and creatively in virtual reality. These technologies received two Honorable Mention awards, recognizing the top 5% of papers, at CHI 2025*, the best international conference in the field of human-computer interaction, hosted by the Association for Computing Machinery (ACM) from April 25 to May 1 [2025].

T2IRay: Enabling Virtual Input with Precision

T2IRay introduces a novel input method that allows for precise object pointing in virtual environments by expanding traditional thumb-to-index gestures. This approach overcomes previous limitations, such as interruptions or reduced accuracy due to changes in hand position or orientation.

The technology uses a local coordinate system based on finger relationships, ensuring continuous input even as hand positions shift. It accurately captures subtle thumb movements within this coordinate system, integrating natural head movements to allow fluid, intuitive control across a wide range.
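To see what a "local coordinate system based on finger relationships" buys: if the thumb tip is expressed in a frame attached to the index finger, the reading is unchanged when the whole hand translates or rotates. The sketch below illustrates that geometry only; it is not T2IRay's actual implementation, and the function names and landmark choices are assumptions.

```python
import numpy as np

def finger_local_frame(index_base, index_tip, palm_normal):
    """Orthonormal frame attached to the index finger: x along the finger,
    z near the palm normal, y completing the right-handed set."""
    x = index_tip - index_base
    x = x / np.linalg.norm(x)
    z = palm_normal - np.dot(palm_normal, x) * x   # Gram-Schmidt step
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)
    return np.stack([x, y, z])                     # rows are the axes

def thumb_in_local_frame(thumb_tip, index_base, frame):
    """Thumb-tip coordinates in the finger-local frame: invariant to any
    rigid motion of the whole hand."""
    return frame @ (thumb_tip - index_base)

def rot_z(p):
    """Rotate a point 90 degrees about the z axis (a stand-in for the
    user turning their hand)."""
    R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    return R @ p

base = np.array([0.0, 0.0, 0.0])
tip = np.array([1.0, 0.0, 0.0])
normal = np.array([0.0, 0.0, 1.0])
thumb = np.array([0.5, -0.3, 0.1])

a = thumb_in_local_frame(thumb, base, finger_local_frame(base, tip, normal))
b = thumb_in_local_frame(rot_z(thumb), rot_z(base),
                         finger_local_frame(rot_z(base), rot_z(tip), rot_z(normal)))
# a and b agree: the control signal survives the hand rotation.
```

Because the thumb reading is pose-invariant, pointing input need not be interrupted or degraded when the hand moves, which is the limitation the press release says T2IRay overcomes.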

< Figure 1. T2IRay framework utilizing the delicate movements of the thumb and index fingers for AR/VR pointing >

Professor Sang Ho Yoon explained, “T2IRay can significantly enhance the user experience in AR/VR by enabling smooth, stable control even when the user’s hands are in motion.”

This study, led by first author Jina Kim, was supported by the Excellent New Researcher Support Project of the National Research Foundation of Korea under the Ministry of Science and ICT, as well as the University ICT Research Center (ITRC) Support Project of the Institute of Information and Communications Technology Planning and Evaluation (IITP).

▴ Paper title: T2IRay: Design of Thumb-to-Index Based Indirect Pointing for Continuous and Robust AR/VR Input [by Jina Kim, Yang Zhang, Sang Ho Yoon. CHI ’25: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems Article No.: 1059, Pages 1 – 21 https://doi.org/10.1145/3706598.3714220 Published: 25 April 2025]
▴ Paper link: https://doi.org/10.1145/3706598.3714220
▴ T2IRay demo video: https://youtu.be/ElJlcJbkJPY

ChoreoCraft: Creativity Support through VR for Choreographers

In addition, Professor Yoon’s team developed ‘ChoreoCraft,’ a virtual reality tool designed to support choreographers by addressing the unique challenges they face, such as memorizing complex movements, overcoming creative blocks, and managing subjective feedback.

ChoreoCraft reduces reliance on memory by allowing choreographers to save and refine movements directly within a VR space, using a motion-capture avatar for real-time interaction. It also enhances creativity by suggesting movements that naturally fit with prior choreography and musical elements. Furthermore, the system provides quantitative feedback by analyzing kinematic factors like motion stability and engagement, helping choreographers make data-driven creative decisions.

< Figure 2. ChoreoCraft’s approaches to encourage creative process >

Professor Yoon noted, “ChoreoCraft is a tool designed to address the core challenges faced by choreographers, enhancing both creativity and efficiency. In user tests with professional choreographers, it received high marks for its ability to spark creative ideas and provide valuable quantitative feedback.”

This research was conducted in collaboration with doctoral candidate Kyungeun Jung and master’s candidate Hyunyoung Han, alongside the Electronics and Telecommunications Research Institute (ETRI) and One Million Co., Ltd. [1MILLION DANCE STUDIO?] (CEO Hye-rang Kim [aka, Lia Kim?]), with support from the Cultural and Arts Immersive Service Development Project by the Ministry of Culture, Sports and Tourism.

▴ Paper title: ChoreoCraft: In-situ Crafting of Choreography in Virtual Reality through Creativity Support Tools by Jina Kim, Yang Zhang, Sang Ho Yoon. CHI ’25: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems Article No.: 854, Pages 1 – 16 DOI: https://doi.org/10.1145/3706598.3713442 Published: 25 April 2025
▴ Paper link: https://doi.org/10.1145/3706598.3713442
▴ ChoreoCraft demo video: https://youtu.be/Ms1fwiSBjjw

*CHI (Conference on Human Factors in Computing Systems): The premier international conference on human-computer interaction, organized by the ACM, was held this year from April 25 to May 1, 2025.

I did a little more digging with regard to Lia Kim (Hye-rang Kim) and found this précis from her profile on thefamouspeople.com,

Lia Kim is a Korean hip-hop dancer and choreographer. Her incredible choreography routines display a unique combination of street funk and urban hip-hop. She is best known for her intricate finger-tutting skills. Lia is the CEO of the Seoul-based ‘1Million Dance Studio.’ The studio is a melting pot of various dance forms from around the world and is a place where students connect with one another through the joy of dancing. Since Lia is a professional choreographer, she majorly focuses on detailing and technicality. This makes her dance routines an artwork. Lia has performed at several national and international events and has judged a few dance reality shows, too. She has choreographed a number of concerts and events and has also created dance routines for a number of celebrities.

1Million Dance Studio can be found here.

Measuring brainwaves with temporary tattoo on scalp

Caption: EEG setup with e-tattoo electrodes Credit: Nanshu Lu

A December 2, 2024 news item on ScienceDaily announces development of a liquid ink that can measure brainwaves,

For the first time, scientists have invented a liquid ink that doctors can print onto a patient’s scalp to measure brain activity. The technology, presented December 2 [2024] in the Cell Press journal Cell Biomaterials, offers a promising alternative to the cumbersome process currently used for monitoring brainwaves and diagnosing neurological conditions. It also has the potential to enhance non-invasive brain-computer interface applications.

The December 2, 2024 Cell Press press release on EurekAlert, which originated the news item, claims this is a hair-friendly e-tattoo even though the model has a shaved head (perhaps that was for modeling purposes only?),

“Our innovations in sensor design, biocompatible ink, and high-speed printing pave the way for future on-body manufacturing of electronic tattoo sensors, with broad applications both within and beyond clinical settings,” says Nanshu Lu, the paper’s co-corresponding author at the University of Texas at Austin.

Electroencephalography (EEG) is an important tool for diagnosing a variety of neurological conditions, including seizures, brain tumors, epilepsy, and brain injuries. During a traditional EEG test, technicians measure the patient’s scalp with rulers and pencils, marking over a dozen spots where they will glue on electrodes, which are connected to a data-collection machine via long wires to monitor the patient’s brain activity. This setup is time consuming and cumbersome, and it can be uncomfortable for many patients, who must sit through the EEG test for hours.

Lu and her team have been pioneering the development of small sensors that track bodily signals from the surface of human skin, a technology known as electronic tattoos, or e-tattoos. Scientists have applied e-tattoos to the chest to measure heart activities, on muscles to measure how fatigued they are, and even under the armpit to measure components of sweat.

In the past, e-tattoos were usually printed on a thin layer of adhesive material before being transferred onto the skin, but this was only effective on hairless areas.

“Designing materials that are compatible with hairy skin has been a persistent challenge in e-tattoo technology,” Lu says. To overcome this, the team designed a type of liquid ink made of conductive polymers. The ink can flow through hair to reach the scalp, and once dried, it works as a thin-film sensor, picking up brain activity through the scalp.

Using a computer algorithm, the researchers can design the spots for EEG electrodes on the patient’s scalp. Then, they use a digitally controlled inkjet printer to spray a thin layer of the e-tattoo ink onto the spots. The process is quick, requires no contact, and causes no discomfort in patients, the researchers said.

The team printed e-tattoo electrodes onto the scalps of five participants with short hair. They also attached conventional EEG electrodes next to the e-tattoos. The team found that the e-tattoos performed comparably well at detecting brainwaves with minimal noise.
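As an aside, the kind of side-by-side comparison described here boils down to correlating two simultaneously recorded traces and estimating each one's noise. Here's a toy sketch of that idea with synthetic signals; it is my own illustration, not the study's data or analysis pipeline:

```python
import numpy as np

def compare_electrodes(etattoo, conventional):
    """Quantify agreement between two side-by-side EEG recordings:
    Pearson correlation of the raw traces, plus each channel's noise
    level estimated as the standard deviation of sample-to-sample
    differences (a crude high-frequency proxy)."""
    r = np.corrcoef(etattoo, conventional)[0, 1]
    noise = {name: float(np.std(np.diff(sig)))
             for name, sig in (("e-tattoo", etattoo),
                               ("conventional", conventional))}
    return r, noise

# Synthetic demo: a shared 10 Hz "alpha" rhythm plus channel noise.
fs = 250.0                          # assumed sampling rate, Hz
t = np.arange(0, 4, 1 / fs)
alpha = np.sin(2 * np.pi * 10 * t)
rng = np.random.default_rng(0)
tattoo = alpha + 0.1 * rng.standard_normal(t.size)
gel = alpha + 0.1 * rng.standard_normal(t.size)
r, noise = compare_electrodes(tattoo, gel)
```

With mostly shared signal and small independent noise, the correlation lands near 1; a drying gel electrode would show up as a falling correlation and rising noise estimate.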

After six hours, the gel on the conventional electrodes started to dry out. Over a third of these electrodes failed to pick up any signal, and most of the remaining electrodes had reduced contact with the skin, resulting in less accurate signal detection. The e-tattoo electrodes, on the other hand, showed stable connectivity for at least 24 hours.

Additionally, researchers tweaked the ink’s formula and printed e-tattoo lines that run down to the base of the head from the electrodes to replace the wires used in a standard EEG test. “This tweak allowed the printed wires to conduct signals without picking up new signals along the way,” says co-corresponding author Ximin He of the University of California, Los Angeles.

The team then attached much shorter physical wires between the tattoos to a small device that collects brainwave data. The team said that in the future, they plan to embed wireless data transmitters in the e-tattoos to achieve a fully wireless EEG process.

“Our study can potentially revolutionize the way non-invasive brain-computer interface devices are designed,” says co-corresponding author José Millán of the University of Texas at Austin. Brain-computer interface devices work by recording brain activities associated with a function, such as speech or movement, and use them to control an external device without having to move a muscle. Currently, these devices often involve a large headset that is cumbersome to use. E-tattoos have the potential to replace the external device and print the electronics directly onto a patient’s head, making brain-computer interface technology more accessible, Millán says.  

Here’s a link to and a citation for the paper,

On-scalp printing of personalized electroencephalography e-tattoos by Luize Scalco de Vasconcelos, Yichen Yan, Pukar Maharjan, Satyam Kumar, Minsu Zhang, Bowen Yao, Hongbian Li, Sidi Duan, Eric Li, Eric Williams, Sandhya Tiku, Pablo Vidal, R. Sergio Solorzano-Vargas, Wen Hong, Yingjie Du, Zixiao Liu, Fumiaki Iwane, Charles Block, Andrew T. Repetski, Philip Tan, Pulin Wang, Martin G. Martín, José del R. Millán, Ximin He, Nanshu Lu. Cell Biomaterials, 2024 DOI: 10.1016/j.celbio.2024.100004 Copyright: © 2024 Elsevier Inc. All rights are reserved, including those for text and data mining, AI training, and similar technologies

This paper is open access, but you are better off downloading the PDF version.

Physical neural network based on nanowires can learn and remember ‘on the fly’

A November 1, 2023 news item on Nanowerk announced new work on neuromorphic engineering from Australia,

For the first time, a physical neural network has successfully been shown to learn and remember ‘on the fly’, in a way inspired by and similar to how the brain’s neurons work.

The result opens a pathway for developing efficient and low-energy machine intelligence for more complex, real-world learning and memory tasks.

Key Takeaways
*The nanowire-based system can learn and remember ‘on the fly,’ processing dynamic, streaming data for complex learning and memory tasks.

*This advancement overcomes the challenge of heavy memory and energy usage commonly associated with conventional machine learning models.

*The technology achieved a 93.4% accuracy rate in image recognition tasks, using real-time data from the MNIST database of handwritten digits.

*The findings promise a new direction for creating efficient, low-energy machine intelligence applications, such as real-time sensor data processing.

Nanowire neural network
Caption: Electron microscope image of the nanowire neural network that arranges itself like ‘Pick Up Sticks’. The junctions where the nanowires overlap act in a way similar to how our brain’s synapses operate, responding to electric current. Credit: The University of Sydney

A November 1, 2023 University of Sydney news release (also on EurekAlert), which originated the news item, elaborates on the research,

Published today [November 1, 2023] in Nature Communications, the research is a collaboration between scientists at the University of Sydney and University of California at Los Angeles.

Lead author Ruomin Zhu, a PhD student from the University of Sydney Nano Institute and School of Physics, said: “The findings demonstrate how brain-inspired learning and memory functions using nanowire networks can be harnessed to process dynamic, streaming data.”

Nanowire networks are made up of tiny wires that are just billionths of a metre in diameter. The wires arrange themselves into patterns reminiscent of the children’s game ‘Pick Up Sticks’, mimicking neural networks, like those in our brains. These networks can be used to perform specific information processing tasks.

Memory and learning tasks are achieved using simple algorithms that respond to changes in electronic resistance at junctions where the nanowires overlap. Known as ‘resistive memory switching’, this function is created when electrical inputs encounter changes in conductivity, similar to what happens with synapses in our brain.
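For the technically inclined, the junction behaviour described in that paragraph can be caricatured in a few lines of code. This toy model is my own, with made-up parameter values; it only shows the qualitative idea that voltage pulses strengthen a junction's conductance while the junction relaxes back toward its resting state between pulses:

```python
# Toy model of 'resistive memory switching' at a nanowire junction:
# conductance rises with suprathreshold voltage pulses (potentiation)
# and decays toward its resting value otherwise (forgetting).
# Parameter values are illustrative, not taken from the paper.
def step(g, v, threshold=1.0, gain=0.2, decay=0.05,
         g_min=0.01, g_max=1.0):
    """Advance junction conductance g by one time step given input v."""
    if abs(v) > threshold:            # a pulse strengthens the junction
        g += gain * (g_max - g)
    g -= decay * (g - g_min)          # slow relaxation between pulses
    return min(max(g, g_min), g_max)

g = 0.01
history = []
for v in [1.5, 1.5, 1.5, 0.0, 0.0, 0.0]:  # three pulses, then rest
    g = step(g, v)
    history.append(round(g, 4))
# Conductance climbs during the pulses and decays afterwards,
# leaving a memory-like trace of recent input.
```

That climb-and-decay trace is the rough hardware analogue of a synapse strengthening with use and fading without it.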

In this study, researchers used the network to recognise and remember sequences of electrical pulses corresponding to images, inspired by the way the human brain processes information.

Supervising researcher Professor Zdenka Kuncic said the memory task was similar to remembering a phone number. The network was also used to perform a benchmark image recognition task, accessing images in the MNIST database of handwritten digits, a collection of 70,000 small greyscale images used in machine learning.

“Our previous research established the ability of nanowire networks to remember simple tasks. This work has extended these findings by showing tasks can be performed using dynamic data accessed online,” she said.

“This is a significant step forward as achieving an online learning capability is challenging when dealing with large amounts of data that can be continuously changing. A standard approach would be to store data in memory and then train a machine learning model using that stored information. But this would chew up too much energy for widespread application.

“Our novel approach allows the nanowire neural network to learn and remember ‘on the fly’, sample by sample, extracting data online, thus avoiding heavy memory and energy usage.”

Mr Zhu said there were other advantages when processing information online.

“If the data is being streamed continuously, such as it would be from a sensor for instance, machine learning that relied on artificial neural networks would need to have the ability to adapt in real-time, which they are currently not optimised for,” he said.

In this study, the nanowire neural network displayed a benchmark machine learning capability, scoring 93.4 percent in correctly identifying test images. The memory task involved recalling sequences of up to eight digits. For both tasks, data was streamed into the network to demonstrate its capacity for online learning and to show how memory enhances that learning.
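The nanowire network itself is physical hardware, but the "learn from each sample once, then discard it" idea can be illustrated in software. Here's a sketch using an online softmax classifier on a synthetic stream as a stand-in; the readout model, learning rate, and data are my own assumptions, not the study's:

```python
import numpy as np

# Online (streaming) learning sketch: each sample is seen once, used
# to update the model, then discarded -- no stored dataset. A softmax
# readout trained by SGD stands in for the readout layer typically
# paired with a physical reservoir; the nanowires are not modelled.
rng = np.random.default_rng(1)

def one_hot(y, n=10):
    v = np.zeros(n)
    v[y] = 1.0
    return v

class OnlineSoftmax:
    def __init__(self, n_features, n_classes=10, lr=0.05):
        self.W = np.zeros((n_classes, n_features))
        self.lr = lr

    def update(self, x, y):
        """One streaming step: predict, then nudge the weights."""
        z = self.W @ x
        p = np.exp(z - z.max())
        p /= p.sum()
        self.W += self.lr * np.outer(one_hot(y) - p, x)
        return int(np.argmax(z))

# Tiny synthetic stream (class mean + noise) in place of MNIST pixels.
means = rng.standard_normal((10, 64))
model = OnlineSoftmax(64)
correct = 0
n_samples = 2000
for _ in range(n_samples):
    y = int(rng.integers(10))
    x = means[y] + 0.3 * rng.standard_normal(64)
    correct += model.update(x, y) == y
accuracy = correct / n_samples
```

Each prediction is made before the corresponding update, so the running accuracy measures genuine on-the-fly learning rather than fit to stored data.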

Here’s a link to and a citation for the paper,

Online dynamical learning and sequence memory with neuromorphic nanowire networks by Ruomin Zhu, Sam Lilak, Alon Loeffler, Joseph Lizier, Adam Stieg, James Gimzewski & Zdenka Kuncic. Nature Communications volume 14, Article number: 6697 (2023) DOI: https://doi.org/10.1038/s41467-023-42470-5 Published: 01 November 2023

This paper is open access.

You’ll notice a number of this team’s members are also listed in the citation in my June 21, 2023 posting “Learning and remembering like a human brain: nanowire networks” and you’ll see some familiar names in the citation in my June 17, 2020 posting “A tangle of silver nanowires for brain-like action.”

Questioning or rewriting a ‘central’ dogma of biology?

Answering the question in the head, this December 12, 2023 news item on phys.org calls into question the principle behind how medicines based on antibodies work,

Today, medicines based on antibodies—proteins that fight infection and disease—are prescribed for everything from cancer to COVID-19 to high cholesterol. The antibody drugs are supplied by genetically-engineered cells that function as tiny protein-producing factories in the laboratory.

Meanwhile, researchers have been targeting cancer, injuries to internal organs and a host of other ailments with new strategies in which similarly engineered cells are implanted directly into patients.

These biotechnology applications rely on the principle that altering a cell’s DNA to produce more of the genetic instructions for making a given protein will cause the cell to release more of that protein.

A new UCLA [University of California at Los Angeles] study suggests that—at least in one type of stem cell—the principle doesn’t necessarily hold true.

A December 11, 2023 UCLA news release, which originated the news item, delves further into the topic but first the key points are noted, Note: Links have been removed,

Key takeaways

  • Mesenchymal stem cells, found in bone marrow, secrete therapeutic proteins that could potentially help regenerate damaged tissue.
  • A UCLA study examining these cells challenges the conventional understanding of which genetic instructions prompt the release of these therapeutic proteins.
  • The findings could help advance both regenerative medicine research and the laboratory production of biologic treatments already in use.

The researchers examined mesenchymal stem cells, which reside in bone marrow and can self-renew or develop into bone, fat or muscle cells. Mesenchymal cells secrete a protein growth factor called VEGF-A, which plays a role in regenerating blood vessels and which scientists believe may have the potential to repair damage from heart attacks, kidney injuries, arterial disease in limbs and other conditions.

When the researchers compared the amount of VEGF-A that each mesenchymal cell released with the expression of genes in the same cell that code for VEGF-A, the results were surprising: Gene expression correlated only weakly with the actual secretion of the growth factor.  

The scientists identified other genes better correlating with growth factor secretion, including one that codes for a protein found on the surface of some stem cells. Isolating stem cells with that protein on their surface, the team cultivated a population that secreted VEGF-A prolifically and kept doing so days later.
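The analysis pattern here, correlating each gene's expression with measured secretion across thousands of single cells, is easy to sketch. The following toy example uses synthetic data with invented effect sizes; it is my own illustration of the pattern, not the study's actual statistics:

```python
import numpy as np

# Sketch of the analysis described above: per-cell secretion
# measurements are correlated against each gene's expression to find
# genes that track secretion better than the VEGFA transcript does.
# Data, gene relationships, and effect sizes are synthetic.
rng = np.random.default_rng(2)
n_cells = 10_000

secretion = rng.gamma(shape=2.0, scale=1.0, size=n_cells)
expression = {
    # VEGFA transcript: only weakly related to measured secretion
    "VEGFA": 0.2 * secretion + rng.standard_normal(n_cells),
    # a surface-marker gene strongly tracking secretion (invented)
    "IL13RA2": secretion + 0.3 * rng.standard_normal(n_cells),
    # an unrelated housekeeping gene
    "ACTB": rng.standard_normal(n_cells),
}

corr = {gene: float(np.corrcoef(vals, secretion)[0, 1])
        for gene, vals in expression.items()}
ranked = sorted(corr, key=corr.get, reverse=True)
```

With 10,000 cells, even modest true correlations are estimated precisely, which is why a surprisingly weak VEGFA correlation is a meaningful finding rather than noise.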

The findings, published today [December 11, 2023] in Nature Nanotechnology, suggest that a fundamental assumption in biology and biotechnology may be up for reconsideration, said co-corresponding author Dino Di Carlo, the Armond and Elena Hairapetian Professor of Engineering and Medicine at the UCLA Samueli School of Engineering.

“The central dogma has been, you have instructions in the DNA, they’re transcribed to RNA, and then the RNA is translated into protein,” said Di Carlo, who is also a member of UCLA’s California NanoSystems Institute and Eli and Edythe Broad Center of Regenerative Medicine and Stem Cell Research. “Based on this, many scientists assumed that if you had more RNA, you’d have more protein, and then more protein released from the cell. We questioned that assumption.

“It seems we can’t assume that if a gene is expressed at higher levels, there will be higher secretion of the corresponding protein. We found a clear example where that doesn’t happen, and it opens up a lot of new questions.”

The results could help make the manufacturing of antibody-based treatments more efficient and define new cellular treatments that would be more effective. Knowing the right genetic switches to flip could enable the engineering or selection of extraordinarily productive cells for making or delivering therapies.

The UCLA study was conducted using standard lab equipment augmented with a technology invented by Di Carlo and his colleagues: nanovials, microscopic bowl-shaped hydrogel containers, each of which captures a single cell and its secretions. Leveraging a new nanovial-enabled analytic method, the scientists were able to connect the amount of VEGF-A released by each one of 10,000 mesenchymal stem cells to an atlas mapping tens of thousands of genes expressed by that same cell.

“The ability to link protein secretion to gene expression on the single-cell level holds great promise for the fields of life science research and therapeutic development,” said Kathrin Plath, a UCLA professor of biological chemistry, a member of the Broad Stem Cell Research Center and a co-corresponding author of the study. “Without it, we couldn’t have arrived at the unexpected results we found in this study. Now we have an exciting opportunity to learn new things about the mechanisms underpinning the basic processes of life and use what we learn to advance human health.”

While activation of the genetic instructions for VEGF-A displayed little correlation with release of the protein, the researchers identified a cluster of 153 genes with strong links to VEGF-A secretion. Many of them are known for their function in blood vessel development and wound healing; for others, their function is currently unknown.

One of the top matches encodes a cell-surface protein, IL13RA2, whose purpose is poorly understood. Its exterior location made it simpler for the scientists to use it as a marker and separate those cells from the others. Cells with IL13RA2 showed 30% more VEGF-A secretion than cells that lacked the marker.

In a similar experiment, the researchers kept the separated cells in culture for six days. At the end of that time, cells with the marker secreted 60% more VEGF-A compared to cells without it.

Although therapies based on mesenchymal stem cells have shown promise in laboratory studies, clinical trials with human participants have shown many of these new options to be safe but not effective. The ability to sort for high VEGF-A secreters using IL13RA2 may help turn that tide.

“Identifying a subpopulation that produces more, and markers associated with that population, means you can separate them out very easily,” Di Carlo said. “A very pure population of cells that’s going to produce high levels of your therapeutic protein should make a better therapy.”

Nanovials are available commercially from Partillion Bioscience, a company co-founded by Di Carlo that started up at the CNSI’s on-campus incubator, Magnify.

The first author of the study is Shreya Udani, who earned a doctorate from UCLA in 2023. Other co-authors, all affiliated with UCLA, are staff scientist Justin Langerman; Doyeon Koo, who earned a doctorate in 2023; graduate students Sevana Baghdasarian and Citradewi Soemardy; undergraduate Brian Cheng; Simran Kang, who earned a bachelor’s degree in 2023; and Joseph de Rutte, who earned a doctorate in 2020 and is a co-founder and CEO of Partillion.

The study was supported by the National Institutes of Health and a Stem Cell Nanomedicine Planning Award funded jointly by the CNSI and the Broad Stem Cell Research Center.

Researcher Dino Di Carlo describes his work,

Nanovials, a technology created by UCLA’s Dino Di Carlo and his colleagues, allowed researchers to capture single mesenchymal cells and their secretions. Without these vials, which are smaller than the width of a human hair, “we couldn’t have arrived at the unexpected results we found in this study,” said UCLA’s Kathrin Plath.

Here’s a link to and a citation for the paper,

Associating growth factor secretions and transcriptomes of single cells in nanovials using SEC-seq by Shreya Udani, Justin Langerman, Doyeon Koo, Sevana Baghdasarian, Brian Cheng, Simran Kang, Citradewi Soemardy, Joseph de Rutte, Kathrin Plath & Dino Di Carlo. Nature Nanotechnology (2023) DOI: https://doi.org/10.1038/s41565-023-01560-7 Published: 11 December 2023

This paper is behind a paywall.

As for the two companies mentioned in the news release, you find Partillion Bioscience here and Magnify at CNSI here.

Reversing lower limb paralysis

This regenerative treatment is at a very early stage, which means the Swiss researchers have tried it on mice as you can see in the following video (runtime: 2 mins. 15 secs.). Towards the end of the video, researcher Grégoire Courtine cautions there are many hurdles before this could be used in humans, if ever,

A September 22, 2023 Ecole Polytechnique Fédérale de Lausanne (EPFL) press release (also on EurekAlert but published September 21, 2023) by Emmanuel Barraud, describes the work in more detail,

When the spinal cords of mice and humans are partially damaged, the initial paralysis is followed by the extensive, spontaneous recovery of motor function. However, after a complete spinal cord injury, this natural repair of the spinal cord doesn’t occur and there is no recovery. Meaningful recovery after severe injuries requires strategies that promote the regeneration of nerve fibers, but the requisite conditions for these strategies to successfully restore motor function have remained elusive.

“Five years ago, we demonstrated that nerve fibers can be regenerated across anatomically complete spinal cord injuries,” says Mark Anderson, a senior author of the study. “But we also realized this wasn’t enough to restore motor function, as the new fibers failed to connect to the right places on the other side of the lesion.” Anderson is the director of Central Nervous System Regeneration at .NeuroRestore and a scientist at the Wyss Center for Bio and Neuroengineering.

Working in tandem with peers at UCLA [University of California at Los Angeles] and Harvard Medical School, the scientists used state-of-the-art equipment at EPFL’s Campus Biotech facilities in Geneva to run in-depth analyses and identify which type of neuron is involved in natural spinal-cord repair after partial spinal cord injury. “Our observations using single-cell nuclear RNA sequencing not only exposed the specific axons that must regenerate, but also revealed that these axons must reconnect to their natural targets to restore motor function,” says Jordan Squair, the study’s first author. The team’s findings appear in the 22 September 2023 issue of Science.

Towards a combination of approaches

Their discovery informed the design of a multipronged gene therapy. The scientists activated growth programs in the identified neurons in mice to regenerate their nerve fibers, upregulated specific proteins to support the neurons’ growth through the lesion core, and administered guidance molecules to attract the regenerating nerve fibers to their natural targets below the injury. “We were inspired by nature when we designed a therapeutic strategy that replicates the spinal-cord repair mechanisms occurring spontaneously after partial injuries,” says Squair.

Mice with anatomically complete spinal cord injuries regained the ability to walk, exhibiting gait patterns that resembled those quantified in mice that resumed walking naturally after partial injuries. This observation revealed a previously unknown condition for regenerative therapies to be successful in restoring motor function after neurotrauma. “We expect that our gene therapy will act synergistically with our other procedures involving electrical stimulation of the spinal cord,” says Grégoire Courtine, a senior author of the study who also heads .NeuroRestore together with Jocelyne Bloch. “We believe a complete solution for treating spinal cord injury will require both approaches – gene therapy to regrow relevant nerve fibers, and spinal stimulation to maximize the ability of both these fibers and the spinal cord below the injury to produce movement.”

While many obstacles must still be overcome before this gene therapy can be applied in humans, the scientists have taken the first steps towards developing the technology necessary to achieve this feat in the years to come.

Here’s a link to and a citation for the paper,

Recovery of walking after paralysis by regenerating characterized neurons to their natural target region by Jordan W. Squair, Marco Milano, Alexandra de Coucy, Matthieu Gautier, Michael A. Skinnider, Nicholas D. James, Newton Cho, Anna Lasne, Claudia Kathe, Thomas H. Hutson, Steven Ceto, Laetitia Baud, Katia Galan, Viviana Aureli, Achilleas Laskaratos, Quentin Barraud, Timothy J. Deming, Richie E. Kohman, Bernard L. Schneider, Zhigang He, Jocelyne Bloch, Michael V. Sofroniew, Grégoire Courtine, and Mark A. Anderson. Science 21 Sep 2023 Vol 381, Issue 6664 pp. 1338-1345 DOI: 10.1126/science.adi641

This paper is behind a paywall.

This March 25, 2015 posting, “Spinal cords, brains, implants, and remote control,” features some research from EPFL researchers whose names you might recognize from this posting’s research paper.

Mentioned in the press release, the Swiss research centre website for NeuroRestore is here.