A couple of proposed solutions to AI’s insatiable need for power?

I have two stories about research into making artificial intelligence (AI) less wasteful of power. One is from the International Society for Optics and Photonics (SPIE) and the other from the Politecnico di Milano (Polytechnic of Milan).

International Society for Optics and Photonics (SPIE)

A September 9, 2025 news item on ScienceDaily announced a more energy-efficient AI chip,

Artificial intelligence (AI) systems are increasingly central to technology, powering everything from facial recognition to language translation. But as AI models grow more complex, they consume vast amounts of electricity — posing challenges for energy efficiency and sustainability. A new chip developed by researchers at the University of Florida could help address this issue by using light, rather than just electricity, to perform one of AI’s most power-hungry tasks. Their research is reported in Advanced Photonics.

A September 8, 2025 SPIE (International Society for Optics and Photonics) press release, which originated the news item, provides more detail about the work, Note: Links have been removed,

The chip is designed to carry out convolution operations, a core function in machine learning that enables AI systems to detect patterns in images, video, and text. These operations typically require significant computing power. By integrating optical components directly onto a silicon chip, the researchers have created a system that performs convolutions using laser light and microscopic lenses—dramatically reducing energy consumption and speeding up processing.

“Performing a key machine learning computation at near zero energy is a leap forward for future AI systems,” said study leader Volker J. Sorger, the Rhines Endowed Professor in Semiconductor Photonics at the University of Florida. “This is critical to keep scaling up AI capabilities in years to come.”

In tests, the prototype chip classified handwritten digits with about 98 percent accuracy, comparable to traditional electronic chips. The system uses two sets of miniature Fresnel lenses—flat, ultrathin versions of the lenses found in lighthouses—fabricated using standard semiconductor manufacturing techniques. These lenses are narrower than a human hair and are etched directly onto the chip.

To perform a convolution, machine learning data is first converted into laser light on the chip. The light passes through the Fresnel lenses, which carry out the mathematical transformation. The result is then converted back into a digital signal to complete the AI task.
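The lenses work because a lens performs a Fourier transform optically, and convolution in real space is multiplication in Fourier space. As a minimal numerical sketch of that same math (the digital analogue of what the chip does in light; the image size and filter here are illustrative, not from the paper):

```python
import numpy as np

def conv2d_via_fourier(image, kernel):
    """Circular 2-D convolution via the convolution theorem:
    conv(a, b) = IFFT(FFT(a) * FFT(b)).
    A lens computes the Fourier transform at its focal plane,
    which is the step the chip performs optically."""
    H, W = image.shape
    K = np.fft.fft2(kernel, s=(H, W))  # zero-pad kernel to image size
    return np.real(np.fft.ifft2(np.fft.fft2(image) * K))

rng = np.random.default_rng(0)
img = rng.random((28, 28))          # e.g. a handwritten-digit image
edge = np.array([[1., 0., -1.],
                 [2., 0., -2.],
                 [1., 0., -1.]])    # Sobel-style edge filter (assumed)

out = conv2d_via_fourier(img, edge)
print(out.shape)
```

Replacing the two FFTs and the pointwise multiply with free-space propagation through the Fresnel lenses is what removes most of the electronic arithmetic, and hence most of the energy cost.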

“This is the first time anyone has put this type of optical computation on a chip and applied it to an AI neural network,” said Hangbo Yang, a research associate professor in Sorger’s group at UF and co-author of the study.

The team also demonstrated that the chip could process multiple data streams simultaneously by using lasers of different colors—a technique known as wavelength multiplexing. “We can have multiple wavelengths, or colors, of light passing through the lens at the same time,” Yang said. “That’s a key advantage of photonics.”

The research was conducted in collaboration with the Florida Semiconductor Institute, UCLA [University of California at Los Angeles], and George Washington University. Sorger noted that chip manufacturers such as NVIDIA already use optical elements in some parts of their AI systems, which could make it easier to integrate this new technology.

“In the near future, chip-based optics will become a key part of every AI chip we use daily,” Sorger said. “And optical AI computing is next.”

There’s also a September 8, 2025 University of Florida news release (also on EurekAlert), which is similar to the one issued by SPIE.

The paper is hosted on two different sites; the citation is the same for both,

Near-energy-free photonic Fourier transformation for convolution operation acceleration by Hangbo Yang, Nicola Peserico, Shurui Li, Xiaoxuan Ma, Russell L. T. Schwartz, Mostafa Hosseini, Aydin Babakhani, Chee Wei Wong, Puneet Gupta, Volker J. Sorger. SPIE Digital Library or Advanced Photonics, Vol. 7, Issue 5, 056007 (2025) DOI: 10.1117/1.AP.7.5.056007

Both sites offer open access to the paper.

Politecnico di Milano (Polytechnic of Milan)

Caption: The photonic microchip (below) developed for the study on physical neural networks, along with the control electronic chip (above, in yellow). Credit: Politecnico di Milano, DEIB – Department of Electronics, Information and Bioengineering

A September 12, 2025 Politecnico di Milano (Polytechnic of Milan) press release (also on EurekAlert but published September 9, 2025) announces work on a more energy-efficient way to train artificial intelligence, specifically physical neural networks,

Artificial intelligence is now part of our daily lives, bringing with it a pressing need for ever larger, more complex models. However, the demand for power and computing capacity is rising faster than the performance traditional computers can provide.

To overcome these limitations, research is moving towards innovative technologies such as physical neural networks, analogue circuits that directly exploit the laws of physics (properties of light beams, quantum phenomena) to process information. Their potential is at the heart of the study published by the prestigious journal Nature. It is the outcome of a collaboration between several international institutes, including the Politecnico di Milano, the École Polytechnique Fédérale de Lausanne, Stanford University, the University of Cambridge, and the Max Planck Institute.

The article entitled “Training of Physical Neural Networks” discusses the steps of research on training physical neural networks, carried out with the collaboration of Francesco Morichetti, professor at DEIB – Department of Electronics, Information and Bioengineering, and head of the university’s Photonic Devices Lab.

Politecnico di Milano contributed to this study by developing photonic chips for the creation of neural networks, exploiting integrated photonic technologies. Mathematical operations, such as sums and multiplications, can now be performed through light interference mechanisms on silicon microchips barely a few square millimetres in size.
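A standard building block for interference-based arithmetic (an illustration of the principle, not necessarily the exact circuit the Politecnico team fabricated) is the Mach-Zehnder interferometer: two optical couplers around a tunable phase shift, whose 2×2 transfer matrix turns two input field amplitudes into weighted sums. A minimal numerical sketch:

```python
import numpy as np

def mzi(theta, phi):
    """Transfer matrix of an idealized Mach-Zehnder interferometer:
    two 50:50 couplers around an internal phase shift theta, with an
    extra phase phi on one input. Interference mixes the two input
    field amplitudes into weighted sums -- the multiply-accumulate
    at the heart of a photonic matrix multiply."""
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50:50 coupler
    inner = np.diag([np.exp(1j * theta), 1.0])       # tunable "weight"
    outer = np.diag([np.exp(1j * phi), 1.0])
    return bs @ inner @ bs @ outer

x = np.array([0.8, 0.6])            # input field amplitudes (assumed)
y = mzi(theta=np.pi / 3, phi=0.0) @ x
# The matrix is unitary, so optical power is conserved: |y| == |x|
print(np.linalg.norm(y))
```

Meshes of such interferometers, each with its own phase settings, can implement larger matrix-vector products entirely in light, which is why no digitisation step is needed between operations.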

“By eliminating the operations required for the digitisation of information, our photonic chips allow calculations to be carried out with a significant reduction in both energy consumption and processing time,” says Francesco Morichetti. This is a step forward in making artificial intelligence (which relies on extremely energy-intensive data centres) more sustainable.

The study published in Nature addresses the theme of training, namely the phase in which the network learns to perform certain tasks. “With our research within the Department of Electronics, Information and Bioengineering, we have helped develop an ‘in-situ’ training technique for photonic neural networks, i.e. without going through digital models. The procedure is carried out entirely using light signals. Hence, network training will not only be faster, but also more robust and efficient,” adds Morichetti.

The use of photonic chips will allow the development of more sophisticated models for artificial intelligence, or devices capable of processing real-time data directly on site – such as autonomous cars or intelligent sensors integrated into portable devices – without requiring remote processing.

Here’s a link to and a citation for the paper,

Training of physical neural networks by Ali Momeni, Babak Rahmani, Benjamin Scellier, Logan G. Wright, Peter L. McMahon, Clara C. Wanjura, Yuhang Li, Anas Skalli, Natalia G. Berloff, Tatsuhiro Onodera, Ilker Oguz, Francesco Morichetti, Philipp del Hougne, Manuel Le Gallo, Abu Sebastian, Azalia Mirhoseini, Cheng Zhang, Danijela Marković, Daniel Brunner, Christophe Moser, Sylvain Gigan, Florian Marquardt, Aydogan Ozcan, Julie Grollier, Andrea J. Liu, Demetri Psaltis, Andrea Alù, Romain Fleury. Nature volume 645, pages 53–61 (2025) DOI: https://doi.org/10.1038/s41586-025-09384-2 Published: 03 September 2025 Issue date: 04 September 2025

This paper is behind a paywall.

Split some water molecules and save solar and wind (energy) for a future day

Professor Ted Sargent’s research team at the University of Toronto has developed a new technique for storing the energy harvested by solar and wind farms, according to a March 28, 2016 news item on Nanotechnology Now,

We can’t control when the wind blows and when the sun shines, so finding efficient ways to store energy from alternative sources remains an urgent research problem. Now, a group of researchers led by Professor Ted Sargent at the University of Toronto’s Faculty of Applied Science & Engineering may have a solution inspired by nature.

The team has designed the most efficient catalyst for storing energy in chemical form, by splitting water into hydrogen and oxygen, just like plants do during photosynthesis. Oxygen is released harmlessly into the atmosphere, and hydrogen, as H2, can be converted back into energy using hydrogen fuel cells.

Discovering a better way of storing energy from solar and wind farms is “one of the grand challenges in this field,” Ted Sargent says. (Photo above by Megan Rosenbloom via flickr) Courtesy: University of Toronto

A March 24, 2016 University of Toronto news release by Marit Mitchell, which originated the news item, expands on the theme,

“Today on a solar farm or a wind farm, storage is typically provided with batteries. But batteries are expensive, and can typically only store a fixed amount of energy,” says Sargent. “That’s why discovering a more efficient and highly scalable means of storing energy generated by renewables is one of the grand challenges in this field.”

You may have seen the popular high-school science demonstration where the teacher splits water into its component elements, hydrogen and oxygen, by running electricity through it. Today this requires so much electrical input that it’s impractical to store energy this way — too great a proportion of the energy generated is lost in the process of storing it.

This new catalyst facilitates the oxygen-evolution portion of the chemical reaction, making the conversion of H2O into O2 and H2 more energy-efficient than ever before. The new catalyst material is intrinsically more than three times as efficient as the best state-of-the-art catalyst.
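The arithmetic behind those lost proportions is simple: splitting water requires a thermodynamic minimum of 1.23 V, and any extra voltage the catalyst demands (the overpotential) is dissipated as heat rather than stored in the hydrogen. A quick sketch, with illustrative overpotential values that are assumptions rather than figures from the paper:

```python
# Why overpotential matters when storing renewable energy as hydrogen.
# Water splitting needs a thermodynamic minimum of 1.23 V; any extra
# voltage (the overpotential) is lost as heat, so the cell's voltage
# efficiency is E_min / (E_min + overpotential).
E_MIN = 1.23  # V, standard potential for 2 H2O -> 2 H2 + O2

def voltage_efficiency(overpotential_v):
    return E_MIN / (E_MIN + overpotential_v)

# Illustrative (assumed) overpotentials, not measurements:
for eta in (0.6, 0.4, 0.2):
    print(f"overpotential {eta:.1f} V -> "
          f"{voltage_efficiency(eta):.0%} voltage efficiency")
```

Shaving even a few hundred millivolts off the oxygen-evolution overpotential therefore translates directly into more of the wind- or solar-generated electricity surviving the round trip into hydrogen.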

Details are offered in the news release,

The new catalyst is made of abundant and low-cost metals tungsten, iron and cobalt, which are much less expensive than state-of-the-art catalysts based on precious metals. It showed no signs of degradation over more than 500 hours of continuous activity, unlike other efficient but short-lived catalysts. …

“With the aid of theoretical predictions, we became convinced that including tungsten could lead to a better oxygen-evolving catalyst. Unfortunately, prior work did not show how to mix tungsten homogeneously with the active metals such as iron and cobalt,” says one of the study’s lead authors, Dr. Bo Zhang … .

“We invented a new way to distribute the catalyst homogenously in a gel, and as a result built a device that works incredibly efficiently and robustly.”

This research united engineers, chemists, materials scientists, mathematicians, physicists, and computer scientists across three countries. A chief partner in this joint theoretical-experimental study was a leading team of theorists at Stanford University and the SLAC National Accelerator Laboratory under the leadership of Dr. Aleksandra Vojvodic. The international collaboration included researchers at East China University of Science & Technology, Tianjin University, Brookhaven National Laboratory, the Canadian Light Source and the Beijing Synchrotron Radiation Facility.

“The team developed a new materials synthesis strategy to mix multiple metals homogeneously — thereby overcoming the propensity of multi-metal mixtures to separate into distinct phases,” said Jeffrey C. Grossman, the Morton and Claire Goulder and Family Professor in Environmental Systems at Massachusetts Institute of Technology. “This work impressively highlights the power of tightly coupled computational materials science with advanced experimental techniques, and sets a high bar for such a combined approach. It opens new avenues to speed progress in efficient materials for energy conversion and storage.”

“This work demonstrates the utility of using theory to guide the development of improved water-oxidation catalysts for further advances in the field of solar fuels,” said Gary Brudvig, a professor in the Department of Chemistry at Yale University and director of the Yale Energy Sciences Institute.

“The intensive research by the Sargent group at the University of Toronto led to the discovery of oxy-hydroxide materials that exhibit electrochemically induced oxygen evolution at the lowest overpotential and show no degradation,” said University Professor Gabor A. Somorjai of the University of California, Berkeley, a leader in this field. “The authors should be complimented on the combined experimental and theoretical studies that led to this very important finding.”

Here’s a link to and a citation for the paper,

Homogeneously dispersed, multimetal oxygen-evolving catalysts by Bo Zhang, Xueli Zheng, Oleksandr Voznyy, Riccardo Comin, Michal Bajdich, Max García-Melchor, Lili Han, Jixian Xu, Min Liu, Lirong Zheng, F. Pelayo García de Arquer, Cao Thang Dinh, Fengjia Fan, Mingjian Yuan, Emre Yassitepe, Ning Chen, Tom Regier, Pengfei Liu, Yuhang Li, Phil De Luna, Alyf Janmohamed, Huolin L. Xin, Huagui Yang, Aleksandra Vojvodic, Edward H. Sargent. Science 24 Mar 2016 DOI: 10.1126/science.aaf1525

This paper is behind a paywall.