Tag Archives: Pusan National University

Golden eyes (not a James Bond movie): how gold nanoparticles may one day help to restore people’s vision

Caption: In a study published in the journal ACS Nano and supported by the National Institutes of Health, the research team showed that nanoparticles injected into the retina can successfully stimulate the visual system and restore vision in mice with retinal disorders. The findings suggest that a new type of visual prosthesis system in which nanoparticles, used in combination with a small laser device worn in a pair of glasses or goggles, might one day help people with retinal disorders to see again. Credit: Jiarui Nie / Brown University

An April 16, 2025 news item on ScienceDaily announces work on a retinal prosthesis that could one day restore vision,

A new study by Brown University researchers suggests that gold nanoparticles — microscopic bits of gold thousands of times thinner than a human hair — might one day be used to help restore vision in people with macular degeneration and other retinal disorders.

In a study published in the journal ACS [American Chemical Society] Nano and supported by the [US] National Institutes of Health, the research team showed that nanoparticles injected into the retina can successfully stimulate the visual system and restore vision in mice with retinal disorders. The findings suggest that a new type of visual prosthesis system in which nanoparticles, used in combination with a small laser device worn in a pair of glasses or goggles, might one day help people with retinal disorders to see again.

An April 16, 2025 Brown University news release (also on EurekAlert), which originated the news item, provides more technical detail about research into a retinal prosthetic that does not require a brain implant or genetic modification. Note: Links have been removed,

“This is a new type of retinal prosthesis that has the potential to restore vision lost to retinal degeneration without requiring any kind of complicated surgery or genetic modification,” said Jiarui Nie, a postdoctoral researcher at the [US] National Institutes of Health who led the research while completing her Ph.D. at Brown. “We believe this technique could potentially transform treatment paradigms for retinal degenerative conditions.” 

Nie performed the work while working in the lab of Jonghwan Lee, an associate professor in Brown’s School of Engineering and a faculty affiliate at Brown’s Carney Institute for Brain Science, who oversaw the work and served as the study’s senior author. 

Retinal disorders like macular degeneration and retinitis pigmentosa affect millions of people in the U.S. and around the world. These conditions damage light-sensitive cells in the retina called photoreceptors — the “rods” and “cones” that convert light into tiny electric pulses. Those pulses stimulate other types of cells further up the visual chain called bipolar and ganglion cells, which process the photoreceptor signals and send them along to the brain. 

This new approach uses nanoparticles injected directly into the retina to bypass damaged photoreceptors. When infrared light is focused on the nanoparticles, they generate a tiny amount of heat that activates bipolar and ganglion cells in much the same way that photoreceptor pulses do. Because disorders like macular degeneration affect mostly photoreceptors while leaving bipolar and ganglion cells intact, the strategy has the potential to restore lost vision. 

In this new study, the research team tested the nanoparticle approach in mouse retinas and in living mice with retinal disorders. After injecting a liquid nanoparticle solution, the researchers used patterned near-infrared laser light to project shapes onto the retinas. Using a calcium signal to detect cellular activity, the team confirmed that the nanoparticles were exciting bipolar and ganglion cells in patterns that matched the shapes projected by the laser.

The experiments showed that neither the nanoparticle solution nor the laser stimulation caused detectable adverse side effects, as indicated by metabolic markers for inflammation and toxicity. Using probes, the researchers confirmed that laser stimulation of the nanoparticles caused increased activity in the visual cortices of the mice — an indication that previously absent visual signals were being transmitted and processed by the brain. That, the researchers say, is a sign that vision had been at least partially restored, a good sign for potentially translating a similar technology to humans. 

For human use, the researchers envision a system that combines the nanoparticles with a laser system mounted in a pair of glasses or goggles. Cameras in the goggles would gather image data from the outside world and use it to drive the patterning of an infrared laser. The laser pulses would then stimulate the nanoparticles in people’s retinas, enabling them to see. 
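As a very rough sketch of how such a goggle-mounted system might map camera frames onto laser pulse patterns (the grid size, thresholding scheme, and function names below are my own illustrative assumptions, not details from the study):

```python
import numpy as np

def frame_to_laser_pattern(frame, grid=(64, 64), threshold=0.5):
    """Toy mapping from a grayscale camera frame to an on/off infrared
    laser pattern. Purely illustrative; the paper does not specify how a
    goggle-mounted system would encode images."""
    h, w = frame.shape
    gh, gw = grid
    # Downsample by block averaging to the laser's addressable grid.
    blocks = frame[: h - h % gh, : w - w % gw]
    blocks = blocks.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    # Bright regions become laser pulses that would heat the nanoparticles
    # and, per the study, excite the underlying bipolar/ganglion cells.
    return blocks > threshold

# Example: a bright square on a dark background.
frame = np.zeros((128, 128))
frame[32:96, 32:96] = 1.0
pattern = frame_to_laser_pattern(frame)
```

In a real device the encoding would be far more sophisticated; this only illustrates the frame-to-pattern idea.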

The approach is similar to one that was approved by the Food and Drug Administration for human use a few years ago. The older approach combined a camera system with a small electrode array that was surgically implanted in the eye. The nanoparticle approach has several key advantages, according to Nie.

For starters, it’s far less invasive. As opposed to surgery, “an intravitreal injection is one of the simplest procedures in ophthalmology,” Nie said. 

There are functional advantages as well. The resolution of the previous approach was limited by the size of the electrode array — about 60 square pixels. Because the nanoparticle solution covers the whole retina, the new approach could potentially cover someone’s full field of vision. And because the nanoparticles respond to near-infrared light as opposed to visual light, the system doesn’t necessarily interfere with any residual vision a person may retain.   
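For a rough sense of scale, here is a back-of-envelope comparison. The 60-pixel figure comes from the article; the laser pitch and retinal patch size are purely illustrative assumptions, not measurements from the paper:

```python
# Electrode-array resolution cited in the article ("about 60 square pixels").
electrode_pixels = 60

# Hypothetical laser-addressing parameters (illustrative assumptions only):
pitch_um = 100          # assumed spacing between addressable laser spots
patch_mm = 5            # assumed side length of the treated retinal patch

sites_per_side = patch_mm * 1000 // pitch_um   # 5000 / 100 = 50
addressable_sites = sites_per_side ** 2        # 50 x 50 = 2500

improvement = addressable_sites / electrode_pixels
print(f"~{improvement:.0f}x more stimulation sites")
```

Even with these made-up numbers, the point stands: a solution that blankets the retina is not bound by a fixed electrode count.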

More work needs to be done before the approach can be tried in a clinical setting, Nie said, but this early research suggests that it’s possible.

“We showed that the nanoparticles can stay in the retina for months with no major toxicity,” Nie said of the research. “And we showed that they can successfully stimulate the visual system. That’s very encouraging for future applications.”

The research was funded by the National Institutes of Health’s National Eye Institute (R01EY030569), the China Scholarship Council scholarship, the Saudi Arabian Cultural Mission scholarship, and South Korea’s Alchemist Project Program (RS-2024-00422269). Co-authors include Professor Kyungsik Eom from Pusan National University, Brown Professor Tao Liu, and Brown students Hafithe M. Al Ghosain, Alexander Neifert, Aaron Cherian, Gaia Marie Gerbaka, and Kristine Y. Ma.

Here’s a link to and a citation for the paper,

Intravitreally Injected Plasmonic Nanorods Activate Bipolar Cells with Patterned Near-Infrared Laser Projection by Jiarui Nie, Kyungsik Eom, Hafithe M. AlGhosain, Alexander Neifert, Aaron Cherian, Gaia Marie Gerbaka, Kristine Y. Ma, Tao Liu, Jonghwan Lee. ACS Nano 2025, 19, 12, 11823–11840 DOI: https://doi.org/10.1021/acsnano.4c14061 Published: March 20, 2025 Copyright © 2025 American Chemical Society

This paper is behind a paywall.

Pusan National University researchers explore artificial intelligence (AI) for designing fashion

Caption: Researchers from Pusan National University in Korea have conducted an in-depth study exploring the use of collaborative AI models to create new designs and the engagement of complex systems in the design process. This encourages human-AI collaborative designing, which increases efficiency and improves sustainability. Credit: Yoon Kyung Lee from Pusan National University

A Korean researcher is exploring what a collaborative relationship between fashion designers and artificial intelligence (AI) might look like, according to a January 6, 2023 Pusan National University press release (also on EurekAlert but published January 12, 2023),

The use of artificial intelligence (AI) in the fashion industry has grown significantly in recent years. AI is being used for tasks such as personalizing fashion recommendations for customers, optimizing supply chain management, automating processes, and improving sustainability to reduce waste. However, creative processes in fashion design remain mostly human driven, and little research exists on using AI for fashion design itself. Moreover, such studies are generally conducted with data scientists, who build the AI platforms and handle the technological side of the process; the other side of the equation, the designers themselves, are rarely brought into the research.

To investigate the practical applicability of AI models to implement creative designs and work with human designers, Assistant Professor Yoon Kyung Lee from Pusan National University in Korea conducted an in-depth study. Her study was made available online in Thinking Skills and Creativity on September 15, 2022, and subsequently published in Volume 46 of the journal in December 2022.

“At a time when AI is so deeply ingrained into our lives, this study started instead with considering what a human can do better than AI,” says Prof. Lee, explaining her motivation behind the study. “Could there be an effective collaboration between humans and AI for the purpose of creative design?”

Prof. Lee started with generating new textile designs using deep convolution generative adversarial networks (DC-GANs) and cycle-GANs. The outputs from these models were compared to similar designs produced by design students.

The comparison revealed that though the designs produced by both were similar, the biggest difference was the uniqueness and originality of the human designs, which came from the designers’ personal experiences. However, using AI for repetitive tasks can improve designers’ efficiency and free up their time to focus on more demanding creative work. AI-generated designs can also serve as a learning tool for people who lack fashion expertise but want to explore their creativity; these people can create designs with assistance from AI.

Thus, Prof. Lee proposes a human-AI collaborative network that integrates GANs with human creativity to produce designs. She also defined and studied the various elements of the complex system involved in human-AI collaborative design, and went on to establish a human-AI model in which the designer collaborates with AI to create novel design ideas. The model is built in such a way that if designers share their creative processes and ideas with others, the system can interconnect and evolve, thereby improving its designs.
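As a toy illustration of the propose-and-curate loop described above (this is not Prof. Lee’s actual model: the “generator” here is random perturbation standing in for a trained GAN, and the “designer” is a simulated preference vector):

```python
import numpy as np

rng = np.random.default_rng(0)

def propose_variations(seed_design, n=8, noise=0.2):
    # Stand-in for a trained GAN generator: perturb the seed design vector.
    variations = seed_design + noise * rng.standard_normal((n, seed_design.size))
    # Keep the seed itself as a candidate so the designer can reject all proposals.
    return np.vstack([seed_design[None, :], variations])

def designer_selects(candidates, taste):
    # Stand-in for the human designer: pick the candidate closest to a
    # preference vector the AI never sees directly.
    distances = np.linalg.norm(candidates - taste, axis=1)
    return candidates[np.argmin(distances)]

# The AI proposes, the designer curates, and the loop drifts toward
# designs reflecting the (simulated) designer's intent.
taste = np.ones(16)
design = np.zeros(16)
for _ in range(50):
    design = designer_selects(propose_variations(design), taste)
```

The design choice worth noting is that neither side works alone: the generator supplies variation, while selection (the human contribution) supplies direction.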

The fashion industry can leverage this to foresee changes in fashion and offer recommendations and co-creation services. Setting objectives, variables, and limits is part of the designer’s job in the human-AI collaborative design environment. Therefore, their work should go beyond the visual aspect alone and instead cover a variety of disciplines.

“In the future, everybody will be able to be a creator or designer with the help of AI models. So far, only professional fashion designers have been able to design and showcase clothes. But in the future, it will be possible for anyone to design the clothes they want and showcase their creativity,” concludes Prof. Lee.

We hope her dreams are very close to realization!

This is the first time I’ve seen a press release where the writer wishes well for the researcher. Nice touch!

Here’s a link to and a citation for the paper,

How complex systems get engaged in fashion design creation: Using artificial intelligence by Yoon Kyung Lee. Thinking Skills and Creativity Volume 46, December 2022, 101137 DOI: https://doi.org/10.1016/j.tsc.2022.101137

This paper is behind a paywall.

Is that a window or an LCD (liquid crystal display) screen?

I’m not sure how I feel about the potential advent of yet another screen in my life. From an April 28, 2015 news item on Nanowerk,

The secret desire of urban daydreamers staring out their office windows at the sad brick walls of the building opposite them may soon be answered thanks to transparent light shutters developed by a group of researchers at Pusan National University in South Korea.

A novel liquid crystal technology allows displays to flip between transparent and opaque states — hypothetically letting you switch your view in less than a millisecond from urban decay to the Chesapeake Bay.

An April 28, 2015 American Institute of Physics (AIP) news release (also on EurekAlert) by John Arnst, which originated the news item, expands on the theme,

The idea of transparent displays has been around for a few years, but actually creating them from conventional organic light-emitting diodes has proven difficult.

“The transparent part is continuously open to the background,” said Tae-Hoon Yoon, the group’s primary investigator. “As a result, they exhibit poor visibility.”

Light shutters, which use liquid crystals that can be switched between transparent and opaque states by scattering or absorbing the incident light, are one proposed solution to these obstacles, but they come with their own set of problems.

While they do increase the visibility of the displays, light shutters based on scattering can’t provide black color, and light shutters based on absorption can’t completely block the background. They aren’t particularly energy-efficient either, requiring a continuous flow of power in order to maintain their transparent ‘window’ state when not in use. As a final nail in the coffin, they suffer from a frustrating response time to power on and off.

Tae-Hoon Yoon’s group’s new design remedies all of these problems by using scattering and absorption simultaneously. To do this, Yoon’s group fabricated polymer-networked liquid crystal cells doped with dichroic dyes.

In their design, the polymer network structure scatters incident (oncoming) light, which is then absorbed by the dichroic dyes. The light shutters use a parallel pattern of electrodes located above and below the vertically aligned liquid crystals.

When an electric field is applied through the electrodes, the axes of the dye molecules are aligned with that of oncoming light, allowing them to absorb and scatter it. This effectively negates the light coming at the screen from its backside, rendering the display opaque – and the screen’s images fully visible.

“The incident light is absorbed, but we can still see through the background with reduced light intensity,” Yoon said.
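The combined effect of scattering and absorption can be caricatured with a Beer-Lambert-style attenuation model; the coefficients below are invented for illustration and are not measurements from the paper:

```python
import math

def transmittance(absorption, scattering, thickness):
    """Toy attenuation model: fraction of light transmitted when both
    absorption and scattering remove light along the path. Illustrative
    only; the paper characterizes real dye-doped PNLC cells."""
    return math.exp(-(absorption + scattering) * thickness)

# Field off (transparent state): both mechanisms are weak.
t_transparent = transmittance(absorption=0.1, scattering=0.1, thickness=1.0)

# Field on (opaque state): the dyes absorb and the polymer network
# scatters, so the two loss terms add and transmission collapses.
t_opaque = transmittance(absorption=2.0, scattering=3.0, thickness=1.0)
```

The point of the caricature is that the two loss mechanisms add in the exponent, which is why combining them blocks the background far more completely than either alone.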

In its resting state, this setup lets light pass through, so power need only be applied when you want to switch from transparent window view to opaque monitor view. And because the display’s on-off switch is an electric field, it has a response time of less than one millisecond – far faster than that of contemporary light shutters, which rely on the slow relaxation of liquid crystals for their off-switch.

Future work for Yoon’s group includes respectively increasing and decreasing the device’s transmittance at the transparent and opaque states, as well as developing a bi-stable light shutter which consumes power only when states are being switched, rather than maintained.

Here’s a link to and a citation for the paper,

Fast-switching initially-transparent liquid crystal light shutter with crossed patterned electrodes by Joon Heo, Jae-Won Huh, and Tae-Hoon Yoon. AIP Advances 5, 047118 (2015) DOI: https://doi.org/10.1063/1.4918277 Published April 28, 2015

This paper is open access.

The researchers have provided an image illustrating the window and the screen,

 Caption: A dye-doped PNLC cell in the transparent and opaque states, placed on a printed sheet of paper. In the transparent state, the clear background image can be seen because of the high transmittance of this cell. In the opaque state, black color is provided and the background image is completely blocked, because the incident light is simultaneously scattered and absorbed. Credit: T.-H.Yoon/Pusan Natl Univ

