
Analogue memristor for next-generation brain-mimicking (neuromorphic) computing

This research into an analogue memristor comes from The Korea Institute of Science and Technology (KIST), according to a September 20, 2022 news item on Nanowerk (Note: A link has been removed),

Neuromorphic computing technology, which mimics the human brain, has emerged to overcome the excessive power consumption of the existing von Neumann computing method. Implementing a semiconductor device that transmits information the way the brain does, with signals passed between neurons whenever a neuron generates a spike, requires a high-performance analog artificial synapse device capable of expressing a wide range of synaptic connection strengths.

However, in the conventional resistance-switching memory devices widely used as artificial synapses, the electric field intensifies as the conductive filament grows and the resistance changes, creating a feedback effect that drives rapid filament growth. In filament-type devices it is therefore difficult to achieve substantial plasticity while maintaining analog (gradual) resistance variation.

A team at the Korea Institute of Science and Technology (KIST) Center for Neuromorphic Engineering, led by Dr. YeonJoo Jeong, has addressed limitations in analog synaptic characteristics, plasticity, and information retention, which are chronic obstacles for memristors (neuromorphic semiconductor devices). The team announced the development of an artificial synaptic semiconductor device capable of highly reliable neuromorphic computing (Nature Communications, “Cluster-type analogue memristor by engineering redox dynamics for high-performance neuromorphic computing”).

Caption: Concept image of the article. Credit: Korea Institute of Science and Technology (KIST)

A September 20, 2022 (Korea) National Research Council of Science & Technology press release on EurekAlert, which originated the news item, delves further into the research,

The KIST research team fine-tuned the redox properties of the active electrode ions to address the small synaptic plasticity that limits existing neuromorphic semiconductor devices. They doped the synaptic device with various transition metals to control the reduction probability of the active electrode ions, and found that a high ion reduction probability is a critical variable in developing high-performance artificial synaptic devices.

The research team therefore introduced titanium, a transition metal with a high ion reduction probability, into an existing artificial synaptic device. This preserves the synapse’s analog characteristics while bringing the device’s plasticity to the level of a biological brain synapse, approximately a five-fold difference between the high and low resistance states. With this approach, they developed a high-performance neuromorphic semiconductor that is approximately 50 times more efficient.

Additionally, because the doped titanium readily forms an alloy, information retention increased by up to 63 times compared with the existing artificial synaptic device. Brain functions, including long-term potentiation and long-term depression, could also be simulated more precisely.
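The long-term potentiation (LTP) and long-term depression (LTD) behaviour mentioned above is often modelled in simulation as a conductance that is nudged up or down a little with each voltage pulse. Here is a minimal Python sketch of that idea; the conductance window (roughly the five-fold high/low ratio quoted earlier) and the per-pulse step size are my own illustrative assumptions, not the KIST device parameters.

```python
g_min, g_max = 1.0, 5.0   # assumed low/high conductance states (arbitrary units, ~5x window)
step_fraction = 0.08      # assumed fraction of the remaining range moved per pulse

def potentiate(g):
    """One LTP pulse: conductance moves a fraction of the way toward g_max."""
    return g + step_fraction * (g_max - g)

def depress(g):
    """One LTD pulse: conductance moves a fraction of the way toward g_min."""
    return g - step_fraction * (g - g_min)

g, trace = g_min, []
for _ in range(50):       # 50 potentiation pulses
    g = potentiate(g)
    trace.append(g)
for _ in range(50):       # followed by 50 depression pulses
    g = depress(g)
    trace.append(g)
print(f"peak conductance {max(trace):.2f}, final {trace[-1]:.2f} (a.u.)")
```

The gradual, saturating curve this produces is what “analog (gradual) resistance variation” looks like in practice; an abrupt filament-type device would jump between the two extremes instead.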

The team implemented an artificial neural network learning pattern with the developed artificial synaptic device and attempted artificial-intelligence image recognition learning. The error rate was reduced by more than 60% compared with the existing artificial synaptic device, and handwritten-digit (MNIST) image recognition accuracy increased by more than 69%. Through this improved artificial synaptic device, the research team confirmed the feasibility of a high-performance neuromorphic computing system.
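To get an intuition for why a wider, more gradual analog range helps recognition accuracy, here is a minimal Python sketch (not the authors' simulation): a linear classifier is trained on scikit-learn's small handwritten-digits dataset as a stand-in for MNIST, and its weights are then snapped to a limited number of conductance levels, the way an idealized analog synapse would have to store them. The level counts are arbitrary choices of mine.

```python
import numpy as np
from sklearn.datasets import load_digits            # small stand-in for MNIST
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X / 16.0, y, random_state=0)

clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
print("full-precision accuracy:", round(clf.score(X_te, y_te), 3))

def quantize(w, n_levels):
    """Snap each weight to one of n_levels evenly spaced 'conductance' states."""
    lo, hi = w.min(), w.max()
    step = (hi - lo) / (n_levels - 1)
    return lo + np.round((w - lo) / step) * step

for n_levels in (4, 16, 64):                         # few vs. many analog states
    W = quantize(clf.coef_, n_levels)
    pred = clf.classes_[np.argmax(X_te @ W.T + clf.intercept_, axis=1)]
    print(f"{n_levels:3d} levels accuracy:", round(float((pred == y_te).mean()), 3))
```

With only a handful of distinguishable states the accuracy suffers; with more states it approaches the full-precision result, which is broadly the effect a larger, more gradual resistance range is meant to deliver.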

Dr. Jeong of KIST stated, “This study drastically improved the synaptic range of motion and information preservation, which were the greatest technical barriers of existing synaptic mimics. In the developed artificial synapse device, the device’s analog operation area for expressing the synapse’s various connection strengths has been maximized, so the performance of brain-simulation-based artificial intelligence computing will be improved.” He added, “In the follow-up research, we will manufacture a neuromorphic semiconductor chip based on the developed artificial synapse device to realize a high-performance artificial intelligence system, thereby further enhancing competitiveness in the domestic system and artificial intelligence semiconductor field.”

Here’s a link to and a citation for the paper,

Cluster-type analogue memristor by engineering redox dynamics for high-performance neuromorphic computing by Jaehyun Kang, Taeyoon Kim, Suman Hu, Jaewook Kim, Joon Young Kwak, Jongkil Park, Jong Keuk Park, Inho Kim, Suyoun Lee, Sangbum Kim & YeonJoo Jeong. Nature Communications volume 13, Article number: 4040 (2022) DOI: https://doi.org/10.1038/s41467-022-31804-4 Published: 12 July 2022

This paper is open access.

Reconfiguring a LEGO-like AI chip with light

MIT engineers have created a reconfigurable AI chip that comprises alternating layers of sensing and processing elements that can communicate with each other. Credit: Figure courtesy of the researchers and edited by MIT News

This image certainly challenges any ideas I have about what Lego looks like. It seems they see things differently at the Massachusetts Institute of Technology (MIT). From a June 13, 2022 MIT news release (also on EurekAlert),

Imagine a more sustainable future, where cellphones, smartwatches, and other wearable devices don’t have to be shelved or discarded for a newer model. Instead, they could be upgraded with the latest sensors and processors that would snap onto a device’s internal chip — like LEGO bricks incorporated into an existing build. Such reconfigurable chipware could keep devices up to date while reducing our electronic waste. 

Now MIT engineers have taken a step toward that modular vision with a LEGO-like design for a stackable, reconfigurable artificial intelligence chip.

The design comprises alternating layers of sensing and processing elements, along with light-emitting diodes (LEDs) that allow the chip’s layers to communicate optically. Other modular chip designs employ conventional wiring to relay signals between layers; such intricate connections are difficult if not impossible to sever and rewire, so those stackable designs are not reconfigurable.

The MIT design uses light, rather than physical wires, to transmit information through the chip. The chip can therefore be reconfigured, with layers that can be swapped out or stacked on, for instance to add new sensors or updated processors.

“You can add as many computing layers and sensors as you want, such as for light, pressure, and even smell,” says MIT postdoc Jihoon Kang. “We call this a LEGO-like reconfigurable AI chip because it has unlimited expandability depending on the combination of layers.”

The researchers are eager to apply the design to edge computing devices — self-sufficient sensors and other electronics that work independently from any central or distributed resources such as supercomputers or cloud-based computing.

“As we enter the era of the internet of things based on sensor networks, demand for multifunctioning edge-computing devices will expand dramatically,” says Jeehwan Kim, associate professor of mechanical engineering at MIT. “Our proposed hardware architecture will provide high versatility of edge computing in the future.”

The team’s results are published today in Nature Electronics. In addition to Kim and Kang, MIT authors include co-first authors Chanyeol Choi, Hyunseok Kim, and Min-Kyu Song, and contributing authors Hanwool Yeon, Celesta Chang, Jun Min Suh, Jiho Shin, Kuangye Lu, Bo-In Park, Yeongin Kim, Han Eol Lee, Doyoon Lee, Subeen Pang, Sang-Hoon Bae, Hyun S. Kum, and Peng Lin, along with collaborators from Harvard University, Tsinghua University, Zhejiang University, and elsewhere.

Lighting the way

The team’s design is currently configured to carry out basic image-recognition tasks. It does so via a layering of image sensors, LEDs, and processors made from artificial synapses — arrays of memory resistors, or “memristors,” that the team previously developed, which together function as a physical neural network, or “brain-on-a-chip.” Each array can be trained to process and classify signals directly on a chip, without the need for external software or an Internet connection.
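A memristor array can act as a physical neural-network layer because it performs a matrix-vector multiplication directly in the analog domain: input voltages drive the rows, each cross-point conductance plays the role of a weight (Ohm's law, I = G × V), and each column wire sums the resulting currents (Kirchhoff's current law). Here is an idealized Python sketch of that operation; the array size and conductance values are invented for illustration and are not MIT's device parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rows, n_cols = 64, 3                          # e.g. an 8x8 image feeding 3 outputs
G = rng.uniform(1e-6, 5e-6, (n_rows, n_cols))   # cross-point conductances (siemens)

def crossbar(v_in, G):
    """Column current j = sum_i v_in[i] * G[i, j]: one analog dot product per column."""
    return v_in @ G

v_in = rng.uniform(0.0, 0.2, n_rows)            # row voltages encoding pixel values
print("column currents (A):", crossbar(v_in, G))
```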

In their new chip design, the researchers paired image sensors with artificial synapse arrays, each of which they trained to recognize certain letters — in this case, M, I, and T. While a conventional approach would be to relay a sensor’s signals to a processor via physical wires, the team instead fabricated an optical system between each sensor and artificial synapse array to enable communication between the layers, without requiring a physical connection. 

“Other chips are physically wired through metal, which makes them hard to rewire and redesign, so you’d need to make a new chip if you wanted to add any new function,” says MIT postdoc Hyunseok Kim. “We replaced that physical wire connection with an optical communication system, which gives us the freedom to stack and add chips the way we want.”

The team’s optical communication system consists of paired photodetectors and LEDs, each patterned with tiny pixels. The photodetectors make up an image sensor that receives data, and the LEDs transmit data to the next layer. As a signal (for instance, an image of a letter) reaches the image sensor, the image’s light pattern encodes a certain configuration of LED pixels, which in turn stimulates another layer of photodetectors, along with an artificial synapse array, which classifies the signal based on the pattern and strength of the incoming LED light.
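In simulation terms, one optical hop between layers is just a chain of scaling steps: pixel values become photodetector signals, those signals set LED brightnesses, and the next layer's photodetectors turn the received light back into an electrical input for its synapse array. Here is a rough Python sketch of that chain; every constant (responsivity, LED efficiency, coupling loss) is an invented placeholder rather than a measured value from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.integers(0, 2, 64).astype(float)    # 8x8 binary letter pattern

responsivity = 0.5     # assumed photodetector output per unit light
led_efficiency = 0.8   # assumed light emitted per unit drive signal
coupling_loss = 0.9    # assumed fraction of LED light reaching the next layer

sensor_out = responsivity * image                          # layer N photodetectors
led_light = led_efficiency * sensor_out                    # layer N LEDs
next_layer_in = responsivity * coupling_loss * led_light   # layer N+1 photodetectors

G = rng.uniform(1e-6, 5e-6, (64, 3))            # next layer's synapse array
print("classifier input currents (A):", next_layer_in @ G)
```

Swapping a layer then amounts to changing what produces or consumes the light, which is why no rewiring is needed.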

Stacking up

The team fabricated a single chip, with a computing core measuring about 4 square millimeters, or about the size of a piece of confetti. The chip is stacked with three image recognition “blocks,” each comprising an image sensor, optical communication layer, and artificial synapse array for classifying one of three letters, M, I, or T. They then shone a pixellated image of random letters onto the chip and measured the electrical current that each neural network array produced in response. (The larger the current, the larger the chance that the image is indeed the letter that the particular array is trained to recognize.)
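That readout is essentially a winner-take-all comparison: each block produces one current for “its” letter, and whichever block responds most strongly gives the prediction. Here is a toy Python sketch, with random templates standing in for the trained synapse arrays, so nothing here reflects the real chip's weights.

```python
import numpy as np

rng = np.random.default_rng(2)
letters = ["M", "I", "T"]

# One random 8x8 binary template per block, standing in for a trained array.
templates = {k: rng.integers(0, 2, 64).astype(float) for k in letters}

def block_current(image, template):
    """Stand-in for the measured array current: overlap between image and template."""
    return float(image @ template)

test_image = 0.9 * templates["I"] + rng.normal(0.0, 0.1, 64)   # a noisy "I"
currents = {k: block_current(test_image, t) for k, t in templates.items()}
print(currents, "->", max(currents, key=currents.get))
```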

The team found that the chip correctly classified clear images of each letter, but it was less able to distinguish between blurry images, for instance between I and T. However, the researchers were able to quickly swap out the chip’s processing layer for a better “denoising” processor, and found the chip then accurately identified the images.

“We showed stackability, replaceability, and the ability to insert a new function into the chip,” notes MIT postdoc Min-Kyu Song.

The researchers plan to add more sensing and processing capabilities to the chip, and they envision the applications to be boundless.

“We can add layers to a cellphone’s camera so it could recognize more complex images, or make these into healthcare monitors that can be embedded in wearable electronic skin,” offers Choi, who along with Kim previously developed a “smart” skin for monitoring vital signs.

Another idea, he adds, is for modular chips, built into electronics, that consumers can choose to build up with the latest sensor and processor “bricks.”

“We can make a general chip platform, and each layer could be sold separately like a video game,” Jeehwan Kim says. “We could make different types of neural networks, like for image or voice recognition, and let the customer choose what they want, and add to an existing chip like a LEGO.”

This research was supported, in part, by the Ministry of Trade, Industry, and Energy (MOTIE) from South Korea; the Korea Institute of Science and Technology (KIST); and the Samsung Global Research Outreach Program.

Here’s a link to and a citation for the paper,

Reconfigurable heterogeneous integration using stackable chips with embedded artificial intelligence by Chanyeol Choi, Hyunseok Kim, Ji-Hoon Kang, Min-Kyu Song, Hanwool Yeon, Celesta S. Chang, Jun Min Suh, Jiho Shin, Kuangye Lu, Bo-In Park, Yeongin Kim, Han Eol Lee, Doyoon Lee, Jaeyong Lee, Ikbeom Jang, Subeen Pang, Kanghyun Ryu, Sang-Hoon Bae, Yifan Nie, Hyun S. Kum, Min-Chul Park, Suyoun Lee, Hyung-Jun Kim, Huaqiang Wu, Peng Lin & Jeehwan Kim. Nature Electronics volume 5, pages 386–393 (2022) DOI: https://doi.org/10.1038/s41928-022-00778-y Published: 13 June 2022 Issue Date: June 2022

This paper is behind a paywall.