Tag Archives: Bing Chen

A nontraditional artificial synaptic device and roadmap for Chinese research into neuromorphic devices

A November 9, 2022 Science China Press press release on EurekAlert announces a new approach to developing neuromorphic (brainlike) devices,

Neuromorphic computing is an information-processing model that aims to match the efficiency, multifunctionality, and flexibility of the human brain. Artificial synaptic devices, typified by memristors, have been used extensively in neuromorphic computing, and different types of neural networks have been built from them. However, fixing and redeploying the weights stored in traditional artificial synaptic devices is time-consuming and laborious. Moreover, synaptic strength is typically reconfigured through software programming and changes to pulse timing, which can result in low efficiency and high energy consumption in neuromorphic computing applications.

In a research article published in the Beijing-based National Science Review, Prof. Lili Wang from the Chinese Academy of Sciences and her colleagues present a novel hardware neural network based on a tunable flexible MXene energy storage (FMES) system. The system comprises flexible postsynaptic electrodes and MXene nanosheets, which are connected to the presynaptic electrodes through electrolytes. Potential changes arising from ion migration and adsorption in the supercapacitor can simulate information transmission across the synaptic gap, and the voltage of the FMES system represents the synaptic weight of the connection between two neurons.

Researchers explored changes in paired-pulse facilitation under different resistance levels to investigate how resistance affects the advanced learning and memory behavior of the FMES artificial synaptic system. The results revealed that the larger the standard deviation, the stronger the system's memory capacity. In other words, as resistance and stimulation time increase, the memory capacity of the FMES artificial synaptic system gradually improves. The system can therefore control the accumulation and dissipation of ions by adjusting its internal resistance, without changing the external stimulus, which is expected to enable coupling of sensing signals and stored weights.

The FMES system can be used to build neural networks and carry out various neuromorphic computing tasks, achieving 95% recognition accuracy on a handwritten-digit dataset. Additionally, the FMES system can mimic the adaptivity of the human brain to achieve adaptive recognition of similar target datasets. Following the training process, the adaptive recognition accuracy can reach approximately 80%, avoiding the time and energy cost of recalculation.

“In the future, based on this research, different types of sensors can be integrated on the chip to further realize a multimodal sensing-computing integrated architecture,” Prof. Lili Wang stated. “The device can perform low-energy calculations, and is expected to solve the problems of high write noise, nonlinear difference, and diffusion under zero bias voltage in certain neuromorphic systems.”

Here’s a link to and a citation for the paper,

Neuromorphic-computing-based adaptive learning using ion dynamics in flexible energy storage devices by Shufang Zhao, Wenhao Ran, Zheng Lou, Linlin Li, Swapnadeep Poddar, Lili Wang, Zhiyong Fan, Guozhen Shen. National Science Review, Volume 9, Issue 11, November 2022, nwac158. DOI: https://doi.org/10.1093/nsr/nwac158 Published: 13 August 2022

This paper is open access.

The future of (or roadmap for) Chinese research on neuromorphic engineering

While I was trying (unsuccessfully) to find a copy of the press release on the issuing agency’s website, I found this paper,

2022 roadmap on neuromorphic devices & applications research in China by Qing Wan, Changjin Wan, Huaqiang Wu, Yuchao Yang, Xiaohe Huang, Peng Zhou, Lin Chen, Tian-Yu Wang, Yi Li, Kanhao Xue, Yuhui He, Xiangshui Miao, Xi Li, Chenchen Xie, Houpeng Chen, Z. T. Song, Hong Wang, Yue Hao, Junyao Zhang, Jia Huang, Zheng Yu Ren, Li Qiang Zhu, Jianyu Du, Chen Ge, Yang Liu, Guanglong Ding, Ye Zhou, Su-Ting Han, Guosheng Wang, Xiao Yu, Bing Chen, Zhufei Chu, Lunyao Wang, Yinshui Xia, Chen Mu, Feng Lin, Chixiao Chen, Bojun Cheng, Yannan Xing, Weitao Zeng, Hong Chen, Lei Yu, Giacomo Indiveri and Ning Qiao. Neuromorphic Computing and Engineering. DOI: 10.1088/2634-4386/ac7a5a *Accepted Manuscript online 20 June 2022 • © 2022 The Author(s). Published by IOP Publishing Ltd

The paper is open access.

*From the IOP’s Definitions of article versions: Accepted Manuscript is ‘the version of the article accepted for publication including all changes made as a result of the peer review process, and which may also include the addition to the article by IOP of a header, an article ID, a cover sheet and/or an ‘Accepted Manuscript’ watermark, but excluding any other editing, typesetting or other changes made by IOP and/or its licensors’.*

This is neither the published version nor the version of record.

Bringing memristors to the masses and cutting down on energy use

One of my earliest posts featuring memristors (May 9, 2008) focused on their potential for energy savings but since then most of my postings feature research into their application in the field of neuromorphic (brainlike) computing. (For a description and abbreviated history of the memristor go to this page on my Nanotech Mysteries Wiki.)

In a sense this July 30, 2018 news item on Nanowerk is a return to the beginning,

A new way of arranging advanced computer components called memristors on a chip could enable them to be used for general computing, which could cut energy consumption by a factor of 100.

This would improve performance in low power environments such as smartphones or make for more efficient supercomputers, says a University of Michigan researcher.

“Historically, the semiconductor industry has improved performance by making devices faster. But although the processors and memories are very fast, they can’t be efficient because they have to wait for data to come in and out,” said Wei Lu, U-M professor of electrical and computer engineering and co-founder of memristor startup Crossbar Inc.

Memristors might be the answer. Named with a portmanteau of “memory” and “resistor,” they can be programmed to have different resistance states, meaning they store information as resistance levels. These circuit elements enable memory and processing in the same device, cutting out the data-transfer bottleneck experienced by conventional computers, in which the memory is separate from the processor.

A July 30, 2018 University of Michigan news release (also on EurekAlert), which originated the news item, expands on the theme,

… unlike ordinary bits, which are 1 or 0, memristors can have resistances that are on a continuum. Some applications, such as computing that mimics the brain (neuromorphic), take advantage of the analog nature of memristors. But for ordinary computing, trying to differentiate among small variations in the current passing through a memristor device is not precise enough for numerical calculations.

Lu and his colleagues got around this problem by digitizing the current outputs—defining current ranges as specific bit values (i.e., 0 or 1). The team was also able to map large mathematical problems into smaller blocks within the array, improving the efficiency and flexibility of the system.
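The digitizing idea can be sketched in a few lines. This is a hypothetical illustration (the thresholds and current values are made up, not taken from the paper): instead of treating a memristor's analog output current as a precise number, each current range is assigned a discrete bit value.

```python
# Hypothetical sketch: map noisy analog current readings from a memristor
# to discrete bit values by defining current ranges (here just 0 or 1).

def digitize_current(current_uA, threshold_uA=5.0):
    """Map an analog current reading (microamps) to a bit value."""
    return 1 if current_uA >= threshold_uA else 0

readings = [1.2, 4.9, 5.1, 9.7]   # illustrative noisy currents, in microamps
bits = [digitize_current(c) for c in readings]
print(bits)  # [0, 0, 1, 1]
```

Because each range maps to one bit, small analog variations within a range no longer corrupt the numerical result, which is the point of the digitization step described above.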

Computers with these new blocks, which the researchers call “memory-processing units,” could be particularly useful for implementing machine learning and artificial intelligence algorithms. They are also well suited to tasks that are based on matrix operations, such as simulations used for weather prediction. The simplest mathematical matrices, akin to tables with rows and columns of numbers, can map directly onto the grid of memristors.


The memristor array situated on a circuit board. Credit: Mohammed Zidan, Nanoelectronics group, University of Michigan.

Once the memristors are set to represent the numbers, operations that multiply and sum the rows and columns can be taken care of simultaneously, with a set of voltage pulses along the rows. The current measured at the end of each column contains the answers. A typical processor, in contrast, would have to read the value from each cell of the matrix, perform multiplication, and then sum up each column in series.

“We get the multiplication and addition in one step. It’s taken care of through physical laws. We don’t need to manually multiply and sum in a processor,” Lu said.
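The physics Lu describes can be mimicked in a short simulation. In this sketch (the numbers are illustrative, not from the paper), the matrix is stored as conductances in a crossbar; applying voltage pulses along the rows produces column currents that are exactly the matrix-vector product, via Ohm's law and Kirchhoff's current law:

```python
import numpy as np

# Illustrative crossbar simulation: a matrix is stored as conductances G.
# Applying voltages v along the rows yields column currents
#   I_j = sum_i G[i, j] * v[i]
# i.e. multiplication and addition happen together in one physical step.

G = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # conductances programmed into the memristors
v = np.array([0.5, 1.0])     # voltage pulses applied to the rows

I = G.T @ v                  # currents measured at the end of each column
print(I)                     # [3.5 5. ]
```

A conventional processor would read each cell, multiply, and accumulate in series; here the column currents deliver all the sums at once, which is the "one step" in the quotation above.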

His team chose to solve partial differential equations as a test for a 32×32 memristor array, which Lu imagines as just one block of a future system. These equations, including those behind weather forecasting, underpin many problems in science and engineering but are very challenging to solve. The difficulty comes from the complicated forms and multiple variables needed to model physical phenomena.

When solving partial differential equations exactly is impossible, solving them approximately can require supercomputers. These problems often involve very large matrices of data, so the memory-processor communication bottleneck is neatly solved with a memristor array. The equations Lu’s team used in their demonstration simulated a plasma reactor, such as those used for integrated circuit fabrication.
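To see why a PDE turns into the kind of matrix problem a crossbar handles well, here is a hedged sketch (not the paper's algorithm, and a deliberately tiny grid): discretizing the 1-D Poisson equation u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0 by finite differences yields a tridiagonal linear system.

```python
import numpy as np

# Hedged sketch (not the paper's method): finite-difference discretization
# turns the PDE u''(x) = f(x) into the linear system A u = f * h^2, where
# A is the tridiagonal Laplacian matrix. A matrix like A is exactly what a
# memristor crossbar can store and operate on.

n = 5                               # interior grid points
h = 1.0 / (n + 1)                   # grid spacing
A = (np.diag(-2.0 * np.ones(n)) +
     np.diag(np.ones(n - 1), 1) +
     np.diag(np.ones(n - 1), -1))   # discrete second-derivative operator
f = np.ones(n)                      # constant source term, f(x) = 1

u = np.linalg.solve(A, f * h**2)    # discrete solution at the grid points
print(u)
```

For this constant source term the exact solution is u(x) = x(x - 1)/2, and the finite-difference answer matches it at the grid points; a real problem would use far larger matrices, which is where the crossbar's in-memory matrix operations pay off.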

This work is described in a study, “A general memristor-based partial differential equation solver,” published in the journal Nature Electronics.

It was supported by the Defense Advanced Research Projects Agency (DARPA) (grant no. HR0011-17-2-0018) and by the National Science Foundation (NSF) (grant no. CCF-1617315).

Here’s a link and a citation for the paper,

A general memristor-based partial differential equation solver by Mohammed A. Zidan, YeonJoo Jeong, Jihang Lee, Bing Chen, Shuo Huang, Mark J. Kushner & Wei D. Lu. Nature Electronics, volume 1, pages 411–420 (2018). DOI: https://doi.org/10.1038/s41928-018-0100-6 Published: 13 July 2018

This paper is behind a paywall.

For the curious, Dr. Lu’s startup company, Crossbar, can be found here.