Tag Archives: Artificial Neural Networks (ANNs)

Photonic synapses with low power consumption (and a few observations)

This work on brainlike (neuromorphic) computing was announced in a June 30, 2022 Compuscript Ltd news release on EurekAlert,

Photonic synapses with low power consumption and high sensitivity are expected to integrate sensing-memory-preprocessing capabilities

A new publication from Opto-Electronic Advances (DOI: 10.29026/oea.2022.210069) discusses how photonic synapses with low power consumption and high sensitivity are expected to integrate sensing-memory-preprocessing capabilities.

Neuromorphic photonics/electronics is the future of ultralow-energy intelligent computing and artificial intelligence (AI). In recent years, inspired by the human brain, artificial neuromorphic devices have attracted extensive attention, especially for simulating visual perception and memory storage. Because of their high bandwidth, high interference immunity, ultrafast signal transmission and lower energy consumption, neuromorphic photonic devices are expected to realize real-time responses to input data. In addition, photonic synapses allow a non-contact writing strategy, which contributes to the development of wireless communication.

The use of low-dimensional materials provides an opportunity to develop complex brain-like systems and low-power memory-logic computers. For example, large-scale, uniform and reproducible transition metal dichalcogenides (TMDs) show great potential for miniaturized, low-power biomimetic device applications due to their excellent charge-trapping properties and compatibility with traditional CMOS processes. The von Neumann architecture, with its discrete memory and processor, leads to the high power consumption and low efficiency of traditional computing. Therefore, neuromorphic architectures that fuse sensing with memory, or integrate sensing, memory and processing, can meet the growing demands of big data and AI for low-power, high-performance devices. Artificial synaptic devices are the most important components of neuromorphic systems, and evaluating their performance will help extend them to more complex artificial neural networks (ANNs).

Chemical vapor deposition (CVD)-grown TMDs inevitably contain defects or impurities, which produce a persistent photoconductivity (PPC) effect. TMD photonic synapses that integrate synaptic properties and optical detection capabilities show great advantages in neuromorphic systems for low-power visual information perception and processing as well as brain-like memory.

The research Group of Optical Detection and Sensing (GODS) has reported a three-terminal photonic synapse based on large-area, uniform multilayer MoS2 films. The reported device detects optical pulses as short as 5 μs with an ultralow power consumption of about 40 aJ, far better than the currently reported properties of photonic synapses. Moreover, these figures are several orders of magnitude lower than the corresponding parameters of biological synapses, indicating that the reported photonic synapse can be further used in more complex ANNs. The photoconductivity of the CVD-grown MoS2 channel is regulated by the photostimulation signal, which enables the device to simulate short-term synaptic plasticity (STP), long-term synaptic plasticity (LTP), paired-pulse facilitation (PPF) and other synaptic properties. Therefore, the reported photonic synapse can simulate human visual perception, and its detection wavelength can be extended to near-infrared light. As the most important human learning system, the visual perception system receives about 80% of learning information from the outside world. With the continuous development of AI, there is an urgent need for low-power, high-sensitivity visual perception systems that can effectively receive external information. In addition, with the assistance of a gate voltage, this photonic synapse can simulate classical Pavlovian conditioning and the regulation of memory by different emotions; for example, positive emotions enhance memory ability and negative emotions weaken it. Furthermore, a significant contrast in the strength of STP and LTP in the reported photonic synapse suggests that it can preprocess the input light signal. These results indicate that photostimulation and backgate control can effectively regulate the conductivity of the MoS2 channel layer by adjusting carrier trapping/detrapping processes.
Moreover, the photonic synapse presented in this paper is expected to integrate sensing-memory-preprocessing capabilities, which can be used for real-time image detection and in-situ storage, and it also offers a possible way to break the von Neumann bottleneck.
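
The plasticity behaviours described above (STP, LTP and PPF) can be sketched with a simple toy model. The parameters and functional form here are illustrative assumptions, not taken from the paper: each light pulse is assumed to add a fast-decaying short-term component plus a small persistent component standing in for charge trapping.

```python
# Toy model of short-term vs long-term photonic synaptic plasticity.
# All parameters are illustrative, not from the reported device.

import math

def synapse_response(pulse_times, t_eval, a_stp=1.0, tau_stp=50e-6, a_ltp=0.1):
    """Conductance change at time t_eval after a train of light pulses.

    Each pulse contributes a short-term component (decays with tau_stp)
    and a small persistent long-term component (models charge trapping).
    """
    g = 0.0
    for t in pulse_times:
        dt = t_eval - t
        if dt >= 0:
            g += a_stp * math.exp(-dt / tau_stp)  # short-term, decays
            g += a_ltp                            # long-term, persists
    return g

# Paired-pulse facilitation (PPF): the second of two closely spaced pulses
# rides on the residue of the first, so the response is larger.
t1, t2 = 0.0, 20e-6
r1 = synapse_response([t1], t1)
r2 = synapse_response([t1, t2], t2)
ppf_index = r2 / r1   # > 1 indicates facilitation
```

Long after stimulation only the persistent (LTP-like) component survives, which is the contrast between STP and LTP that the authors use for signal preprocessing.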

Here’s a link to and a citation for the paper,

Photonic synapses with ultralow energy consumption for artificial visual perception and brain storage by Caihong Li, Wen Du, Yixuan Huang, Jihua Zou, Lingzhi Luo, Song Sun, Alexander O. Govorov, Jiang Wu, Hongxing Xu, Zhiming Wang. Opto-Electron Adv Vol 5, No 9 210069 (2022). doi: 10.29026/oea.2022.210069

This paper is open access.

Observations

I don’t have much to say about the research itself other than, I believe this is the first time I’ve seen a news release about neuromorphic computing research from China.

It’s China that most interests me, especially these bits from the June 30, 2022 Compuscript Ltd news release on EurekAlert,

Group of Optical Detection and Sensing (GODS) [emphasis mine] was established in 2019. It is a research group focusing on compound semiconductors, lasers, photodetectors, and optical sensors. GODS has established a well-equipped laboratory with research facilities such as a Molecular Beam Epitaxy system, an IR detector test system, etc. GODS is leading several research projects funded by the NSFC and National Key R&D Programmes. GODS has published more than 100 research articles in Nature Electronics, Light: Science and Applications, Advanced Materials and other well-known international journals, with total citations beyond 8,000.

Jiang Wu obtained his Ph.D. from the University of Arkansas Fayetteville in 2011. After his Ph.D., he joined UESTC as associate professor and later professor. He joined University College London [UCL] as a research associate in 2012 and then lecturer in the Department of Electronic and Electrical Engineering at UCL from 2015 to 2018. He is now a professor at UESTC [University of Electronic Science and Technology of China] [emphases mine]. His research interests include optoelectronic applications of semiconductor heterostructures. He is a Fellow of the Higher Education Academy and Senior Member of IEEE.

Opto-Electronic Advances (OEA) is a high-impact, open access, peer reviewed monthly SCI journal with an impact factor of 9.682 (Journals Citation Reports for IF 2020). Since its launch in March 2018, OEA has been indexed in SCI, EI, DOAJ, Scopus, CA and ICI databases over the time and expanded its Editorial Board to 36 members from 17 countries and regions (average h-index 49). [emphases mine]

The journal is published by The Institute of Optics and Electronics, Chinese Academy of Sciences, aiming at providing a platform for researchers, academicians, professionals, practitioners, and students to impart and share knowledge in the form of high quality empirical and theoretical research papers covering the topics of optics, photonics and optoelectronics.

The research group’s awkward name was almost certainly developed with the rather grandiose acronym, GODS, in mind. I don’t think you could get away with doing this in an English-speaking country as your colleagues would mock you mercilessly.

It’s Jiang Wu’s academic and work history that’s of most interest as it might provide insight into China’s Young Thousand Talents program. A January 5, 2023 American Association for the Advancement of Science (AAAS) news release describes the program,

In a systematic evaluation of China’s Young Thousand Talents (YTT) program, which was established in 2010, researchers find that China has been successful in recruiting and nurturing high-caliber Chinese scientists who received training abroad. Many of these individuals outperform overseas peers in publications and access to funding, the study shows, largely due to access to larger research teams and better research funding in China. Not only do the findings demonstrate the program’s relative success, but they also hold policy implications for the increasing number of governments pursuing means to tap expatriates for domestic knowledge production and talent development. China is a top sender of international students to United States and European Union science and engineering programs. The YTT program was created to recruit and nurture the productivity of high-caliber, early-career, expatriate scientists who return to China after receiving Ph.Ds. abroad. Although there has been a great deal of international attention on the YTT, some associated with the launch of the U.S.’s controversial China Initiative and federal investigations into academic researchers with ties to China, there has been little evidence-based research on the success, impact, and policy implications of the program itself.

Dongbo Shi and colleagues evaluated the YTT program’s first 4 cohorts of scholars and compared their research productivity to that of their peers who remained overseas. Shi et al. found that China’s YTT program successfully attracted high-caliber – but not top-caliber – scientists. However, those young scientists who did return outperformed others in publications across journal-quality tiers – particularly in last-authored publications. The authors suggest that this is due to YTT scholars’ greater access to larger research teams and better research funding in China. The authors say the dearth of such resources in the U.S. and E.U. “may not only expedite expatriates’ return decisions but also motivate young U.S.- and E.U.-born scientists to seek international research opportunities.” They say their findings underscore the need for policy adjustments to allocate more support for young scientists.

Here’s a link to and a citation for the paper,

Has China’s Young Thousand Talents program been successful in recruiting and nurturing top-caliber scientists? by Dongbo Shi, Weichen Liu, and Yanbo Wang. Science 5 Jan 2023 Vol 379, Issue 6627 pp. 62-65 DOI: 10.1126/science.abq1218

This paper is behind a paywall.

Kudos to the folks behind China’s Young Thousand Talents program! Jiang Wu’s career appears to be a prime example of the program’s success. Perhaps Canadian policy makers will be inspired.

Memristor artificial neural network learning based on phase-change memory (PCM)

Caption: Professor Hongsik Jeong and his research team in the Department of Materials Science and Engineering at UNIST. Credit: UNIST

I’m pretty sure that Professor Hongsik Jeong is the one on the right. He seems more relaxed, like he’s accustomed to posing for pictures highlighting his work.

Now on to the latest memristor news, which features the number 8.

For anyone unfamiliar with the term memristor, it’s a device (of sorts) that scientists involved in neuromorphic computing (computers that operate like human brains) are researching as they attempt to replicate brainlike processes for computers.

From a January 22, 2021 Ulsan National Institute of Science and Technology (UNIST) press release (also on EurekAlert but published March 15, 2021),

An international team of researchers affiliated with UNIST has unveiled a novel technology that could improve the learning ability of artificial neural networks (ANNs).

Professor Hongsik Jeong and his research team in the Department of Materials Science and Engineering at UNIST, in collaboration with researchers from Tsinghua University in China, proposed a new learning method to improve the learning ability of ANN chips by challenging its instability.

Artificial neural network chips are capable of mimicking the structural, functional and biological features of human neural networks, and thus have been considered the technology of the future. In this study, the research team demonstrated the effectiveness of the proposed learning method by building phase change memory (PCM) memristor arrays that operate like ANNs. This learning method is also advantageous in that its learning ability can be improved without additional power consumption, since PCM undergoes a spontaneous resistance increase due to the structural relaxation after amorphization.

ANNs, like human brains, use less energy even when performing computation and memory tasks simultaneously. However, artificial neural network chips, which integrate large numbers of physical devices, inevitably contain device errors. Existing ANN learning methods assume a perfect, error-free chip, so the learning ability of real artificial neural networks is poor.

The research team developed a memristor artificial neural network learning method based on phase-change memory, reasoning that the real human brain does not require near-perfect operation. This learning method incorporates the “resistance drift” (increasing electrical resistance) of the phase-change material in the memory semiconductor into learning. During the learning process, the information-update pattern is recorded in the form of increasing electrical resistance in the memristor, which serves as a synapse, so the synapse additionally learns the association between its own update pattern and the data being learned.
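
The idea can be illustrated with a toy simulation. This is not the authors’ algorithm; it is an assumed sketch of how spontaneous resistance drift can act as a built-in “forgetting” term: conductance-coded weights decay on their own, and only the synapses that training keeps rewriting stay strong, so rarely used connections fade toward zero.

```python
# Toy illustration (not the paper's algorithm) of PCM resistance drift
# acting as spontaneous forgetting during training: rarely rewritten
# synapses decay toward zero, frequently rewritten ones stay strong.

import numpy as np

rng = np.random.default_rng(0)
n_synapses = 8
w = rng.uniform(0.5, 1.0, n_synapses)              # conductance-coded weights
update_prob = np.linspace(0.05, 0.95, n_synapses)  # how often each synapse is rewritten
drift = 0.98                                       # per-step conductance loss from drift

for step in range(200):
    w *= drift                                     # spontaneous drift: resistance up
    refreshed = rng.random(n_synapses) < update_prob
    w[refreshed] = rng.uniform(0.5, 1.0, refreshed.sum())  # rewritten by an update

# Frequently updated synapses (high update_prob) remain near their written
# values; rarely updated ones have mostly drifted toward zero.
```

The drift constant and update probabilities are arbitrary; the point is only that the device physics itself supplies the decay term, without extra power consumption.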

Through an experiment classifying handwritten digits (0-9), the research team showed that the developed learning method improves learning ability by about 3%. In particular, the accuracy for the digit 8, which is difficult to classify from handwriting, improved significantly. [emphasis mine] The learning ability improved thanks to the synaptic update pattern, which changes according to the difficulty of the handwriting classification.

The researchers expect their findings to promote learning algorithms that exploit the intrinsic properties of memristor devices, opening a new direction for the development of neuromorphic computing chips.

Here’s a link to and a citation for the paper,

Spontaneous sparse learning for PCM-based memristor neural networks by Dong-Hyeok Lim, Shuang Wu, Rong Zhao, Jung-Hoon Lee, Hongsik Jeong & Luping Shi. Nature Communications volume 12, Article number: 319 (2021) DOI: https://doi.org/10.1038/s41467-020-20519-z Published 12 January 2021

This paper is open access.

Is it time to invest in a ‘brain chip’ company?

This story takes a few twists and turns. First, ‘brain chips’ as they’re sometimes called would allow, theoretically, computers to learn and function like human brains. (Note: There’s another type of ‘brain chip’, which could be implanted in human brains to help deal with diseases such as Parkinson’s and Alzheimer’s. *Today’s [June 26, 2015] earlier posting about an artificial neuron points at some of the work being done in this area.*)

Returning to the ‘brain chip’ at hand. Second, there’s a company called BrainChip, which has one patent and another pending for, yes, a ‘brain chip’.

The company, BrainChip, founded in Australia and now headquartered in California’s Silicon Valley, recently sparked some investor interest in Australia. From an April 7, 2015 article by Timna Jacks for the Australian Financial Review,

Former mining stock Aziana Limited has whetted Australian investors’ appetite for science fiction, with its share price jumping 125 per cent since it announced it was acquiring a US-based tech company called BrainChip, which promises artificial intelligence through a microchip that replicates the neural system of the human brain.

Shares in the company closed at 9¢ before the Easter long weekend, having been priced at just 4¢ when the backdoor listing of BrainChip was announced to the market on March 18.

Creator of the patented digital chip, Peter Van Der Made told The Australian Financial Review the technology has the capacity to learn autonomously, due to its composition of 10,000 biomimic neurons, which, through a process known as synaptic time-dependent plasticity, can form memories and associations in the same way as a biological brain. He said it works 5000 times faster and uses a thousandth of the power of the fastest computers available today.

Mr Van Der Made is inviting technology partners to license the technology for their own chips and products, and is donating the technology to university laboratories in the US for research.

The Netherlands-born Australian, now based in southern California, was inspired to create the brain-like chip in 2004, after working at the IBM Internet Security Systems for two years, where he was chief scientist for behaviour analysis security systems. …

A June 23, 2015 article by Tony Malkovic on phys.org provides a few more details about BrainChip and about the deal,

Mr Van der Made and the company, also called BrainChip, are now based in Silicon Valley in California and he returned to Perth last month as part of the company’s recent merger and listing on the Australian Stock Exchange.

He says BrainChip has the ability to learn autonomously, evolve and associate information and respond to stimuli like a brain.

Mr Van der Made says the company’s chip technology is more than 5,000 times faster than other technologies, yet uses only 1/1,000th of the power.

“It’s a hardware only solution, there is no software to slow things down,” he says.

“It doesn’t execute instructions, it learns and applies what it has learnt to new information.

“BrainChip is on the road to position itself at the forefront of artificial intelligence,” he says.

“We have a clear advantage, at least 10 years, over anybody else in the market, that includes IBM.”

BrainChip is aiming at the global semiconductor market, covering almost anything that involves a microprocessor.

You can find out more about the company, BrainChip, here. The site does have a little more information about the technology,

Spiking Neuron Adaptive Processor (SNAP)

BrainChip’s inventor, Peter van der Made, has created an exciting new Spiking Neural Network technology that has the ability to learn autonomously, evolve and associate information just like the human brain. The technology is developed as a digital design containing a configurable “sea of biomimic neurons”.

The technology is fast, completely digital, and consumes very low power, making it feasible to integrate large networks into portable battery-operated products, something that has never been possible before.

BrainChip neurons autonomously learn through a process known as STDP (Synaptic Time Dependent Plasticity). BrainChip’s fully digital neurons process input spikes directly in hardware. Sensory neurons convert physical stimuli into spikes. Learning occurs when the input is intense or repeats through feedback, and this correlates directly with the way the brain learns.
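
The marketing copy never spells out BrainChip’s actual STDP rule, so here is the standard pair-based form from the neuroscience literature, as an assumed illustration of what such a rule looks like: a synapse is strengthened when the presynaptic spike precedes the postsynaptic spike (causal) and weakened when the order is reversed.

```python
# A minimal pair-based STDP rule, as commonly described in the literature.
# This is an illustrative sketch, not BrainChip's proprietary rule.

import math

def stdp_dw(t_pre, t_post, a_plus=0.05, a_minus=0.055, tau=20.0):
    """Weight change for one pre/post spike pair (times in ms).

    Pre before post (causal) -> potentiation; post before pre -> depression,
    with magnitude decaying exponentially in the spike-time difference.
    """
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:
        return -a_minus * math.exp(dt / tau)
    return 0.0

# Causal pairing strengthens the synapse; anti-causal pairing weakens it.
dw_pot = stdp_dw(t_pre=10.0, t_post=15.0)   # pre fires 5 ms before post
dw_dep = stdp_dw(t_pre=15.0, t_post=10.0)   # post fires 5 ms before pre
```

Because the update depends only on locally observable spike times, the rule can run independently at every synapse, which is what makes the fully parallel hardware learning described elsewhere on the site plausible.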

Computing Artificial Neural Networks (ANNs)

The brain consists of specialized nerve cells that communicate with one another. Each such nerve cell is called a neuron. The inputs are memory nodes called synapses. When the neuron associates information, it produces a ‘spike’ or a ‘spike train’. Each spike is a pulse that triggers a value in the next synapse. Synapses store values, similar to the way a computer stores numbers. In combination, these values determine the function of the neural network. Synapses acquire values through learning.

In Artificial Neural Networks (ANNs) this complex function is generally simplified to a static summation-and-compare function, which severely limits computational power. BrainChip has redefined how neural networks work, replicating the behaviour of the brain. BrainChip’s artificial neurons are completely digital and biologically realistic, resulting in increased computational power, high speed and extremely low power consumption.
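
For concreteness, the “static summation and compare” simplification the passage criticizes is just the classic threshold neuron (this sketch is illustrative, not BrainChip code):

```python
# The conventional ANN simplification: a weighted sum compared against a
# threshold, with no spike timing or internal dynamics (illustrative only).

def artificial_neuron(inputs, weights, threshold):
    """Fire (1) if the weighted input sum reaches the threshold, else 0."""
    s = sum(x * w for x, w in zip(inputs, weights))
    return 1 if s >= threshold else 0

# Two inputs with unit weights and a threshold of 1.5 implement logical AND:
both_on = artificial_neuron([1, 1], [1.0, 1.0], 1.5)   # fires
one_on = artificial_neuron([1, 0], [1.0, 1.0], 1.5)    # does not fire
```

Everything temporal (spike trains, timing-dependent learning) is discarded in this model, which is the limitation spiking designs claim to address.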

The Problem with Artificial Neural Networks

Standard ANNs running on computer hardware are processed sequentially: the processor runs a program that defines the neural network. This consumes considerable time, and because the neurons are processed sequentially, the delays add up, resulting in a significant decline in network performance as the network grows.

BrainChip neurons are all mapped in parallel, so the performance of the network does not depend on the size of the network, providing a clear speed advantage. Because learning also takes place in parallel within each synapse, STDP learning is very fast.

A hardware solution

BrainChip’s digital neural technology is the only custom hardware solution capable of STDP learning. The hardware requires no coding and has no software; it learns through experience and user direction.

The BrainChip neuron is unique in that it is completely digital, behaves asynchronously like an analog neuron, and has a higher level of biological realism. It is more sophisticated than software neural models and is many orders of magnitude faster. The BrainChip neuron consists entirely of binary logic gates with no traditional CPU core. Hence, there are no ‘programming’ steps. Learning and training take the place of programming and coding, much like a child learning a task for the first time.

Software ‘neurons’, to compensate for limited processing power, are simplified to a point where they do not resemble any of the features of a biological neuron. This is due to the sequential nature of computers, whereby all data has to pass through a central processor in chunks of 16, 32 or 64 bits. In contrast, the brain’s network is parallel and processes the equivalent of millions of data bits simultaneously.

A significantly faster technology

Performing emulation in digital hardware has distinct advantages over software. As software is processed sequentially, one instruction at a time, software neural networks perform more slowly as they increase in size. Parallel hardware does not have this problem and maintains the same speed no matter how large the network is. Another advantage of hardware is that it is more power efficient, by several orders of magnitude.

The speed of the BrainChip device is unparalleled in the industry.

For large neural networks, a GPU (Graphics Processing Unit) is ~70 times faster than an Intel i7 executing a similar-sized neural network. The BrainChip neural network is faster still and takes far fewer CPU (Central Processing Unit) cycles, with just a little communication overhead, which means that the CPU is available for other tasks. The BrainChip network also responds much faster than a software network, accelerating the performance of the entire system.

The BrainChip network is completely parallel, with no sequential dependencies. This means that the network does not slow down with increasing size.

Endorsed by the neuroscience community

A number of the world’s pre-eminent neuroscientists have endorsed the technology and have agreed to jointly develop projects.

BrainChip has the potential to become the de facto standard for all autonomous learning technology and computer products.

Patented

BrainChip’s autonomous learning technology patent was granted on the 21st September 2008 (Patent number US 8,250,011 “Autonomous learning dynamic artificial neural computing device and brain inspired system”). BrainChip is the only company in the world to have achieved autonomous learning in a network of Digital Neurons without any software.

A prototype Spiking Neuron Adaptive Processor was designed as a ‘proof of concept’ chip.

The first tests were completed at the end of 2007 and this design was used as the foundation for the US patent application which was filed in 2008. BrainChip has also applied for a continuation-in-part patent filed in 2012, the “Method and System for creating Dynamic Neural Function Libraries”, US Patent Application 13/461,800 which is pending.

Van der Made doesn’t seem to have published any papers on this work and the description of the technology provided on the website is frustratingly vague. There are many acronyms for processes but no mention of what this hardware might be. For example, is it based on a memristor or some kind of atomic ionic switch or something else altogether?

It would be interesting to find out more but, presumably, van der Made wishes to withhold details. There are many companies following the same strategy while pursuing what they view as a business advantage.

* Artificial neuron link added June 26, 2015 at 1017 hours PST.