Tag Archives: artificial neural network (ANN)

Adaptive neural connectivity with an event-based architecture using photonic processors

At first glance, it looked like a set of matches. If there were more dimensions, it could also have been a set of pencils, but no,

Caption: The chip contains almost 8,400 functioning artificial neurons from waveguide-coupled phase-change material. The researchers trained this neural network to distinguish between German and English texts on the basis of vowel frequency. Credit: Jonas Schütte / Pernice Group Courtesy: University of Münster

An October 23, 2023 news item on Nanowerk introduces research into a new approach to optical neural networks,

A team of researchers headed by physicists Prof. Wolfram Pernice and Prof. Martin Salinga and computer specialist Prof. Benjamin Risse, all from the University of Münster, has developed a so-called event-based architecture, using photonic processors. In a similar way to the brain, this makes possible the continuous adaptation of the connections within the neural network.

Key Takeaways

Researchers have created a new computing architecture that mimics biological neural networks, using photonic processors for data transportation and processing.

The new system enables continuous adaptation of connections within the neural network, crucial for learning processes. This is known as both synaptic and structural plasticity.

Unlike traditional studies, the connections or synapses in this photonic neural network are not hardware-based but are coded based on optical pulse properties, allowing for a single chip to hold several thousand neurons.

Light-based processors in this system offer a much higher bandwidth and lower energy consumption compared to traditional electronic processors.

The researchers successfully tested the system using an evolutionary algorithm to differentiate between German and English texts based on vowel count, highlighting its potential for rapid and energy-efficient AI applications.

The Research

Modern computer models – for example for complex, potent AI applications – push traditional digital computer processes to their limits.

The person who edited the original press release, which is included in the news item above, is not credited.

Here’s the unedited original October 23, 2023 University of Münster press release (also on EurekAlert),

Modern computer models – for example for complex, potent AI applications – push traditional digital computer processes to their limits. New types of computing architecture, which emulate the working principles of biological neural networks, hold the promise of faster, more energy-efficient data processing. A team of researchers has now developed a so-called event-based architecture, using photonic processors with which data are transported and processed by means of light. In a similar way to the brain, this makes possible the continuous adaptation of the connections within the neural network. These changeable connections are the basis for learning processes. For the purposes of the study, a team working at Collaborative Research Centre 1459 (“Intelligent Matter”) – headed by physicists Prof. Wolfram Pernice and Prof. Martin Salinga and computer specialist Prof. Benjamin Risse, all from the University of Münster – joined forces with researchers from the Universities of Exeter and Oxford in the UK. The study has been published in the journal “Science Advances”.

What is needed for a neural network in machine learning are artificial neurons which are activated by external excitatory signals, and which have connections to other neurons. The connections between these artificial neurons are called synapses – just like the biological original. For their study, the team of researchers in Münster used a network consisting of almost 8,400 optical neurons made of waveguide-coupled phase-change material, and the team showed that the connections between any two of these neurons can indeed become stronger or weaker (synaptic plasticity), and that new connections can be formed, or existing ones eliminated (structural plasticity). In contrast to other similar studies, the synapses were not hardware elements but were coded as a result of the properties of the optical pulses – in other words, as a result of the respective wavelength and of the intensity of the optical pulse. This made it possible to integrate several thousand neurons on one single chip and connect them optically.

In comparison with traditional electronic processors, light-based processors offer a significantly higher bandwidth, making it possible to carry out complex computing tasks with lower energy consumption. This new approach constitutes basic research. “Our aim is to develop an optical computing architecture which in the long term will make it possible to compute AI applications in a rapid and energy-efficient way,” says Frank Brückerhoff-Plückelmann, one of the lead authors.

Methodology: The non-volatile phase-change material can be switched between an amorphous structure and a crystalline structure with a highly ordered atomic lattice. This feature allows permanent data storage even without an energy supply. The researchers tested the performance of the neural network by using an evolutionary algorithm to train it to distinguish between German and English texts. The recognition parameter they used was the number of vowels in the text.

The researchers received financial support from the German Research Association, the European Commission and “UK Research and Innovation”.
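The methodology paragraph is compact, so here is how I picture the vowel-frequency test in ordinary code: score each text by its vowel fraction and let a small evolutionary search find the decision threshold. Everything below (the sample sentences, the mutation scheme, the parameter values) is my own illustration, not the researchers’ code, and the actual system does all of this with optical pulses on the photonic chip rather than in Python:

```python
import random

VOWELS = set("aeiouäöü")

def vowel_fraction(text):
    """Fraction of alphabetic characters that are vowels."""
    letters = [c for c in text.lower() if c.isalpha()]
    return sum(c in VOWELS for c in letters) / max(len(letters), 1)

# Tiny illustrative training set (label 0 = English, 1 = German)
samples = [
    ("the quick brown fox jumps over the lazy dog", 0),
    ("strength comes through constant work", 0),
    ("über sieben brücken musst du gehen", 1),
    ("die aufgabe eines neuronalen netzes ist es aus beispielen zu lernen", 1),
]

def fitness(threshold):
    """Training accuracy when texts above the threshold are called German."""
    return sum((vowel_fraction(t) > threshold) == bool(y) for t, y in samples) / len(samples)

# Very small evolutionary search: keep the fittest thresholds, mutate them.
population = [random.random() for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]
    children = [min(max(p + random.gauss(0, 0.05), 0.0), 1.0)
                for p in parents for _ in range(3)]
    population = parents + children

best = max(population, key=fitness)
print(f"best threshold {best:.3f}, training accuracy {fitness(best):.2f}")
```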

Here’s a link to and a citation for the paper,

Event-driven adaptive optical neural network by Frank Brückerhoff-Plückelmann, Ivonne Bente, Marlon Becker, Niklas Vollmar, Nikolaos Farmakidis, Emma Lomonte, Francesco Lenzini, C. David Wright, Harish Bhaskaran, Martin Salinga, Benjamin Risse, and Wolfram H. P. Pernice. Science Advances 20 Oct 2023 Vol 9, Issue 42 DOI: 10.1126/sciadv.adi9127

This paper is open access.

Consciousness, energy, and matter

Credit: Rice University [downloaded from https://phys.org/news/2023-10-energy-consciousness-physics-thorny-topic.html]

There’s an intriguing approach tying together ideas about consciousness, artificial intelligence, and physics in an October 8, 2023 news item on phys.org,

With the rise of brain-interface technology and artificial intelligence that can imitate brain functions, understanding the nature of consciousness and how it interacts with reality is not just an age-old philosophical question but also a salient challenge for humanity.

An October 9, 2023 University of Technology Sydney (UTS) press release (also on EurekAlert but published on October 8, 2023), which originated the news item, delves further into the subject matter, Note: Links have been removed,

Can AI become conscious, and how would we know? Should we incorporate human or animal cells, such as neurons, into machines and robots? Would they be conscious and have subjective experiences? Does consciousness reduce to physicalism, or is it fundamental? And if machine-brain interaction influenced you to commit a crime, or caused a crime, would you be responsible beyond a reasonable doubt? Do we have a free will?

AI and computer science specialist Dr Mahendra Samarawickrama, winner of the Australian Computer Society’s Information and Communications Technology (ICT) Professional of the year, has applied his knowledge of physics and artificial neural networks to this thorny topic.

He presented a peer-reviewed paper on fundamental physics and consciousness at the 11th International Conference on Mathematical Modelling in Physical Sciences, Unifying Matter, Energy and Consciousness, which has just been published in the AIP (the American Institute of Physics) Conference Proceedings. 

“Consciousness is an evolving topic connected to physics, engineering, neuroscience and many other fields. Understanding the interplay between consciousness, energy and matter could bring important insights to our fundamental understanding of reality,” said Dr Samarawickrama.

“Einstein’s dream of a unified theory is a quest that occupies the minds of many theoretical physicists and engineers. Some solutions completely change existing frameworks, which increases complexity and creates more problems than it solves.

“My theory brings the notion of consciousness to fundamental physics such that it complements the current physics models and explains the time, causality, and interplay of consciousness, energy and matter.

“I propose that consciousness is a high-speed sequential flow of awareness subjected to relativity. The quantised energy of consciousness can interplay with matter creating reality while adhering to laws of physics, including quantum physics and relativity.

“Awareness can be seen in life, AI and even physical realities like entangled particles. Studying consciousness helps us be aware of and differentiate realities that exist in nature,” he said. 

Dr Samarawickrama is an honorary Visiting Scholar in the School of Computer Science at the University of Technology Sydney, where he has contributed to UTS research on data science and AI, focusing on social impact.

“Research in this field could pave the way towards the development of conscious AI, with robots that are aware and have the ability to think becoming a reality. We want to ensure that artificial intelligence is ethical and responsible in emerging solutions,” Dr Samarawickrama said.

Here’s a link to and a citation for the paper Samarawickrama presented at the 11th International Conference on Mathematical Modelling in Physical Sciences, Unifying Matter, Energy and Consciousness,

Unifying matter, energy and consciousness by Mahendra Samarawickrama. AIP Conf. Proc. Volume 2872, Issue 1, 28 September 2023, 110001 (2023) DOI: https://doi.org/10.1063/5.0162815

This paper is open access.

The researcher has made a video of his presentation and further information available,

It’s a little bit over my head but hopefully repeated viewings and readings will help me better understand Dr. Samarawickrama’s work.

Sleep helps artificial neural networks (ANNs) to keep learning without “catastrophic forgetting”

A November 18, 2022 news item on phys.org describes some of the latest work on neuromorphic (brainlike) computing from the University of California at San Diego (UCSD or UC San Diego), Note: Links have been removed,

Depending on age, humans need 7 to 13 hours of sleep per 24 hours. During this time, a lot happens: Heart rate, breathing and metabolism ebb and flow; hormone levels adjust; the body relaxes. Not so much in the brain.

“The brain is very busy when we sleep, repeating what we have learned during the day,” said Maxim Bazhenov, Ph.D., professor of medicine and a sleep researcher at University of California San Diego School of Medicine. “Sleep helps reorganize memories and presents them in the most efficient way.”

In previous published work, Bazhenov and colleagues have reported how sleep builds rational memory, the ability to remember arbitrary or indirect associations between objects, people or events, and protects against forgetting old memories.

Artificial neural networks leverage the architecture of the human brain to improve numerous technologies and systems, from basic science and medicine to finance and social media. In some ways, they have achieved superhuman performance, such as computational speed, but they fail in one key aspect: When artificial neural networks learn sequentially, new information overwrites previous information, a phenomenon called catastrophic forgetting.

“In contrast, the human brain learns continuously and incorporates new data into existing knowledge,” said Bazhenov, “and it typically learns best when new training is interleaved with periods of sleep for memory consolidation.”

Writing in the November 18, 2022 issue of PLOS Computational Biology, senior author Bazhenov and colleagues discuss how biological models may help mitigate the threat of catastrophic forgetting in artificial neural networks, boosting their utility across a spectrum of research interests. 

A November 18, 2022 UC San Diego news release (also on EurekAlert), which originated the news item, adds some technical details,

The scientists used spiking neural networks that artificially mimic natural neural systems: Instead of information being communicated continuously, it is transmitted as discrete events (spikes) at certain time points.

They found that when the spiking networks were trained on a new task, but with occasional off-line periods that mimicked sleep, catastrophic forgetting was mitigated. Like the human brain, said the study authors, “sleep” for the networks allowed them to replay old memories without explicitly using old training data. 

Memories are represented in the human brain by patterns of synaptic weight — the strength or amplitude of a connection between two neurons. 

“When we learn new information,” said Bazhenov, “neurons fire in specific order and this increases synapses between them. During sleep, the spiking patterns learned during our awake state are repeated spontaneously. It’s called reactivation or replay. 

“Synaptic plasticity, the capacity to be altered or molded, is still in place during sleep and it can further enhance synaptic weight patterns that represent the memory, helping to prevent forgetting or to enable transfer of knowledge from old to new tasks.”

When Bazhenov and colleagues applied this approach to artificial neural networks, they found that it helped the networks avoid catastrophic forgetting. 

“It meant that these networks could learn continuously, like humans or animals. Understanding how the human brain processes information during sleep can help to augment memory in human subjects. Augmenting sleep rhythms can lead to better memory.

“In other projects, we use computer models to develop optimal strategies to apply stimulation during sleep, such as auditory tones, that enhance sleep rhythms and improve learning. This may be particularly important when memory is non-optimal, such as when memory declines in aging or in some conditions like Alzheimer’s disease.”
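To make the training schedule a bit more concrete, here is a deliberately simplified sketch of the “interleave sleep with new tasks” idea: learn task A, “sleep”, learn task B, “sleep”, where the sleep phase drives the network with spontaneous activity and applies a local Hebbian update instead of replaying stored training data. The actual study used spiking networks and a specific sleep-replay rule; this toy single-layer version (all sizes, learning rates and stand-in data are mine) only shows the shape of the procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_task(weights, inputs, targets, lr=0.1, epochs=200):
    """Ordinary supervised updates on one task (a single linear layer, for illustration)."""
    for _ in range(epochs):
        preds = inputs @ weights
        weights += lr * inputs.T @ (targets - preds) / len(inputs)
    return weights

def sleep_phase(weights, lr=0.01, steps=500):
    """Schematic 'sleep': spontaneous spiking input plus a local Hebbian update,
    so no training data from earlier tasks needs to be stored or replayed explicitly."""
    for _ in range(steps):
        x = (rng.random(weights.shape[0]) < 0.1).astype(float)   # spontaneous activity
        y = ((x @ weights) > 0.5).astype(float)                   # thresholded response
        weights += lr * np.outer(x, y)                            # Hebbian strengthening
        weights *= 0.999                                          # mild synaptic scaling
    return weights

# Interleave sleep with new tasks instead of training A then B back to back.
n_inputs, n_outputs = 20, 2
w = np.zeros((n_inputs, n_outputs))
task_a = (rng.random((50, n_inputs)), rng.random((50, n_outputs)))  # stand-in data
task_b = (rng.random((50, n_inputs)), rng.random((50, n_outputs)))  # stand-in data
for inputs, targets in (task_a, task_b):
    w = train_task(w, inputs, targets)
    w = sleep_phase(w)
```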

Here’s a link to and a citation for the paper,

Sleep prevents catastrophic forgetting in spiking neural networks by forming a joint synaptic weight representation by Ryan Golden, Jean Erik Delanois, Pavel Sanda, Maxim Bazhenov. PLOS [Computational Biology] DOI: https://doi.org/10.1371/journal.pcbi.1010628 Published: November 18, 2022

This paper is open access.

Transforming bacterial cells into living computers

If this were a movie instead of a press release, we’d have some ominous music playing over a scene in a pristine white lab. Instead, we have a November 13, 2022 Technion-Israel Institute of Technology press release (also on EurekAlert) where the writer tries to highlight the achievement while downplaying the sort of research (in synthetic biology) that could have people running for the exits,

Bringing together concepts from electrical engineering and bioengineering tools, Technion and MIT [Massachusetts Institute of Technology] scientists collaborated to produce cells engineered to compute sophisticated functions – “biocomputers” of sorts. Graduate students and researchers from Technion – Israel Institute of Technology Professor Ramez Daniel’s Laboratory for Synthetic Biology & Bioelectronics worked together with Professor Ron Weiss from the Massachusetts Institute of Technology to create genetic “devices” designed to perform computations like artificial neural circuits. Their results were recently published in Nature Communications.

The genetic material was inserted into the bacterial cell in the form of a plasmid: a relatively short DNA molecule that remains separate from the bacteria’s “natural” genome. Plasmids also exist in nature, and serve various functions. The research group designed the plasmid’s genetic sequence to function as a simple computer, or more specifically, a simple artificial neural network. This was done by means of several genes on the plasmid regulating each other’s activation and deactivation according to outside stimuli.

What does it mean that a cell is a circuit? How can a computer be biological?

At its most basic level, a computer consists of 0s and 1s, of switches. Operations are performed on these switches: summing them, picking the maximal or minimal value between them, etc. More advanced operations rely on the basic ones, allowing a computer to play chess or fly a rocket to the moon.

In the electronic computers we know, the 0/1 switches take the form of transistors. But our cells are also computers, of a different sort. There, the presence or absence of a molecule can act as a switch. Genes activate, trigger or suppress other genes, forming, modifying, or removing molecules. Synthetic biology aims (among other goals) to harness these processes, to synthesize the switches and program the genes that would make a bacterial cell perform complex tasks. Cells are naturally equipped to sense chemicals and to produce organic molecules. Being able to “computerize” these processes within the cell could have major implications for biomanufacturing and have multiple medical applications.

The Ph.D. students (now doctors) Luna Rizik and Loai Danial, together with Dr. Mouna Habib, under the guidance of Prof. Ramez Daniel from the Faculty of Biomedical Engineering at the Technion, and in collaboration with Prof. Ron Weiss from the Synthetic Biology Center, MIT, were inspired by how artificial neural networks function. They created synthetic computation circuits by combining existing genetic “parts,” or engineered genes, in novel ways, and implemented concepts from neuromorphic electronics into bacterial cells. The result was the creation of bacterial cells that can be trained using artificial intelligence algorithms.

The group were able to create flexible bacterial cells that can be dynamically reprogrammed to switch between reporting whether at least one of two test chemicals is present, or whether both are (that is, the cells were able to switch between performing the OR and the AND functions). Cells that can change their programming dynamically are capable of performing different operations under different conditions. (Indeed, our cells do this naturally.) Being able to create and control this process paves the way for more complex programming, making the engineered cells suitable for more advanced tasks. Artificial intelligence algorithms allowed the scientists to produce the required genetic modifications to the bacterial cells at a significantly reduced time and cost.

Going further, the group made use of another natural property of living cells: they are capable of responding to gradients. Using artificial intelligence algorithms, the group succeeded in harnessing this natural ability to make an analog-to-digital converter – a cell capable of reporting whether the concentration of a particular molecule is “low”, “medium”, or “high.” Such a sensor could be used to deliver the correct dosage of medicaments, including cancer immunotherapy and diabetes drugs.

Of the researchers working on this study, Dr. Luna Rizik and Dr. Mouna Habib hail from the Department of Biomedical Engineering, while Dr. Loai Danial is from the Andrew and Erna Viterbi Faculty of Electrical Engineering. It is bringing the two fields together that allowed the group to make the progress they did in the field of synthetic biology.

This work was partially funded by the Neubauer Family Foundation, the Israel Science Foundation (ISF), European Union’s Horizon 2020 Research and Innovation Programme, the Technion’s Lorry I. Lokey interdisciplinary Center for Life Sciences and Engineering, and the [US Department of Defense] Defense Advanced Research Projects Agency [DARPA].
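The OR/AND switching and the low/medium/high readout described above both come down to comparing a summed signal against tunable thresholds. Here is a toy numerical analogy (mine, not the paper’s) of how a single tunable unit can behave as an OR gate, an AND gate, or a graded “analog-to-digital” sensor, which is essentially what reprogramming the engineered cells amounts to:

```python
def reporter(signal: float, thresholds: tuple) -> int:
    """A thresholded 'reporter' unit: how many thresholds has the chemical
    signal crossed? All numbers are illustrative, not from the paper."""
    return sum(signal >= t for t in thresholds)

def gate(chem_a: int, chem_b: int, threshold: int) -> bool:
    """Two-input logic by summing inputs against one tunable threshold:
    a low threshold gives OR, a high one gives AND."""
    return (chem_a + chem_b) >= threshold

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "OR:", gate(a, b, 1), "AND:", gate(a, b, 2))

# A graded 'analog-to-digital' readout: low / medium / high concentration bands.
levels = ["low", "medium", "high"]
for concentration in (0.05, 0.4, 0.9):
    band = levels[min(reporter(concentration, (0.2, 0.7)), 2)]
    print(concentration, "->", band)
```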

Here’s a link to and a citation for the paper,

Synthetic neuromorphic computing in living cells by Luna Rizik, Loai Danial, Mouna Habib, Ron Weiss & Ramez Daniel. Nature Communications volume 13, Article number: 5602 (2022) DOI: https://doi.org/10.1038/s41467-022-33288-8 Published: 24 September 2022

This paper is open access.

Even AI can make mistakes. So, how do you fix an AI neural network?

It seems that not only can an artificial neural network be mistaken but that we should also figure out how to ‘change its mind’. A February 10, 2022 Singapore Management University (SMU) press release (also on EurekAlert but published on April 12, 2022) by Alvin Lee announces how one researcher’s project will tackle the problem (Note: Links have been removed),

For the longest time, owners of Tesla cars have complained about “phantom braking”, the phenomenon of their vehicles suddenly stopping in response to imagined hazards of oncoming traffic or stationary objects on the roads. Yet when the company recalled a version of its Full Self-Driving software in October 2021, complaints over “phantom braking” jumped to 107 over the next three months compared to just 34 in the preceding 22 months.

Tesla’s troubles, which included the recent 54,000-vehicle recall over disobeying ‘Stop’ signs, underline the difficulty in fixing the neural networks that power the self-driving artificial intelligence (AI) systems. At their core, neural networks are fundamentally unlike human-written if-then-else computer programs that can be picked apart and fixed, line by line.

“Neural networks don’t work that way,” observes Sun Jun, Professor of Computer Science at Singapore Management University (SMU). “Even if we see a wrong result, you have no idea what’s going on. Furthermore, if I see that there’s a security threat, how do I patch the system so that it’s secure?”

The project

That issue forms the core of Professor Sun’s project “The Science of Certified AI Systems”, for which he has secured an MOE [Ministry of Education] Academic Research Funding Tier 3 grant. The project aims to develop:

A scientific foundation for analysing AI systems;

A set of effective tools for analysing and repairing neural networks; and

Certification standards which provide actionable guidelines.

One area the project seeks to address is the robustness of AI systems, which Professor Sun illustrates with a simple example.

“If I have a face recognition software, and we feed it a picture of Barack Obama, the AI system should identify it as Barack Obama. If I change just one or two pixels on an image, to the human eye it doesn’t change anything. But to the neural network, suddenly it might identify the image as that of Donald Trump, nothing like the original picture.

“Just by the difference of one pixel, you can change the label. That’s a problem of robustness.”

The traditional way of addressing such a problem in computer programs would be through looking at each line of code to find a causal chain to establish causality, and then fix the code. In neural networks, because countless neurons or nodes interact with one another to produce the final result, it is near impossible to accurately identify a single neuron as the cause for a wrong result.

“In the case of neural networks, every neuron participated in producing a result. So you can basically say, ‘With this wrong result, every neuron is responsible,’” explains Professor Sun, who is also the Deputy Director of SMU’s Research Lab for Intelligent Software Engineering.

“[In this project] We try to measure which neurons are more responsible for producing this outcome, and then trace back to the ones that are impacting the final probability distribution more. In the end I could say, ‘These neurons are consistently and significantly contributing to the wrong results’ and that might be the most important neurons that we should look at. If you change these neurons, maybe it will somehow fix the output.”
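Neither the one-pixel fragility nor the “which neurons are responsible” idea is spelled out in implementation terms in the press release, so the sketch below is my own illustration of both: a brute-force single-pixel robustness probe, and a generic attribution heuristic that ranks hidden neurons by |activation × gradient of the wrong-class output| over the misclassified examples. The `predict` function, the array shapes and the scoring rule are all assumptions, not Professor Sun’s actual methods:

```python
import numpy as np

def one_pixel_flips(predict, image, budget=1.0):
    """Brute-force robustness probe: does changing any single pixel by +/- budget
    flip the predicted label? `predict` is a stand-in for the model under test,
    mapping an image array with values in [0, 1] to a class label."""
    original = predict(image)
    height, width = image.shape[:2]
    for i in range(height):
        for j in range(width):
            for delta in (budget, -budget):
                perturbed = image.copy()
                perturbed[i, j] = np.clip(perturbed[i, j] + delta, 0.0, 1.0)
                if predict(perturbed) != original:
                    return (i, j, delta)   # one pixel was enough to change the label
    return None                            # no single-pixel flip found at this budget

def rank_responsible_neurons(activations, grad_wrong_logit):
    """Generic attribution heuristic: score each hidden neuron by
    |activation x gradient of the wrong-class logit|, averaged over the
    misclassified examples, and rank from most to least 'responsible'.

    activations:      (num_misclassified_examples, num_neurons)
    grad_wrong_logit: same shape, d(wrong logit)/d(activation) per example
    """
    contribution = np.abs(activations * grad_wrong_logit).mean(axis=0)
    return np.argsort(contribution)[::-1]
```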

Solving problems

While more organisations are using or exploring the use of AI and neural networks to enhance their performance, the costs involved often lead decision makers to adopt an open-source solution instead of building it from scratch.

Professor Sun notes that it is “pretty easy to basically embed some malicious neurons into a neural network (i.e., a Trojan horse)”. In the case of facial recognition software, it could be easily tricked with what is known as a ‘backdoor’ to recognise an unauthorised person as someone within the organisation.

How then can the project help address such issues and challenges?

“We’ll be producing a set of software toolkits to tell you whether your neural network is robust, whether it may potentially contain backdoors,” Professor Sun tells the Office of Research and Tech Transfer. “Or we can certify your neural network is free of certain attacks.

“Another way would be fixing neural networks. We could produce software such that if you give me a neural network and suspected security problems it might have, I could make your neural network more robust and secure.”

Professor Sun reveals that a global technology company has been in touch with him to fix its neural networks. The dream, he says, is to create a “whole framework for developing neural networks and AI systems in general so that you can build your robust, secure AI systems on top of our fundamental framework”.

People matter

Despite the newfangled technology attracting all the attention, Professor Sun singled out the human aspect that often gets lost in such discussions.

“All these neural networks are trained on data collected by humans,” Professor Sun points out. “If we want to be able to develop the AI systems which are indeed secure, we have to look at the process as well. We must tell our human experts how to collect data, clean data, test the system, follow rigid protocols. This will help us to eliminate human errors.”

I’m fascinated by the inclusion of this image with the press release,

Caption: How do you identify which neurons are responsible for a neural network’s wrong outputs? SMU Professor Sun Jun’s latest project aims to address that issue and fix it. Credit: Singapore Management University

Why is Professor Sun Jun in front of the Louvre? Is there something about this image that hearkens back to errors in an artificial neural network? Or, perhaps it was just a nice picture.

Art appraised by algorithm

Artificial intelligence has been introduced to art appraisals and auctions by way of an academic research project. A January 27, 2022 University of Luxembourg press release (also on EurekAlert but published February 2, 2022) announces the research, Note: Links have been removed,

Does artificial intelligence have a place in such a fickle and quirky environment as the secondary art market? Can an algorithm learn to predict the value assigned to an artwork at auction?

These questions, among others, were analysed by a group of researchers including Roman Kräussl, professor at the Department of Finance at the University of Luxembourg and co-authors Mathieu Aubry (École des Ponts ParisTech), Gustavo Manso (Haas School of Business, University of California at Berkeley), and Christophe Spaenjers (HEC Paris). The resulting paper, Biased Auctioneers, has been accepted for publication in the top-ranked Journal of Finance.

Training a neural network to appraise art 

In this study, which combines fields of finance and computer science, researchers used machine learning and artificial intelligence to create a neural network algorithm that mimics the work of human appraisers by generating price predictions for art at auction. This algorithm relies on data capturing both visual and non-visual characteristics of artwork. The authors of this study unleashed their algorithm on a vast set of art sales data capturing 1.2 million painting auctions from 2008 to 2014, training the neural network with both an image of the artwork, and information such as the artist, the medium and the auction house where the work was sold. Once trained on this dataset, the authors asked the neural network to predict the auction house pre-sale estimates, ‘buy-in’ price (the minimum price at which the work will be sold), as well as the final auction price for art sales in the year 2015. It then became possible to compare the algorithm’s estimates with the real-world data, and determine whether the relative level of the machine-generated price predictions predicts relative price outcomes.

The path towards a more efficient market?

Not too surprisingly, the human experts’ predications [sic] were more accurate than the algorithm, whose prediction, in turn, was more accurate than the standard linear hedonic model which researchers used to benchmark the study. Reasons for the discrepancy between human and machine include, as the authors argue, mainly access to a larger amount of information about the individual works of art including provenance, condition and historical context. Although interesting, the authors’ goal was not to pit human against machine on this specific task. On the contrary, the authors aimed at discovering the usefulness and potential applications of machine-based valuations. For example, using such an algorithm, it may be possible to determine whether an auctioneer’s pre-sale valuations are too pessimistic or too optimistic, effectively predicting the prediction errors of the auctioneers. Ultimately, this information could be used to correct for these kinds of man-made market inefficiencies.

Beyond the auction block

The implications of this methodology and the applied computational power, however, are not limited to the art world. Other markets trading in ‘real’ assets, which rely heavily on human appraisers, namely the real estate market, can benefit from the research. While AI is not likely to replace humans just yet, machine-learning technology as demonstrated by the researchers may become an important tool for investors and intermediaries, who wish to gain access to as much information, as quickly and as cheaply as possible.
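The press release does not reproduce the model architecture, but the description — an image of the artwork plus categorical sale metadata (artist, medium, auction house) feeding a single price prediction — maps naturally onto a small multimodal network. The PyTorch sketch below is purely illustrative; the layer sizes, field names and the regression head are my assumptions, not the authors’ design:

```python
import torch
import torch.nn as nn

class ArtPriceModel(nn.Module):
    """Sketch of a multimodal price predictor: an image encoder plus embeddings
    for categorical sale metadata. Sizes and field names are illustrative."""
    def __init__(self, n_artists, n_mediums, n_houses):
        super().__init__()
        self.image_encoder = nn.Sequential(            # stand-in CNN for the artwork photo
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.artist = nn.Embedding(n_artists, 16)
        self.medium = nn.Embedding(n_mediums, 8)
        self.house = nn.Embedding(n_houses, 8)
        self.head = nn.Linear(32 + 16 + 8 + 8, 1)      # predicts a (log) price or estimate

    def forward(self, image, artist_id, medium_id, house_id):
        features = torch.cat([
            self.image_encoder(image),
            self.artist(artist_id),
            self.medium(medium_id),
            self.house(house_id),
        ], dim=1)
        return self.head(features).squeeze(1)

# e.g. a batch of 4 paintings with random stand-in data
model = ArtPriceModel(n_artists=1000, n_mediums=20, n_houses=50)
images = torch.rand(4, 3, 64, 64)
price = model(images, torch.randint(0, 1000, (4,)),
              torch.randint(0, 20, (4,)), torch.randint(0, 50, (4,)))
print(price.shape)   # torch.Size([4])
```

In practice one would presumably swap the stand-in convolutional encoder for a pretrained vision model and regress something like the logarithm of the hammer price, but the overall shape — concatenate image features with metadata embeddings, predict a price — is the point here.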

Here’s a link to and a citation for the paper,

Biased Auctioneers by Mathieu Aubry, Roman Kräussl, Gustavo Manso, and Christophe Spaenjers. Journal of Finance, Forthcoming [print issue], Available at SSRN: https://ssrn.com/abstract=3347175 or http://dx.doi.org/10.2139/ssrn.3347175 Published online: January 6, 2022

This paper appears to be open access online and was last revised on January 13, 2022.

China’s neuromorphic chips: Darwin and Tianjic

I believe that China has more than two neuromorphic chips. The two being featured here are the ones for which I was easily able to find information.

The Darwin chip

The first information (that I stumbled across) about China and a neuromorphic chip (Darwin) was in a December 22, 2015 Science China Press news release on EurekAlert,

Artificial Neural Network (ANN) is a type of information processing system based on mimicking the principles of biological brains, and has been broadly applied in application domains such as pattern recognition, automatic control, signal processing, decision support systems and artificial intelligence. Spiking Neural Network (SNN) is a type of biologically-inspired ANN that performs information processing based on discrete-time spikes. It is more biologically realistic than classic ANNs, and can potentially achieve a much better performance-power ratio. Recently, researchers from Zhejiang University and Hangzhou Dianzi University in Hangzhou, China successfully developed the Darwin Neural Processing Unit (NPU), a neuromorphic hardware co-processor based on Spiking Neural Networks, fabricated by standard CMOS technology.

With the rapid development of the Internet-of-Things and intelligent hardware systems, a variety of intelligent devices are pervasive in today’s society, providing many services and convenience to people’s lives, but they also raise challenges of running complex intelligent algorithms on small devices. Sponsored by the College of Computer Science of Zhejiang University, the research group led by Dr. De Ma from Hangzhou Dianzi University and Dr. Xiaolei Zhu from Zhejiang University has developed a co-processor named Darwin. The Darwin NPU aims to provide hardware acceleration of intelligent algorithms, with a target application domain of resource-constrained, low-power small embedded devices. It has been fabricated by a 180nm standard CMOS process, supporting a maximum of 2048 neurons, more than 4 million synapses and 15 different possible synaptic delays. It is highly configurable, supporting reconfiguration of SNN topology and many parameters of neurons and synapses. Figure 1 shows photos of the die and the prototype development board, which supports input/output in the form of neural spike trains via USB port.

The successful development of Darwin demonstrates the feasibility of real-time execution of Spiking Neural Networks in resource-constrained embedded systems. It supports flexible configuration of a multitude of parameters of the neural network, hence it can be used to implement different functionalities as configured by the user. Its potential applications include intelligent hardware systems, robotics, brain-computer interfaces, and others. Since it uses spikes for information processing and transmission, similar to biological neural networks, it may be suitable for analysis and processing of biological spiking neural signals, and building brain-computer interface systems by interfacing with animal or human brains. As a prototype application in Brain-Computer Interfaces, Figure 2 [not included here] describes an application example of recognizing the user’s motor imagery intention via real-time decoding of EEG signals, i.e., whether he is thinking of left or right, and using it to control the movement direction of a basketball in the virtual environment. Different from conventional EEG signal analysis algorithms, the input and output to Darwin are both neural spikes: the input is spike trains that encode EEG signals; after processing by the neural network, the output neuron with the highest firing rate is chosen as the classification result.
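That closing detail — the output neuron with the highest firing rate is chosen as the classification result — is simple enough to show directly. A minimal sketch, with the spike data and labels invented for illustration:

```python
import numpy as np

def decode_motor_imagery(output_spikes, labels=("left", "right")):
    """Decode an SNN's answer the way the Darwin demo is described: the output
    neuron with the highest firing rate wins. `output_spikes` is a (neurons, time)
    array of 0/1 spike events; names and shapes are illustrative."""
    rates = output_spikes.sum(axis=1)            # spike count per output neuron
    return labels[int(np.argmax(rates))]

# e.g. two output neurons observed over 10 time steps
spikes = np.array([[0, 1, 0, 0, 1, 0, 0, 0, 1, 0],   # "left" neuron
                   [1, 0, 1, 1, 0, 1, 0, 1, 0, 1]])  # "right" neuron
print(decode_motor_imagery(spikes))               # -> "right"
```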

The most recent development for this chip was announced in a September 2, 2019 Zhejiang University press release (Note: Links have been removed),

The second generation of the Darwin Neural Processing Unit (Darwin NPU 2) as well as its corresponding toolchain and micro-operating system was released in Hangzhou recently. This research was led by Zhejiang University, with Hangzhou Dianzi University and Huawei Central Research Institute participating in the development and algorithms of the chip. The Darwin NPU 2 can be primarily applied to smart Internet of Things (IoT). It can support up to 150,000 neurons and has achieved the largest-scale neurons on a nationwide basis.

The Darwin NPU 2 is fabricated by standard 55nm CMOS technology. Every “neuromorphic” chip is made up of 576 kernels, each of which can support 256 neurons. It contains over 10 million synapses which can construct a powerful brain-inspired computing system.

“A brain-inspired chip can work like the neurons inside a human brain and it is remarkably unique in image recognition, visual and audio comprehension and naturalistic language processing,” said MA De, an associate professor at the College of Computer Science and Technology on the research team.

“In comparison with traditional chips, brain-inspired chips are more adept at processing ambiguous data, say, perception tasks. Another prominent advantage is their low energy consumption. In the process of information transmission, only those neurons that receive and process spikes will be activated while other neurons will stay dormant. In this case, energy consumption can be extremely low,” said Dr. ZHU Xiaolei at the School of Microelectronics.

To cater to the demands for voice business, Huawei Central Research Institute designed an efficient spiking neural network algorithm in accordance with the defining feature of the Darwin NPU 2 architecture, thereby increasing computing speeds and improving recognition accuracy tremendously.

Scientists have developed a host of applications, including gesture recognition, image recognition, voice recognition and decoding of electroencephalogram (EEG) signals, on the Darwin NPU 2 and reduced energy consumption by at least two orders of magnitude.

In comparison with the first generation of the Darwin NPU which was developed in 2015, the Darwin NPU 2 has escalated the number of neurons by two orders of magnitude from 2048 neurons and augmented the flexibility and plasticity of the chip configuration, thus expanding the potential for applications appreciably. The improvement in the brain-inspired chip will bring in its wake the revolution of computer technology and artificial intelligence. At present, the brain-inspired chip adopts a relatively simplified neuron model, but neurons in a real brain are far more sophisticated and many biological mechanisms have yet to be explored by neuroscientists and biologists. It is expected that in the not-too-distant future, a fascinating improvement on the Darwin NPU 2 will come over the horizon.

I haven’t been able to find a recent (i.e., post 2017) research paper featuring Darwin but there is another chip and research on that one was published in July 2019. First, the news.

The Tianjic chip

A July 31, 2019 article in the New York Times by Cade Metz describes the research and offers what seems to be a jaundiced perspective about the field of neuromorphic computing (Note: A link has been removed),

As corporate giants like Ford, G.M. and Waymo struggle to get their self-driving cars on the road, a team of researchers in China is rethinking autonomous transportation using a souped-up bicycle.

This bike can roll over a bump on its own, staying perfectly upright. When the man walking just behind it says “left,” it turns left, angling back in the direction it came.

It also has eyes: It can follow someone jogging several yards ahead, turning each time the person turns. And if it encounters an obstacle, it can swerve to the side, keeping its balance and continuing its pursuit.

… Chinese researchers who built the bike believe it demonstrates the future of computer hardware. It navigates the world with help from what is called a neuromorphic chip, modeled after the human brain.

Here’s a video, released by the researchers, demonstrating the chip’s abilities,

Now back to Metz’s July 31, 2019 article (Note: A link has been removed),

The short video did not show the limitations of the bicycle (which presumably tips over occasionally), and even the researchers who built the bike admitted in an email to The Times that the skills on display could be duplicated with existing computer hardware. But in handling all these skills with a neuromorphic processor, the project highlighted the wider effort to achieve new levels of artificial intelligence with novel kinds of chips.

This effort spans myriad start-up companies and academic labs, as well as big-name tech companies like Google, Intel and IBM. And as the Nature paper demonstrates, the movement is gaining significant momentum in China, a country with little experience designing its own computer processors, but which has invested heavily in the idea of an “A.I. chip.”

If you can get past what seems to be a patronizing attitude, there are some good explanations and cogent criticisms in the piece (Metz’s July 31, 2019 article, Note: Links have been removed),

… it faces significant limitations.

A neural network doesn’t really learn on the fly. Engineers train a neural network for a particular task before sending it out into the real world, and it can’t learn without enormous numbers of examples. OpenAI, a San Francisco artificial intelligence lab, recently built a system that could beat the world’s best players at a complex video game called Dota 2. But the system first spent months playing the game against itself, burning through millions of dollars in computing power.

Researchers aim to build systems that can learn skills in a manner similar to the way people do. And that could require new kinds of computer hardware. Dozens of companies and academic labs are now developing chips specifically for training and operating A.I. systems. The most ambitious projects are the neuromorphic processors, including the Tianjic chip under development at Tsinghua University in China.

Such chips are designed to imitate the network of neurons in the brain, not unlike a neural network but with even greater fidelity, at least in theory.

Neuromorphic chips typically include hundreds of thousands of faux neurons, and rather than just processing 1s and 0s, these neurons operate by trading tiny bursts of electrical signals, “firing” or “spiking” only when input signals reach critical thresholds, as biological neurons do.
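That “firing only when input signals reach critical thresholds” behaviour is usually modelled as a leaky integrate-and-fire neuron. Here is a minimal sketch with illustrative parameter values (nothing here is taken from the Tianjic paper):

```python
import numpy as np

def lif_neuron(input_current, threshold=1.0, leak=0.9, v_reset=0.0):
    """Minimal leaky integrate-and-fire neuron: the membrane potential accumulates
    input, decays ('leaks') each step, and emits a spike only when it crosses
    the threshold, at which point it resets. Parameters are illustrative."""
    v, spikes = 0.0, []
    for current in input_current:
        v = leak * v + current            # integrate with leak
        if v >= threshold:
            spikes.append(1)
            v = v_reset                   # fire and reset
        else:
            spikes.append(0)
    return spikes

print(lif_neuron(np.full(12, 0.3)))       # constant drive -> periodic spikes
```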

Tiernan Ray’s August 3, 2019 article about the chip for ZDNet.com offers some thoughtful criticism with a side dish of snark (Note: Links have been removed),

Nature magazine’s cover story [July 31, 2019] is about a Chinese chip [Tianjic chip] that can run traditional deep learning code and also perform “neuromorphic” operations in the same circuitry. The work’s value seems obscured by a lot of hype about “artificial general intelligence” that has no real justification.

The term “artificial general intelligence,” or AGI, doesn’t actually refer to anything, at this point, it is merely a placeholder, a kind of Rorschach Test for people to fill the void with whatever notions they have of what it would mean for a machine to “think” like a person.

Despite that fact, or perhaps because of it, AGI is an ideal marketing term to attach to a lot of efforts in machine learning. Case in point, a research paper featured on the cover of this week’s Nature magazine about a new kind of computer chip developed by researchers at China’s Tsinghua University that could “accelerate the development of AGI,” they claim.

The chip is a strange hybrid of approaches, and is intriguing, but the work leaves unanswered many questions about how it’s made, and how it achieves what researchers claim of it. And some longtime chip observers doubt the impact will be as great as suggested.

“This paper is an example of the good work that China is doing in AI,” says Linley Gwennap, longtime chip-industry observer and principal analyst with chip analysis firm The Linley Group. “But this particular idea isn’t going to take over the world.”

The premise of the paper, “Towards artificial general intelligence with hybrid Tianjic chip architecture,” is that to achieve AGI, computer chips need to change. That’s an idea supported by fervent activity these days in the land of computer chips, with lots of new chip designs being proposed specifically for machine learning.

The Tsinghua authors specifically propose that the mainstream machine learning of today needs to be merged in the same chip with what’s called “neuromorphic computing.” Neuromorphic computing, first conceived by Caltech professor Carver Mead in the early ’80s, has been an obsession for firms including IBM for years, with little practical result.

[Missing details about the chip] … For example, the part is said to have “reconfigurable” circuits, but how the circuits are to be reconfigured is never specified. It could be so-called “field programmable gate array,” or FPGA, technology or something else. Code for the project is not provided by the authors as it often is for such research; the authors offer to provide the code “on reasonable request.”

More important is the fact the chip may have a hard time stacking up to a lot of competing chips out there, says analyst Gwennap. …

“What the paper calls ANN and SNN are two very different means of solving similar problems, kind of like rotating (helicopter) and fixed wing (airplane) are for aviation,” says Gwennap. “Ultimately, I expect ANN [?] and SNN [spiking neural network] to serve different end applications, but I don’t see a need to combine them in a single chip; you just end up with a chip that is OK for two things but not great for anything.”

But you also end up generating a lot of buzz, and given the tension between the U.S. and China over all things tech, and especially A.I., the notion China is stealing a march on the U.S. in artificial general intelligence — whatever that may be — is a summer sizzler of a headline.

ANN could be either artificial neural network or something mentioned earlier in Ray’s article, a shortened version of CANN [continuous attractor neural network].

Shelly Fan’s August 7, 2019 article for the SingularityHub is almost as enthusiastic about the work as the podcasters for Nature magazine were (a little more about that later),

The study shows that China is readily nipping at the heels of Google, Facebook, NVIDIA, and other tech behemoths investing in developing new AI chip designs—hell, with billions in government investment it may have already had a head start. A sweeping AI plan from 2017 looks to catch up with the US on AI technology and application by 2020. By 2030, China’s aiming to be the global leader—and a champion for building general AI that matches humans in intellectual competence.

The country’s ambition is reflected in the team’s parting words.

“Our study is expected to stimulate AGI [artificial general intelligence] development by paving the way to more generalized hardware platforms,” said the authors, led by Dr. Luping Shi at Tsinghua University.

Using nanoscale fabrication, the team arranged 156 FCores, containing roughly 40,000 neurons and 10 million synapses, onto a chip less than a fifth of an inch in length and width. Initial tests showcased the chip’s versatility, in that it can run both SNNs and deep learning algorithms such as the popular convolutional neural network (CNNs) often used in machine vision.

Compared to IBM TrueNorth, the density of Tianjic’s cores increased by 20 percent, speeding up performance ten times and increasing bandwidth at least 100-fold, the team said. When pitted against GPUs, the current hardware darling of machine learning, the chip increased processing throughput up to 100 times, while using just a sliver (1/10,000) of energy.

BTW, Fan is a neuroscientist (from her SingularityHub profile page),

Shelly Xuelai Fan is a neuroscientist-turned-science writer. She completed her PhD in neuroscience at the University of British Columbia, where she developed novel treatments for neurodegeneration. While studying biological brains, she became fascinated with AI and all things biotech. Following graduation, she moved to UCSF [University of California at San Francisco] to study blood-based factors that rejuvenate aged brains. She is the co-founder of Vantastic Media, a media venture that explores science stories through text and video, and runs the award-winning blog NeuroFantastic.com. Her first book, “Will AI Replace Us?” (Thames & Hudson) will be out April 2019.

Onto Nature. Here’s a link to and a citation for the paper,

Towards artificial general intelligence with hybrid Tianjic chip architecture by Jing Pei, Lei Deng, Sen Song, Mingguo Zhao, Youhui Zhang, Shuang Wu, Guanrui Wang, Zhe Zou, Zhenzhi Wu, Wei He, Feng Chen, Ning Deng, Si Wu, Yu Wang, Yujie Wu, Zheyu Yang, Cheng Ma, Guoqi Li, Wentao Han, Huanglong Li, Huaqiang Wu, Rong Zhao, Yuan Xie & Luping Shi. Nature volume 572, pages 106–111 (2019) DOI: https://doi.org/10.1038/s41586-019-1424-8 Published: 31 July 2019 Issue Date: 01 August 2019

This paper is behind a paywall.

The July 31, 2019 Nature podcast includes a segment about the Tianjic chip research from China, starting at the 9 min. 13 sec. mark (AI hardware), or you can scroll down about 55% of the way to the transcript of the interview with Luke Fleet, the Nature editor who dealt with the paper.

Some thoughts

The pundits put me in mind of my own reaction when I heard about phones that could take pictures. I didn’t see the point but, as it turned out, there was a perfectly good reason for combining what had been two separate activities into one device. It was no longer just a telephone and I had completely missed the point.

This too may be the case with the Tianjic chip. I think it’s too early to say whether or not it represents a new type of chip or if it’s a dead end.

Connecting biological and artificial neurons (in UK, Switzerland, & Italy) over the web

Caption: The virtual lab connecting Southampton, Zurich and Padova. Credit: University of Southampton

A February 26, 2020 University of Southampton press release (also on EurekAlert) describes this work,

Research on novel nanoelectronics devices led by the University of Southampton enabled brain neurons and artificial neurons to communicate with each other. This study has for the first time shown how three key emerging technologies can work together: brain-computer interfaces, artificial neural networks and advanced memory technologies (also known as memristors). The discovery opens the door to further significant developments in neural and artificial intelligence research.

Brain functions are made possible by circuits of spiking neurons, connected together by microscopic, but highly complex links called ‘synapses’. In this new study, published in the scientific journal Nature Scientific Reports, the scientists created a hybrid neural network where biological and artificial neurons in different parts of the world were able to communicate with each other over the internet through a hub of artificial synapses made using cutting-edge nanotechnology. This is the first time the three components have come together in a unified network.

During the study, researchers based at the University of Padova in Italy cultivated rat neurons in their laboratory, whilst partners from the University of Zurich and ETH Zurich created artificial neurons on Silicon microchips. The virtual laboratory was brought together via an elaborate setup controlling nanoelectronic synapses developed at the University of Southampton. These synaptic devices are known as memristors.

The Southampton based researchers captured spiking events being sent over the internet from the biological neurons in Italy and then distributed them to the memristive synapses. Responses were then sent onward to the artificial neurons in Zurich also in the form of spiking activity. The process simultaneously works in reverse too; from Zurich to Padova. Thus, artificial and biological neurons were able to communicate bidirectionally and in real time.

Themis Prodromakis, Professor of Nanotechnology and Director of the Centre for Electronics Frontiers at the University of Southampton said “One of the biggest challenges in conducting research of this kind and at this level has been integrating such distinct cutting edge technologies and specialist expertise that are not typically found under one roof. By creating a virtual lab we have been able to achieve this.”

The researchers now anticipate that their approach will ignite interest from a range of scientific disciplines and accelerate the pace of innovation and scientific advancement in the field of neural interfaces research. In particular, the ability to seamlessly connect disparate technologies across the globe is a step towards the democratisation of these technologies, removing a significant barrier to collaboration.

Professor Prodromakis added “We are very excited with this new development. On one side it sets the basis for a novel scenario that was never encountered during natural evolution, where biological and artificial neurons are linked together and communicate across global networks; laying the foundations for the Internet of Neuro-electronics. On the other hand, it brings new prospects to neuroprosthetic technologies, paving the way towards research into replacing dysfunctional parts of the brain with AI [artificial intelligence] chips.”

I’m fascinated by this work and after taking a look at the paper, I have to say, the paper is surprisingly accessible. In other words, I think I get the general picture. For example (from the Introduction to the paper; citation and link follow further down),

… To emulate plasticity, the memristor MR1 is operated as a two-terminal device through a control system that receives pre- and post-synaptic depolarisations from one silicon neuron (ANpre) and one biological neuron (BN), respectively. …

If I understand this properly, they’ve integrated a biological neuron and an artificial neuron in a single system across three countries.
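Here is how I picture the setup, reduced to a schematic relay loop: spike events arrive over the network from one site, pass through a memristor-like synapse whose weight both scales the event and gets nudged by the paired activity, and the weighted event is forwarded to the other site. The classes and callables below are placeholders of my own, not the project’s actual control system or networking code:

```python
import time

class MemristiveSynapse:
    """Toy stand-in for the memristor hub: a weight that is nudged up or down
    by paired pre/post spikes (plasticity) and scales relayed events."""
    def __init__(self, weight=0.5):
        self.weight = weight

    def relay(self, spike_amplitude):
        return self.weight * spike_amplitude

    def update(self, pre_spike, post_spike, lr=0.01):
        # crude plasticity rule: strengthen on coincident activity, otherwise decay slightly
        if pre_spike and post_spike:
            self.weight = min(self.weight + lr, 1.0)
        else:
            self.weight = max(self.weight - lr * 0.1, 0.0)

def relay_loop(receive_spike, send_spike, synapse, duration_s=1.0):
    """Schematic of the hub: pull spike events arriving from one site (e.g. the
    biological culture in Padova), pass them through the memristive synapse, and
    push the weighted events on to the other site (e.g. the silicon neurons in
    Zurich). `receive_spike` and `send_spike` are placeholder I/O callables."""
    t_end = time.time() + duration_s
    while time.time() < t_end:
        event = receive_spike()              # None if nothing has arrived
        if event is not None:
            out = synapse.relay(event)
            post_fired = send_spike(out)     # True if the far-end neuron spiked
            synapse.update(pre_spike=True, post_spike=post_fired)
```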

For those who care to venture forth, here’s a link and a citation for the paper,

Memristive synapses connect brain and silicon spiking neurons by Alexantrou Serb, Andrea Corna, Richard George, Ali Khiat, Federico Rocchi, Marco Reato, Marta Maschietto, Christian Mayr, Giacomo Indiveri, Stefano Vassanelli & Themistoklis Prodromakis. Scientific Reports volume 10, Article number: 2590 (2020) DOI: https://doi.org/10.1038/s41598-020-58831-9 Published 25 February 2020

The paper is open access.

US white paper on neuromorphic computing (or the nanotechnology-inspired Grand Challenge for future computing)

The US has embarked on a number of what is called “Grand Challenges.” I first came across the concept when reading about the Bill and Melinda Gates (of Microsoft fame) Foundation. I gather these challenges are intended to provide funding for research that advances bold visions.

There is the US National Strategic Computing Initiative, established on July 29, 2015, whose first anniversary results were announced one year to the day later. Within that initiative, a nanotechnology-inspired Grand Challenge for Future Computing was issued and, according to a July 29, 2016 news item on Nanowerk, a white paper on the topic has now been released (Note: A link has been removed),

Today [July 29, 2016], Federal agencies participating in the National Nanotechnology Initiative (NNI) released a white paper (pdf) describing the collective Federal vision for the emerging and innovative solutions needed to realize the Nanotechnology-Inspired Grand Challenge for Future Computing.

The grand challenge, announced on October 20, 2015, is to “create a new type of computer that can proactively interpret and learn from data, solve unfamiliar problems using what it has learned, and operate with the energy efficiency of the human brain.” The white paper describes the technical priorities shared by the agencies, highlights the challenges and opportunities associated with these priorities, and presents a guiding vision for the research and development (R&D) needed to achieve key technical goals. By coordinating and collaborating across multiple levels of government, industry, academia, and nonprofit organizations, the nanotechnology and computer science communities can look beyond the decades-old approach to computing based on the von Neumann architecture and chart a new path that will continue the rapid pace of innovation beyond the next decade.

A July 29, 2016 US National Nanotechnology Coordination Office news release, which originated the news item, further and succinctly describes the contents of the paper,

“Materials and devices for computing have been and will continue to be a key application domain in the field of nanotechnology. As evident by the R&D topics highlighted in the white paper, this challenge will require the convergence of nanotechnology, neuroscience, and computer science to create a whole new paradigm for low-power computing with revolutionary, brain-like capabilities,” said Dr. Michael Meador, Director of the National Nanotechnology Coordination Office. …

The white paper was produced as a collaboration by technical staff at the Department of Energy, the National Science Foundation, the Department of Defense, the National Institute of Standards and Technology, and the Intelligence Community. …

The white paper titled “A Federal Vision for Future Computing: A Nanotechnology-Inspired Grand Challenge” is 15 pp. and it offers tidbits such as this (Note: Footnotes not included),

A new materials base may be needed for future electronic hardware. While most of today’s electronics use silicon, this approach is unsustainable if billions of disposable and short-lived sensor nodes are needed for the coming Internet-of-Things (IoT). To what extent can the materials base for the implementation of future information technology (IT) components and systems support sustainability through recycling and bio-degradability? More sustainable materials, such as compostable or biodegradable systems (polymers, paper, etc.) that can be recycled or reused, may play an important role. The potential role for such alternative materials in the fabrication of integrated systems needs to be explored as well. [p. 5]

The basic architecture of computers today is essentially the same as those built in the 1940s—the von Neumann architecture—with separate compute, high-speed memory, and high-density storage components that are electronically interconnected. However, it is well known that continued performance increases using this architecture are not feasible in the long term, with power density constraints being one of the fundamental roadblocks. Further advances in the current approach using multiple cores, chip multiprocessors, and associated architectures are plagued by challenges in software and programming models. Thus, research and development is required in radically new and different computing architectures involving processors, memory, input-output devices, and how they behave and are interconnected. [p. 7]

Neuroscience research suggests that the brain is a complex, high-performance computing system with low energy consumption and incredible parallelism. A highly plastic and flexible organ, the human brain is able to grow new neurons, synapses, and connections to cope with an ever-changing environment. Energy efficiency, growth, and flexibility occur at all scales, from molecular to cellular, and allow the brain, from early to late stage, to never stop learning and to act with proactive intelligence in both familiar and novel situations. Understanding how these mechanisms work and cooperate within and across scales has the potential to offer tremendous technical insights and novel engineering frameworks for materials, devices, and systems seeking to perform efficient and autonomous computing. This research focus area is the most synergistic with the national BRAIN Initiative. However, unlike the BRAIN Initiative, where the goal is to map the network connectivity of the brain, the objective here is to understand the nature, methods, and mechanisms for computation, and how the brain performs some of its tasks. Even within this broad paradigm, one can loosely distinguish between neuromorphic computing and artificial neural network (ANN) approaches. The goal of neuromorphic computing is oriented towards a hardware approach to reverse engineering the computational architecture of the brain. On the other hand, ANNs include algorithmic approaches arising from machine learning, which in turn could leverage advancements and understanding in neuroscience as well as novel cognitive, mathematical, and statistical techniques. Indeed, the ultimate intelligent systems may as well be the result of merging existing ANN (e.g., deep learning) and bio-inspired techniques. [p. 8]

As government documents go, this is quite readable.

For anyone interested in learning more about the future federal plans for computing in the US, there is a July 29, 2016 posting on the White House blog celebrating the first year of the US National Strategic Computing Initiative Strategic Plan (29 pp. PDF; awkward but that is the title).