Category Archives: neuromorphic engineering

Dynamic molecular switches for brainlike computing at the University of Limerick

Aren’t memristors proof that brainlike computing at the molecular and atomic levels is possible? It seems I have misunderstood memristors, according to this November 21, 2022 news item on ScienceDaily,

A breakthrough discovery at University of Limerick in Ireland has revealed for the first time that unconventional brain-like computing at the tiniest scale of atoms and molecules is possible.

Researchers at University of Limerick’s Bernal Institute worked with an international team of scientists to create a new type of organic material that learns from its past behaviour.

The discovery of the ‘dynamic molecular switch’ that emulate[s] synaptic behaviour is revealed in a new study in the international journal Nature Materials.

The study was led by Damien Thompson, Professor of Molecular Modelling in UL’s Department of Physics and Director of SSPC, the UL-hosted Science Foundation Ireland Research Centre for Pharmaceuticals, together with Christian Nijhuis at the Centre for Molecules and Brain-Inspired Nano Systems in University of Twente [Netherlands] and Enrique del Barco from University of Central Florida.

A November 21, 2022 University of Limerick press release (also on EurekAlert), which originated the news item, provides more technical details about the research,

Working during lockdowns, the team developed a two-nanometre thick layer of molecules, which is 50,000 times thinner than a strand of hair and remembers its history as electrons pass through it.

Professor Thompson explained that the “switching probability and the values of the on/off states continually change in the molecular material, which provides a disruptive new alternative to conventional silicon-based digital switches that can only ever be either on or off”.

The newly discovered dynamic organic switch displays all the mathematical logic functions necessary for deep learning, successfully emulating Pavlovian ‘call and response’ synaptic brain-like behaviour.

The researchers demonstrated the new material’s properties using extensive experimental characterisation and electrical measurements supported by multi-scale modelling, spanning from predictive modelling of the molecular structures at the quantum level to analytical mathematical modelling of the electrical data.

To emulate the dynamical behaviour of synapses at the molecular level, the researchers combined fast electron transfer (akin to action potentials and fast depolarization processes in biology) with slow proton coupling limited by diffusion (akin to the role of biological calcium ions or neurotransmitters).

Since the electron transfer and proton coupling steps inside the material occur at very different time scales, the transformation can emulate the plastic behaviour of synapse neuronal junctions, Pavlovian learning, and all logic gates for digital circuits, simply by changing the applied voltage and the duration of voltage pulses during the synthesis, they explained.
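If you’d like to see the two-timescale idea in miniature, here is a toy simulation of my own devising; it is not the team’s model, and every parameter is invented. A fast conduction variable responds to each voltage pulse immediately, while a slow variable integrates pulse history, so identical pulses produce different outputs over time, which is the essence of the history-dependent switching described above,

```python
import numpy as np

# Toy two-timescale switch: a fast variable responds to each voltage
# pulse immediately, while a slow variable (standing in for the
# diffusion-limited proton coupling) integrates pulse history.
# All parameters are hypothetical, not taken from the paper.
dt = 1e-3          # time step (s)
tau_slow = 0.5     # slow time constant (s)
w = 0.0            # slow state: remembers pulse history

def step(v_pulse, w):
    """One time step: fast response gated by slow memory."""
    w += dt / tau_slow * (v_pulse - w)   # slow variable relaxes toward input
    # Fast conduction switches 'on' more readily as w grows, so
    # identical pulses produce different outputs as history accumulates.
    g_fast = 1.0 / (1.0 + np.exp(-10 * (v_pulse + w - 1.0)))
    return g_fast, w

# Repeated identical pulses gradually flip the switch 'on':
for i in range(5):
    for _ in range(200):                 # one 0.2 s pulse
        g, w = step(0.8, w)
    print(f"pulse {i+1}: conductance state = {g:.2f}, memory w = {w:.2f}")
```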

“This was a great lockdown project, with Chris, Enrique and I pushing each other through Zoom meetings and gargantuan email threads to bring our teams’ combined skills in materials modelling, synthesis and characterisation to the point where we could demonstrate these new brain-like computing properties,” explained Professor Thompson.

“The community has long known that silicon technology works completely differently to how our brains work and so we used new types of electronic materials based on soft molecules to emulate brain-like computing networks.”

The researchers explained that the method can in the future be applied to dynamic molecular systems driven by other stimuli such as light and coupled to different types of dynamic covalent bond formation.

This breakthrough opens up a whole new range of adaptive and reconfigurable systems, creating new opportunities in sustainable and green chemistry, from more efficient flow chemistry production of drug products and other value-added chemicals to development of new organic materials for high density computing and memory storage in big data centres.

“This is just the start. We are already busy expanding this next generation of intelligent molecular materials, which is enabling development of sustainable alternative technologies to tackle grand challenges in energy, environment, and health,” explained Professor Thompson.

Professor Norelee Kennedy, Vice President Research at UL, said: “Our researchers are continuously finding new ways of making more effective, more sustainable materials. This latest finding is very exciting, demonstrating the reach and ambition of our international collaborations and showcasing our world-leading ability at UL to encode useful properties into organic materials.”

Here’s a link to and a citation for the paper,

Dynamic molecular switches with hysteretic negative differential conductance emulating synaptic behaviour by Yulong Wang, Qian Zhang, Hippolyte P. A. G. Astier, Cameron Nickle, Saurabh Soni, Fuad A. Alami, Alessandro Borrini, Ziyu Zhang, Christian Honnigfort, Björn Braunschweig, Andrea Leoncini, Dong-Cheng Qi, Yingmei Han, Enrique del Barco, Damien Thompson & Christian A. Nijhuis. Nature Materials volume 21, pages 1403–1411 (2022) DOI: https://doi.org/10.1038/s41563-022-01402-2 Published: 21 November 2022 Issue Date: December 2022

This paper is behind a paywall.

Sleep helps artificial neural networks (ANNs) to keep learning without “catastrophic forgetting”

A November 18, 2022 news item on phys.org describes some of the latest work on neuromorphic (brainlike) computing from the University of California at San Diego (UCSD or UC San Diego), Note: Links have been removed,

Depending on age, humans need 7 to 13 hours of sleep per 24 hours. During this time, a lot happens: Heart rate, breathing and metabolism ebb and flow; hormone levels adjust; the body relaxes. Not so much in the brain.

“The brain is very busy when we sleep, repeating what we have learned during the day,” said Maxim Bazhenov, Ph.D., professor of medicine and a sleep researcher at University of California San Diego School of Medicine. “Sleep helps reorganize memories and presents them in the most efficient way.”

In previously published work, Bazhenov and colleagues have reported how sleep builds rational memory, the ability to remember arbitrary or indirect associations between objects, people or events, and protects against forgetting old memories.

Artificial neural networks leverage the architecture of the human brain to improve numerous technologies and systems, from basic science and medicine to finance and social media. In some ways, they have achieved superhuman performance, such as computational speed, but they fail in one key aspect: When artificial neural networks learn sequentially, new information overwrites previous information, a phenomenon called catastrophic forgetting.

“In contrast, the human brain learns continuously and incorporates new data into existing knowledge,” said Bazhenov, “and it typically learns best when new training is interleaved with periods of sleep for memory consolidation.”

Writing in the November 18, 2022 issue of PLOS Computational Biology, senior author Bazhenov and colleagues discuss how biological models may help mitigate the threat of catastrophic forgetting in artificial neural networks, boosting their utility across a spectrum of research interests. 

A November 18, 2022 UC San Diego news release (also on EurekAlert), which originated the news item, adds some technical details,

The scientists used spiking neural networks that artificially mimic natural neural systems: Instead of information being communicated continuously, it is transmitted as discrete events (spikes) at certain time points.

They found that when the spiking networks were trained on a new task, but with occasional off-line periods that mimicked sleep, catastrophic forgetting was mitigated. Like the human brain, said the study authors, “sleep” for the networks allowed them to replay old memories without explicitly using old training data. 
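As a rough caricature of that training protocol (my sketch, not the paper’s spiking model; all hyperparameters are made up), the structural idea is to alternate supervised training with offline ‘sleep’ phases in which the network replays spontaneous activity and applies unsupervised Hebbian updates that reinforce already-stored weight patterns,

```python
import numpy as np

# Caricature of the protocol, not the paper's spiking model: alternate
# supervised training on a task with offline "sleep" phases in which
# the network replays spontaneous activity and applies unsupervised
# Hebbian updates that reinforce already-stored weight patterns.
rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, (10, 20))          # trainable weights
teacher = rng.normal(0, 0.5, (10, 20))    # defines the "task" to learn

def train_step(x, lr=0.05):
    """Supervised delta-rule update toward the teacher's output."""
    global W
    y = np.tanh(teacher @ x)
    out = np.tanh(W @ x)
    W += lr * np.outer(y - out, x)

def sleep_phase(steps=50, lr=0.01):
    """Offline replay: no labels and no stored training data."""
    global W
    for _ in range(steps):
        x = (rng.random(20) < 0.2).astype(float)  # spontaneous activity
        reactivated = np.tanh(W @ x) > 0.5        # units that replay
        W += lr * np.outer(reactivated, x)        # Hebbian reinforcement
        W *= 0.999                                # weak decay bounds W

for epoch in range(10):                   # interleave awake and sleep
    for _ in range(20):
        train_step(rng.random(20))
    sleep_phase()
print("trained; sleep phases replayed activity without old data")
```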

Memories are represented in the human brain by patterns of synaptic weight — the strength or amplitude of a connection between two neurons. 

“When we learn new information,” said Bazhenov, “neurons fire in specific order and this increases synapses between them. During sleep, the spiking patterns learned during our awake state are repeated spontaneously. It’s called reactivation or replay. 

“Synaptic plasticity, the capacity to be altered or molded, is still in place during sleep and it can further enhance synaptic weight patterns that represent the memory, helping to prevent forgetting or to enable transfer of knowledge from old to new tasks.”

When Bazhenov and colleagues applied this approach to artificial neural networks, they found that it helped the networks avoid catastrophic forgetting. 

“It meant that these networks could learn continuously, like humans or animals. Understanding how human brain processes information during sleep can help to augment memory in human subjects. Augmenting sleep rhythms can lead to better memory. 

“In other projects, we use computer models to develop optimal strategies to apply stimulation during sleep, such as auditory tones, that enhance sleep rhythms and improve learning. This may be particularly important when memory is non-optimal, such as when memory declines in aging or in some conditions like Alzheimer’s disease.”

Here’s a link to and a citation for the paper,

Sleep prevents catastrophic forgetting in spiking neural networks by forming a joint synaptic weight representation by Ryan Golden, Jean Erik Delanois, Pavel Sanda, Maxim Bazhenov. PLOS [Computational Biology] DOI: https://doi.org/10.1371/journal.pcbi.1010628 Published: November 18, 2022

This paper is open access.

Transforming bacterial cells into living computers

If this were a movie instead of a press release, we’d have some ominous music playing over a scene in a pristine white lab. Instead, we have a November 13, 2022 Technion-Israel Institute of Technology press release (also on EurekAlert) where the writer tries to highlight the achievement while downplaying the sort of research (in synthetic biology) that could have people running for the exits,

Bringing together concepts from electrical engineering and bioengineering tools, Technion and MIT [Massachusetts Institute of Technology] scientists collaborated to produce cells engineered to compute sophisticated functions – “biocomputers” of sorts. Graduate students and researchers from Technion – Israel Institute of Technology Professor Ramez Daniel’s Laboratory for Synthetic Biology & Bioelectronics worked together with Professor Ron Weiss from the Massachusetts Institute of Technology to create genetic “devices” designed to perform computations like artificial neural circuits. Their results were recently published in Nature Communications.

The genetic material was inserted into the bacterial cell in the form of a plasmid: a relatively short DNA molecule that remains separate from the bacteria’s “natural” genome. Plasmids also exist in nature, and serve various functions. The research group designed the plasmid’s genetic sequence to function as a simple computer, or more specifically, a simple artificial neural network. This was done by means of several genes on the plasmid regulating each other’s activation and deactivation according to outside stimuli.

What does it mean that a cell is a circuit? How can a computer be biological?

At its most basic level, a computer consists of 0s and 1s, of switches. Operations are performed on these switches: summing them, picking the maximal or minimal value between them, etc. More advanced operations rely on the basic ones, allowing a computer to play chess or fly a rocket to the moon.

In the electronic computers we know, the 0/1 switches take the form of transistors. But our cells are also computers, of a different sort. There, the presence or absence of a molecule can act as a switch. Genes activate, trigger or suppress other genes, forming, modifying, or removing molecules. Synthetic biology aims (among other goals) to harness these processes, to synthesize the switches and program the genes that would make a bacterial cell perform complex tasks. Cells are naturally equipped to sense chemicals and to produce organic molecules. Being able to “computerize” these processes within the cell could have major implications for biomanufacturing and have multiple medical applications.

The Ph.D. students (now doctors) Luna Rizik and Loai Danial, together with Dr. Mouna Habib, under the guidance of Prof. Ramez Daniel from the Faculty of Biomedical Engineering at the Technion, and in collaboration with Prof. Ron Weiss from the Synthetic Biology Center, MIT, were inspired by how artificial neural networks function. They created synthetic computation circuits by combining existing genetic “parts,” or engineered genes, in novel ways, and implemented concepts from neuromorphic electronics into bacterial cells. The result was the creation of bacterial cells that can be trained using artificial intelligence algorithms.

The group were able to create flexible bacterial cells that can be dynamically reprogrammed to switch between reporting whether at least one of two test chemicals is present, or both are (that is, the cells were able to switch between performing the OR and the AND functions). Cells that can change their programming dynamically are capable of performing different operations under different conditions. (Indeed, our cells do this naturally.) Being able to create and control this process paves the way for more complex programming, making the engineered cells suitable for more advanced tasks. Artificial intelligence algorithms allowed the scientists to produce the required genetic modifications to the bacterial cells at significantly reduced time and cost.
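The OR-to-AND switch is easy to picture as a thresholded sum: the same two-input circuit computes OR or AND depending on a single tunable parameter, which is roughly the role the reprogrammable genetic elements play here. A hypothetical sketch (mine, not the researchers’ genetic circuit),

```python
def gene_gate(chem_a: bool, chem_b: bool, threshold: float) -> bool:
    """Report whether the combined 'expression' driven by two chemical
    inputs crosses a tunable threshold. Lowering the threshold gives OR,
    raising it gives AND, mirroring the dynamically reprogrammed cells."""
    signal = 1.0 * chem_a + 1.0 * chem_b
    return signal >= threshold

for a in (False, True):
    for b in (False, True):
        print(a, b, "OR:", gene_gate(a, b, 1.0), "AND:", gene_gate(a, b, 2.0))
```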

Going further, the group made use of another natural property of living cells: they are capable of responding to gradients. Using artificial intelligence algorithms, the group succeeded in harnessing this natural ability to make an analog-to-digital converter – a cell capable of reporting whether the concentration of a particular molecule is “low”, “medium”, or “high.” Such a sensor could be used to deliver the correct dosage of medicaments, including cancer immunotherapy and diabetes drugs.
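That ‘low/medium/high’ readout is, in effect, a two-threshold analog-to-digital conversion. A minimal sketch, with purely illustrative threshold values,

```python
def concentration_level(c: float, low_cut: float = 0.1, high_cut: float = 1.0) -> str:
    """Two-threshold 'analog-to-digital' readout of a molecule's
    concentration, like the engineered gradient-sensing cells.
    The cutoff values are illustrative, not from the paper."""
    if c < low_cut:
        return "low"
    return "medium" if c < high_cut else "high"

print([concentration_level(c) for c in (0.01, 0.5, 3.0)])  # ['low', 'medium', 'high']
```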

Of the researchers working on this study, Dr. Luna Rizik and Dr. Mouna Habib hail from the Department of Biomedical Engineering, while Dr. Loai Danial is from the Andrew and Erna Viterbi Faculty of Electrical Engineering. It is bringing the two fields together that allowed the group to make the progress they did in the field of synthetic biology.

This work was partially funded by the Neubauer Family Foundation, the Israel Science Foundation (ISF), European Union’s Horizon 2020 Research and Innovation Programme, the Technion’s Lorry I. Lokey interdisciplinary Center for Life Sciences and Engineering, and the [US Department of Defense] Defense Advanced Research Projects Agency [DARPA].

Here’s a link to and a citation for the paper,

Synthetic neuromorphic computing in living cells by Luna Rizik, Loai Danial, Mouna Habib, Ron Weiss & Ramez Daniel. Nature Communications volume 13, Article number: 5602 (2022) DOI: https://doi.org/10.1038/s41467-022-33288-8 Published: 24 September 2022

This paper is open access.

A nontraditional artificial synaptic device and roadmap for Chinese research into neuromorphic devices

A November 9, 2022 Science China Press press release on EurekAlert announces a new approach to developing neuromorphic (brainlike) devices,

Neuromorphic computing is an information processing model that simulates the efficiency of the human brain with multifunctionality and flexibility. Currently, artificial synaptic devices, typified by memristors, are extensively used in neuromorphic computing, and different types of neural networks have been developed. However, fixing and redeploying the weights stored by traditional artificial synaptic devices is time-consuming and laborious. Moreover, synaptic strength is primarily reconstructed via software programming and changes to pulse timing, which can result in low efficiency and high energy consumption in neuromorphic computing applications.

In a novel research article published in the Beijing-based National Science Review, Prof. Lili Wang from the Chinese Academy of Sciences and her colleagues present a novel hardware neural network based on a tunable flexible MXene energy storage (FMES) system. The system comprises flexible postsynaptic electrodes and MXene nanosheets, which are connected with the presynaptic electrodes using electrolytes. The potential changes in the ion migration process and adsorption in the supercapacitor can simulate information transmission in the synaptic gap. Additionally, the voltage of the FMES system represents the synaptic weight of the connection between two neurons.

The researchers explored changes in paired-pulse facilitation under different resistance levels to investigate the effect of resistance on the advanced learning and memory behavior of the FMES artificial synaptic system. The results revealed that the larger the standard deviation, the stronger the system’s memory capacity; in other words, as electrical resistance and stimulation time increase, the memory capacity of the FMES artificial synaptic system gradually improves. The system can therefore control the accumulation and dissipation of ions by regulating its resistance value, without changing the external stimulus, which is expected to realize the coupling of sensing signals and storage weights.
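Paired-pulse facilitation is straightforward to model as injected charge that dissipates at a rate set by the resistance: a second pulse arriving before the residue of the first has drained away rides on top of it and so produces a larger response. A toy version (my numbers, not fitted to the FMES device),

```python
import numpy as np

# Toy paired-pulse facilitation (PPF): each pulse injects charge that
# decays with time constant tau = R*C, so larger resistance means
# slower dissipation and stronger facilitation. Illustrative only.
def ppf_ratio(pulse_interval, R=1.0, C=1.0, q_pulse=1.0):
    tau = R * C                                  # larger R -> longer memory
    residue = q_pulse * np.exp(-pulse_interval / tau)
    return (q_pulse + residue) / q_pulse         # >1 means facilitation

for R in (0.5, 1.0, 2.0):
    print(f"R={R}: PPF ratio at interval 1.0 is {ppf_ratio(1.0, R=R):.2f}")
```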

The FMES system can be used to develop neural networks and perform various neuromorphic computing tasks, reaching 95% recognition accuracy on handwritten digit datasets. Additionally, the FMES system can emulate the adaptivity of the human brain to achieve adaptive recognition of similar target datasets. Following the training process, the adaptive recognition accuracy can reach approximately 80%, avoiding the time and energy cost of recalculation.

“In the future, based on this research, different types of sensors can be integrated on the chip to realize an integrated multimodal sensing-computing architecture,” Prof. Lili Wang stated. “The device can perform low-energy calculations, and is expected to solve the problems of high write noise, nonlinear difference, and diffusion under zero bias voltage in certain neuromorphic systems.”

Here’s a link to and a citation for the paper,

Neuromorphic-computing-based adaptive learning using ion dynamics in flexible energy storage devices by Shufang Zhao, Wenhao Ran, Zheng Lou, Linlin Li, Swapnadeep Poddar, Lili Wang, Zhiyong Fan, Guozhen Shen. National Science Review, Volume 9, Issue 11, November 2022, nwac158, DOI: https://doi.org/10.1093/nsr/nwac158 Published: 13 August 2022

This paper is open access.

The future of (and a roadmap for) Chinese research on neuromorphic engineering

While I was trying (unsuccessfully) to find a copy of the press release on the issuing agency’s website, I found this paper,

2022 roadmap on neuromorphic devices & applications research in China by Qing Wan, Changjin Wan, Huaqiang Wu, Yuchao Yang, Xiaohe Huang, Peng Zhou, Lin Chen, Tian-Yu Wang, Yi Li, Kanhao Xue, Yuhui He, Xiangshui Miao, Xi Li, Chenchen Xie, Houpeng Chen, Z. T. Song, Hong Wang, Yue Hao, Junyao Zhang, Jia Huang, Zheng Yu Ren, Li Qiang Zhu, Jianyu Du, Chen Ge, Yang Liu, Guanglong Ding, Ye Zhou, Su-Ting Han, Guosheng Wang, Xiao Yu, Bing Chen, Zhufei Chu, Lunyao Wang, Yinshui Xia, Chen Mu, Feng Lin, Chixiao Chen, Bojun Cheng, Yannan Xing, Weitao Zeng, Hong Chen, Lei Yu, Giacomo Indiveri and Ning Qiao. Neuromorphic Computing and Engineering DOI: 10.1088/2634-4386/ac7a5a *Accepted Manuscript online 20 June 2022 • © 2022 The Author(s). Published by IOP Publishing Ltd

The paper is open access.

*From the IOP’s Definitions of article versions: Accepted Manuscript is ‘the version of the article accepted for publication including all changes made as a result of the peer review process, and which may also include the addition to the article by IOP of a header, an article ID, a cover sheet and/or an ‘Accepted Manuscript’ watermark, but excluding any other editing, typesetting or other changes made by IOP and/or its licensors’.*

This is neither the published version nor the version of record.

Neuromorphic (brainlike) computing and your car (a Mercedes Benz Vision AVTR concept car)

If you’ve ever fantasized about a batmobile of your own, the dream could come true soon,

Mercedes Benz VISION AVTR [downloaded from https://www.mercedes-benz.com/en/innovation/concept-cars/vision-avtr/]

It was the mention of neuromorphic computing in a television ad sometime in September 2022 that sent me on a mission to find out what Mercedes Benz means when they use neuromorphic computing to describe a feature found in their Vision AVTR concept car. First, a little bit about the car (from the Vision AVTR webpage accessed in October 2022),

VISION AVTR – inspired by AVATAR.

The name of the groundbreaking concept vehicle stands not only for the close collaboration in developing the show car together with the AVATAR team but also for ADVANCED VEHICLE TRANSFORMATION. This concept vehicle embodies the vision of Mercedes-Benz designers, engineers and trend researchers for mobility in the distant future.

…

Organic battery technology.

The VISION AVTR was designed in line with its innovative electric drive. This is based on a particularly powerful and compact high-voltage battery. For the first time, the revolutionary battery technology is based on graphene-based [emphasis mine] organic cell chemistry and thus completely eliminates rare, toxic and expensive earths such as metals. Electromobility thus becomes independent of fossil resources. An absolute revolution is also the recyclability by composting, which is 100% recyclable due to the materiality. As a result, Mercedes-Benz underlines the high relevance of a future circular economy in the raw materials sector.

Masterpiece of efficiency.

At Mercedes-Benz, the consideration of efficiency goes far beyond the drive concept, because with increasing digitalisation, the performance of the large number of so-called secondary consumers also comes into focus – along with their efficient energy supply, without negatively affecting the drive power of the vehicle itself. Energy consumption per computing operation is already a key target in the development of new computer chips. This trend will continue in the coming years with the growth of sensors and artificial intelligence in the automotive industry. The neuro-inspired approach of the VISION AVTR, including so-called neuromorphic hardware, promises to minimise the energy requirements of sensors, chips and other components to a few watts. [emphasis mine] Their energy supply is provided by the cached current of the integrated solar plates on the back of the VISION AVTR. The 33 multi-directionally movable surface elements act as “bionic flaps”.

Interior and exterior merge.

For the first time, Mercedes-Benz has worked with a completely new design approach in the design of the VISION AVTR. The holistic concept combines the design disciplines interior, exterior and UX [user experience] from the first sketch. Man and human perception are the starting point of a design process from the inside out. The design process begins with the experience of the passengers and consciously focuses on the perception and needs of the passengers. The goal was to create a car that prolongs the perception of its passengers. It was also a matter of creating an immersive experience space in which passengers connect with each other, with the vehicle and the surrounding area [emphasis mine ] in a unique way.

Intuitive control.

The VISION AVTR already responds to the approach of the passengers by visualising the energy and information flow of the environment with digital neurons that flow through the grille through the wheels to the rear area. The first interaction in the interior between man and vehicle happens completely intuitively via the control unit: by placing the hand on the centre console, the interior comes to life and the vehicle recognises the driver by his breathing. This is made visible on the instrument panel and on the user’s hand. The VISION AVTR thus establishes a biometric connection with the driver [emphasis mine] and increases his awareness of the environment. The digital neurons flow from the interior into the exterior and visualise the flow of energy and information. For example, when driving, the neurons flow over the outside of the vehicle. [emphasis mine] When changing direction, the energy flows to the corresponding side of the vehicle.

The vehicle as an immersive experience space.

The visual connection between passengers and the outside world is created by the curved display module, which replaces a conventional dashboard. The outside world around the vehicle and the surrounding area is shown in real-time 3D graphics and at the same time shows what is happening on the road in front of the vehicle. Combined with energy lines, these detailed real-time images bring the interior to life and allow passengers to discover and interact with the environment in a natural way with different views of the outside world. Three wonders of nature – the Huangshan Mountains of China, the 115-metre-high Hyperion Tree found in the United States and the pink salt Lake Hillier from Australia – can be explored in detail. Passengers become aware of various forces of nature that are not normally visible to the human eye, such as magnetic fields, bioenergy or ultraviolet light.

The curved display module in the Mercedes-Benz VISION AVTR – inspired by AVATAR
[downloaded from https://www.mercedes-benz.com/en/innovation/concept-cars/vision-avtr/]

Bionic formal language.

When the boundaries between vehicle and living beings are lifted, Mercedes-Benz combines luxury and sustainability and works to make the vehicles as resource-saving as possible. With the VISION AVTR, the brand is now showing how a vehicle can blend harmoniously into its environment and communicate with it. In the ecosystem of the future, the ultimate luxury is the fusion of human and nature with the help of technology. The VISION AVTR is thus an example of sustainable luxury in the field of design. As soon as you get in, the car becomes an extension of your own body and a tool to discover the environment much as in the film humans can use avatars to extend and expand their abilities.

A few thoughts

The movie, Avatar, was released in 2009 and recently rereleased in movie houses in anticipation of the sequel, Avatar: The Way of Water to be released in December 2022 (Avatar [2009 film] Wikipedia entry). The timing, Avatar and AVTR, is interesting, oui?

Moving on to ‘organic’, which means carbon-based in this instance and, specifically, graphene. Commercialization of graphene is likely top-of-mind for the folks (European Commission) who bet 1B Euros of European Union money in 2013 to fund the Graphene Flagship project. This battery from German company Mercedes Benz must be exciting news for the funders and for people who want to lessen dependency on rare earths. Your battery can be composted safely (according to the advertising).

The other piece of good news is the neuromorphic computing,

“The neuro-inspired approach of the VISION AVTR, including so-called neuromorphic hardware, promises to minimise the energy requirements of sensors, chips and other components to a few watts.”

On the other hand and keeping in mind the image above (a hand with what looks like an embedded object), it seems a little disconcerting to merge with one’s car, “… passengers connect with each other, with the vehicle and the surrounding area …” which becomes even more disconcerting when this appears in the advertising,

… VISION AVTR thus establishes a biometric connection with the driver … The digital neurons flow from the interior into the exterior and visualise the flow of energy and information. For example, when driving, the neurons flow over the outside of the vehicle.

Are these ‘digital neurons’ flowing around the car like a water current? Also, the car is visualizing? Hmm …

I did manage to find a bit more information about neuromorphic computing although it’s for a different Mercedes Benz concept car (there’s no mention of flowing digital neurons) in a January 18, 2022 article by Sally Ward-Foxton for EE Times (Note: A link has been removed),

The Mercedes Vision EQXX concept car, promoted as “the most efficient Mercedes-Benz ever built,” incorporates neuromorphic computing to help reduce power consumption and extend vehicle range. To that end, BrainChip’s Akida neuromorphic chip enables in-cabin keyword spotting as a more power-efficient way than existing AI-based keyword detection systems.

“Working with California-based artificial intelligence experts BrainChip, Mercedes-Benz engineers developed systems based on BrainChip’s Akida hardware and software,” Mercedes noted in a statement describing the Vision EQXX. “The example in the Vision EQXX is the “Hey Mercedes” hot-word detection. Structured along neuromorphic principles, it is five to ten times more efficient than conventional voice control,” the carmaker claimed.

That represents validation of BrainChip’s technology by one of its early-access customers. BrainChip’s Akida chip accelerates spiking neural networks (SNNs) and convolutional neural networks (via conversion to SNNs). It is not limited to a particular application, and also run [sic] person detection, voice or face recognition SNNs, for example, that Mercedes could also explore.

This January 6, 2022 article by Nitin Dahad for embedded.com describes what were then the latest software innovations in the automotive industry and segues into a description of spiking neural networks (Note: A link has been removed),

The electric vehicle (EV) has clearly become a key topic of discussion, with EV range probably the thing most consumers are worried about. To address the range concern, two stories emerged this week – one was Mercedes-Benz’ achieving a 1,000 km range with its VISION EQXX prototype, albeit as a concept car, and General Motors announcing during a CES [Consumer Electronics Show] 2022 keynote its new Chevrolet Silverado EV with 400-mile (640 km) range.

In briefings with companies, I often hear them talk about the software-defined car and the extensive use of software simulation (or we could also call it a digital twin). In the case of both the VISION EQXX and the Silverado EV, software plays a key part. I also spoke to BlackBerry about its IVY platform and how it is laying the groundwork for software-defined vehicles.

Neuromorphic computing for infotainment

This efficiency is not just being applied to enhancing range though. Mercedes-Benz also points out that its infotainment system uses neuromorphic computing to enable the car to “take its cue from the way nature thinks”.

Mercedes-Benz VISION EQXX

The hardware runs spiking neural networks, in which data is coded in discrete spikes and energy only consumed when a spike occurs, reducing energy consumption by orders of magnitude. In order to deliver this, the carmaker worked with BrainChip, developing the systems based on its Akida processor. In the VISION EQXX, this technology enables the “Hey Mercedes” hot-word detection five to ten times more efficiently than conventional voice control. Mercedes-Benz said although neuromorphic computing is still in its infancy, systems like these will be available on the market in just a few years. When applied on scale throughout a vehicle, they have the potential to radically reduce the energy needed to run the latest AI technologies.
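The energy argument is easiest to see in a leaky integrate-and-fire neuron, the basic unit of spiking networks: nothing ‘happens’ between threshold crossings, so cost scales with spike events rather than with a continuous sample stream. A minimal sketch (this is not BrainChip’s Akida implementation; the parameters are invented),

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron: energy is "spent" only
# on discrete spike events, not on the continuous input signal, which
# is the efficiency argument behind spiking neural networks.
rng = np.random.default_rng(1)
dt, tau, v_thresh = 1e-3, 20e-3, 1.0
v, spikes = 0.0, 0
for t in range(1000):                  # 1 s of simulated input
    i_in = rng.random() * 2.5          # noisy input current (arbitrary units)
    v += dt / tau * (-v + i_in)        # leaky integration of the input
    if v >= v_thresh:                  # threshold crossing -> spike
        spikes += 1                    # energy is consumed here only
        v = 0.0                        # reset after the spike
print(f"{spikes} spikes in 1 s; cost scales with events, not samples")
```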

For anyone curious about BrainChip, you can find out more here.

It took a little longer than I hoped but I’m glad that I found out a little more about neuromorphic computing and one application in the automotive industry.

Studying quantum conductance in memristive devices

A September 27, 2022 news item on phys.org provides an introduction to the later discussion of quantum effects in memristors,

At the nanoscale, the laws of classical physics suddenly become inadequate to explain the behavior of matter. It is precisely at this juncture that quantum theory comes into play, effectively describing the physical phenomena characteristic of the atomic and subatomic world. Thanks to the different behavior of matter on these length and energy scales, it is possible to develop new materials, devices and technologies based on quantum effects, which could yield a real quantum revolution that promises to innovate areas such as cryptography, telecommunications and computation.

The physics of very small objects, already at the basis of many technologies that we use today, is intrinsically linked to the world of nanotechnologies, the branch of applied science dealing with the control of matter at the nanometer scale (a nanometer is one billionth of a meter). This control of matter at the nanoscale is at the basis of the development of new electronic devices.

A September 27, 2022 Istituto Nazionale di Ricerca Metrologica (INRIM) press release (summary, PDF, and also on EurekAlert), which originated the news item, provides more information about the research,

Among these, memristors are considered promising devices for the realization of new computational architectures emulating functions of our brain, allowing the creation of increasingly efficient computation systems suitable for the development of the entire artificial intelligence sector, as recently shown by INRiM researchers in collaboration with several international universities and research institutes [1,2].

In this context, the EMPIR MEMQuD project, coordinated by INRiM, aims to study quantum effects in such devices, in which the electronic conduction properties can be manipulated to allow the observation of quantized conductance phenomena at room temperature. The review work “Quantum Conductance in Memristive Devices: Fundamentals, Developments, and Applications,” recently published in the international journal Advanced Materials (https://doi.org/10.1002/adma.202201248), surveys the fundamentals and recent developments and analyzes how these effects can be used for a wide range of applications, from metrology to next-generation memories and artificial intelligence.

Here’s a link to and a citation for the paper,

Quantum Conductance in Memristive Devices: Fundamentals, Developments, and Applications by Gianluca Milano, Masakazu Aono, Luca Boarino, Umberto Celano, Tsuyoshi Hasegawa, Michael Kozicki, Sayani Majumdar, Mariela Menghini, Enrique Miranda, Carlo Ricciardi, Stefan Tappertzhofen, Kazuya Terabe, Ilia Valov. Advanced Materials, Volume 34, Issue 32, August 11, 2022, 2201248 DOI: https://doi.org/10.1002/adma.202201248 First published: 11 April 2022

This paper is open access.

You can find the EMPIR (European Metrology Programme for Innovation and Research) MEMQuD (quantum effects in memristive devices) project here, from the homepage,

Memristive devices are electrical resistance switches that couple ionics (i.e. dynamics of ions) with electronics. These devices offer a promising platform to observe quantum effects in air, at room temperature, and without an applied magnetic field. For this reason, they can be traced to fundamental physics constants fixed in the revised International System of Units (SI) for the realization of a quantum-based standard of resistance. However, as an emerging technology, memristive devices lack standardization and insights into the fundamental physics underlying their working principles.

The overall aim of the project is to investigate and exploit quantized conductance effects in memristive devices that operate reliably, in air and at room temperature. In particular, the project will focus on the development of memristive model systems and nanometrological characterization techniques at the nanoscale level of memristive devices, in order to better understand and control the quantized effects in memristive devices. Such an outcome would enable not only the development of neuromorphic systems but also the realization of a standard of resistance implementable on-chip for self-calibrating systems with zero-chain traceability in the spirit of the revised SI.
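For the curious, the fixed constants enter through the conductance quantum G0 = 2e²/h: atomic-scale filaments in these devices conduct in integer multiples of G0, and both e and h have exact values in the revised SI. A quick calculation,

```python
# Conductance quantization: atomic-scale filaments conduct in integer
# multiples of the conductance quantum G0 = 2e^2/h, which ties the
# resistance levels to constants fixed exactly in the revised SI.
e = 1.602176634e-19        # elementary charge in coulombs (exact)
h = 6.62607015e-34         # Planck constant in joule-seconds (exact)
G0 = 2 * e**2 / h          # about 77.5 microsiemens

for n in range(1, 4):
    print(f"n={n}: G = {n * G0 * 1e6:.1f} uS, "
          f"R = {1 / (n * G0):.0f} ohms")
```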

I’m starting to see mention of ‘neuromorphic computing’ in advertisements (specifically a Mercedes Benz car). I will have more about these first mentions of neuromorphic computing in consumer products in a future posting.

Skin-like computing device analyzes health data with brain-mimicking artificial intelligence (a neuromorphic chip)

The wearable neuromorphic chip, made of stretchy semiconductors, can implement artificial intelligence (AI) to process massive amounts of health information in real time. Above, Asst. Prof. Sihong Wang shows a single neuromorphic device with three electrodes. (Photo by John Zich)

Does everything have to be ‘brainy’? Read on for the latest on ‘brainy’ devices.

An August 4, 2022 University of Chicago news release (also on EurekAlert) describes work on a stretchable neuromorphic chip, Note: Links have been removed,

It’s a brainy Band-Aid, a smart watch without the watch, and a leap forward for wearable health technologies. Researchers at the University of Chicago’s Pritzker School of Molecular Engineering (PME) have developed a flexible, stretchable computing chip that processes information by mimicking the human brain. The device, described in the journal Matter, aims to change the way health data is processed.

“With this work we’ve bridged wearable technology with artificial intelligence and machine learning to create a powerful device which can analyze health data right on our own bodies,” said Sihong Wang, a materials scientist and Assistant Professor of Molecular Engineering.

Today, getting an in-depth profile about your health requires a visit to a hospital or clinic. In the future, Wang said, people’s health could be tracked continuously by wearable electronics that can detect disease even before symptoms appear. Unobtrusive, wearable computing devices are one step toward making this vision a reality. 

A Data Deluge
The future of healthcare that Wang—and many others—envision includes wearable biosensors to track complex indicators of health including levels of oxygen, sugar, metabolites and immune molecules in people’s blood. One of the keys to making these sensors feasible is their ability to conform to the skin. As such skin-like wearable biosensors emerge and begin collecting more and more information in real-time, the analysis becomes exponentially more complex. A single piece of data must be put into the broader perspective of a patient’s history and other health parameters.

Today’s smart phones are not capable of the kind of complex analysis required to learn a patient’s baseline health measurements and pick out important signals of disease. However, cutting-edge artificial intelligence platforms that integrate machine learning to identify patterns in extremely complex datasets can do a better job. But sending information from a device to a centralized AI location is not ideal.

“Sending health data wirelessly is slow and presents a number of privacy concerns,” he said. “It is also incredibly energy inefficient; the more data we start collecting, the more energy these transmissions will start using.”

Skin and Brains
Wang’s team set out to design a chip that could collect data from multiple biosensors and draw conclusions about a person’s health using cutting-edge machine learning approaches. Importantly, they wanted it to be wearable on the body and integrate seamlessly with skin.

“With a smart watch, there’s always a gap,” said Wang. “We wanted something that can achieve very intimate contact and accommodate the movement of skin.”

Wang and his colleagues turned to polymers, which can be used to build semiconductors and electrochemical transistors but also have the ability to stretch and bend. They assembled polymers into a device that allowed the artificial-intelligence-based analysis of health data. Rather than work like a typical computer, the chip—called a neuromorphic computing chip—functions more like a human brain, able to both store and analyze data in an integrated way.

Testing the Technology
To test the utility of their new device, Wang’s group used it to analyze electrocardiogram (ECG) data representing the electrical activity of the human heart. They trained the device to classify ECGs into five categories—healthy or four types of abnormal signals. Then, they tested it on new ECGs. Whether or not the chip was stretched or bent, they showed, it could accurately classify the heartbeats.
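As a sketch of the evaluation idea only (synthetic stand-in data; the real device learns in analog neuromorphic hardware, not in numpy), five-way beat classification can be caricatured as learning one template per category and labelling each new beat by its nearest template,

```python
import numpy as np

# Nearest-template five-class 'ECG' classifier on synthetic data:
# one healthy class plus four abnormal ones, as in the study's setup.
rng = np.random.default_rng(2)
n_classes, beat_len = 5, 50
prototypes = rng.normal(0, 1, (n_classes, beat_len))   # true beat shapes

def make_beats(k):
    labels = rng.integers(0, n_classes, k)
    return prototypes[labels] + rng.normal(0, 0.3, (k, beat_len)), labels

train_x, train_y = make_beats(500)
centroids = np.stack([train_x[train_y == c].mean(axis=0)
                      for c in range(n_classes)])      # learned templates

test_x, test_y = make_beats(100)
dists = ((test_x[:, None, :] - centroids[None]) ** 2).sum(axis=-1)
pred = dists.argmin(axis=1)                            # nearest template
print("accuracy:", (pred == test_y).mean())
```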

More work is needed to test the power of the device in deducing patterns of health and disease. But eventually, it could be used either to send patients or clinicians alerts, or to automatically tweak medications.

“If you can get real-time information on blood pressure, for instance, this device could very intelligently make decisions about when to adjust the patient’s blood pressure medication levels,” said Wang. That kind of automatic feedback loop is already used by some implantable insulin pumps, he added.

He already is planning new iterations of the device to both expand the type of devices with which it can integrate and the types of machine learning algorithms it uses.

“Integration of artificial intelligence with wearable electronics is becoming a very active landscape,” said Wang. “This is not finished research, it’s just a starting point.”

Here’s a link to and a citation for the paper,

Intrinsically stretchable neuromorphic devices for on-body processing of health data with artificial intelligence by Shilei Dai, Yahao Dai, Zixuan Zhao, Jie Xu, Jia Huang, Sihong Wang. Matter DOI: https://doi.org/10.1016/j.matt.2022.07.016 Published: August 04, 2022

This paper is behind a paywall.

New chip for neuromorphic computing runs at a fraction of the energy of today’s systems

An August 17, 2022 news item on Nanowerk announces big (so to speak) claims from a team researching neuromorphic (brainlike) computer chips,

An international team of researchers has designed and built a chip that runs computations directly in memory and can run a wide variety of artificial intelligence (AI) applications–all at a fraction of the energy consumed by computing platforms for general-purpose AI computing.

The NeuRRAM neuromorphic chip brings AI a step closer to running on a broad range of edge devices, disconnected from the cloud, where they can perform sophisticated cognitive tasks anywhere and anytime without relying on a network connection to a centralized server. Applications abound in every corner of the world and every facet of our lives, and range from smart watches to VR headsets, smart earbuds, smart sensors in factories and rovers for space exploration.

The NeuRRAM chip is not only twice as energy efficient as the state-of-the-art “compute-in-memory” chips, an innovative class of hybrid chips that runs computations in memory, it also delivers results that are just as accurate as conventional digital chips. Conventional AI platforms are a lot bulkier and typically are constrained to using large data servers operating in the cloud.

In addition, the NeuRRAM chip is highly versatile and supports many different neural network models and architectures. As a result, the chip can be used for many different applications, including image recognition and reconstruction as well as voice recognition.

…

An August 17, 2022 University of California at San Diego (UCSD) news release (also on EurekAlert), which originated the news item, provides more detail than usually found in a news release,

“The conventional wisdom is that the higher efficiency of compute-in-memory is at the cost of versatility, but our NeuRRAM chip obtains efficiency while not sacrificing versatility,” said Weier Wan, the paper’s first corresponding author and a recent Ph.D. graduate of Stanford University who worked on the chip while at UC San Diego, where he was co-advised by Gert Cauwenberghs in the Department of Bioengineering. 

The research team, co-led by bioengineers at the University of California San Diego, presents their results in the Aug. 17 [2022] issue of Nature.

Currently, AI computing is both power hungry and computationally expensive. Most AI applications on edge devices involve moving data from the devices to the cloud, where the AI processes and analyzes it. Then the results are moved back to the device. That’s because most edge devices are battery-powered and as a result only have a limited amount of power that can be dedicated to computing. 

By reducing power consumption needed for AI inference at the edge, this NeuRRAM chip could lead to more robust, smarter and accessible edge devices and smarter manufacturing. It could also lead to better data privacy as the transfer of data from devices to the cloud comes with increased security risks. 

On AI chips, moving data from memory to computing units is one major bottleneck. 

“It’s the equivalent of doing an eight-hour commute for a two-hour work day,” Wan said. 

To solve this data transfer issue, researchers used what is known as resistive random-access memory, a type of non-volatile memory that allows for computation directly within memory rather than in separate computing units. RRAM and other emerging memory technologies used as synapse arrays for neuromorphic computing were pioneered in the lab of Philip Wong, Wan’s advisor at Stanford and a main contributor to this work. Computation with RRAM chips is not necessarily new, but generally it leads to a decrease in the accuracy of the computations performed on the chip and a lack of flexibility in the chip’s architecture. 

“Compute-in-memory has been common practice in neuromorphic engineering since it was introduced more than 30 years ago,” Cauwenberghs said.  “What is new with NeuRRAM is that the extreme efficiency now goes together with great flexibility for diverse AI applications with almost no loss in accuracy over standard digital general-purpose compute platforms.”

A carefully crafted methodology was key to the work with multiple levels of “co-optimization” across the abstraction layers of hardware and software, from the design of the chip to its configuration to run various AI tasks. In addition, the team made sure to account for various constraints that span from memory device physics to circuits and network architecture. 

“This chip now provides us with a platform to address these problems across the stack from devices and circuits to algorithms,” said Siddharth Joshi, an assistant professor of computer science and engineering at the University of Notre Dame, who started working on the project as a Ph.D. student and postdoctoral researcher in Cauwenberghs’ lab at UC San Diego.

Chip performance

Researchers measured the chip’s energy efficiency using a measure known as the energy-delay product, or EDP. EDP combines both the amount of energy consumed for every operation and the amount of time it takes to complete the operation. By this measure, the NeuRRAM chip achieves 1.6 to 2.3 times lower EDP (lower is better) and 7 to 13 times higher computational density than state-of-the-art chips.
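EDP is simply the product of those two quantities, so the arithmetic is easy to check with invented placeholder numbers (these are not measurements from the paper),

```python
# Energy-delay product (EDP) = energy per operation x time per operation;
# lower is better. The figures below are invented placeholders chosen
# only to show the arithmetic, not values reported for NeuRRAM.
def edp(energy_pj, delay_ns):
    return energy_pj * delay_ns                  # units: pJ * ns

baseline = edp(energy_pj=10.0, delay_ns=5.0)     # hypothetical digital chip
cim_chip = edp(energy_pj=4.0, delay_ns=5.5)      # hypothetical CIM chip
print(f"EDP is {baseline / cim_chip:.1f}x lower for the CIM chip")
```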

Researchers ran various AI tasks on the chip. It achieved 99% accuracy on a handwritten digit recognition task; 85.7% on an image classification task; and 84.7% on a Google speech command recognition task. In addition, the chip also achieved a 70% reduction in image-reconstruction error on an image-recovery task. These results are comparable to existing digital chips that perform computation under the same bit-precision, but with drastic savings in energy. 

Researchers point out that one key contribution of the paper is that all the results featured are obtained directly on the hardware. In many previous works of compute-in-memory chips, AI benchmark results were often obtained partially by software simulation. 

Next steps include improving architectures and circuits and scaling the design to more advanced technology nodes. Researchers also plan to tackle other applications, such as spiking neural networks.

“We can do better at the device level, improve circuit design to implement additional features and address diverse applications with our dynamic NeuRRAM platform,” said Rajkumar Kubendran, an assistant professor at the University of Pittsburgh, who started work on the project while a Ph.D. student in Cauwenberghs’ research group at UC San Diego.

In addition, Wan is a founding member of a startup that works on productizing the compute-in-memory technology. “As a researcher and  an engineer, my ambition is to bring research innovations from labs into practical use,” Wan said. 

New architecture 

The key to NeuRRAM’s energy efficiency is an innovative method to sense output in memory. Conventional approaches use voltage as input and measure current as the result. But this leads to the need for more complex and more power-hungry circuits. In NeuRRAM, the team engineered a neuron circuit that senses voltage and performs analog-to-digital conversion in an energy-efficient manner. This voltage-mode sensing can activate all the rows and all the columns of an RRAM array in a single computing cycle, allowing higher parallelism.
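Whatever the sensing scheme, the operation an RRAM crossbar performs in that single cycle is an analog matrix-vector multiply via Ohm’s and Kirchhoff’s laws: each cell’s conductance is a stored weight, rows are driven with input voltages, and the column currents sum the products. An idealized sketch (no device noise, no quantization, and none of NeuRRAM’s voltage-mode circuit details),

```python
import numpy as np

# Idealized compute-in-memory: Ohm's law (I = G*V) does the multiplies
# and Kirchhoff's current law does the sums, so a whole matrix-vector
# product happens in one analog step. Array sizes are arbitrary.
rng = np.random.default_rng(3)
G = rng.uniform(1e-6, 1e-4, (64, 128))   # cell conductances = weights (S)
v_in = rng.uniform(0, 0.3, 128)          # input voltages on 128 rows (V)
i_out = G @ v_in                         # 64 column currents, one "cycle"
print(i_out.shape, "output currents from a single analog operation")
```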

In the NeuRRAM architecture, CMOS neuron circuits are physically interleaved with RRAM weights. It differs from conventional designs where CMOS circuits are typically on the periphery of the RRAM weights. The neuron’s connections with the RRAM array can be configured to serve as either input or output of the neuron. This allows neural network inference in various data flow directions without incurring overheads in area or power consumption. This in turn makes the architecture easier to reconfigure.

To make sure that accuracy of the AI computations can be preserved across various neural network architectures, researchers developed a set of hardware algorithm co-optimization techniques. The techniques were verified on various neural networks including convolutional neural networks, long short-term memory, and restricted Boltzmann machines. 

As a neuromorphic AI chip, NeuRRAM performs parallel distributed processing across 48 neurosynaptic cores. To simultaneously achieve high versatility and high efficiency, NeuRRAM supports data-parallelism by mapping a layer in the neural network model onto multiple cores for parallel inference on multiple data. Also, NeuRRAM offers model-parallelism by mapping different layers of a model onto different cores and performing inference in a pipelined fashion.

An international research team

The work is the result of an international team of researchers. 

The UC San Diego team designed the CMOS circuits that implement the neural functions interfacing with the RRAM arrays to support the synaptic functions in the chip’s architecture, for high efficiency and versatility. Wan, working closely with the entire team, implemented the design; characterized the chip; trained the AI models; and executed the experiments. Wan also developed a software toolchain that maps AI applications onto the chip. 

The RRAM synapse array and its operating conditions were extensively characterized and optimized at Stanford University. 

The RRAM array was fabricated and integrated onto CMOS at Tsinghua University. 

The Team at Notre Dame contributed to both the design and architecture of the chip and the subsequent machine learning model design and training.

The research started as part of the National Science Foundation funded Expeditions in Computing project on Visual Cortex on Silicon at Penn State University, with continued funding support from the Office of Naval Research Science of AI program, the Semiconductor Research Corporation and DARPA [{US} Defense Advanced Research Projects Agency] JUMP program, and Western Digital Corporation. 

Here’s a link to and a citation for the paper,

A compute-in-memory chip based on resistive random-access memory by Weier Wan, Rajkumar Kubendran, Clemens Schaefer, Sukru Burc Eryilmaz, Wenqiang Zhang, Dabin Wu, Stephen Deiss, Priyanka Raina, He Qian, Bin Gao, Siddharth Joshi, Huaqiang Wu, H.-S. Philip Wong & Gert Cauwenberghs. Nature volume 608, pages 504–512 (2022) DOI: https://doi.org/10.1038/s41586-022-04992-8 Published: 17 August 2022 Issue Date: 18 August 2022

This paper is open access.

Synaptic transistors for brainlike computers based on (more environmentally friendly) graphene

An August 9, 2022 news item on ScienceDaily describes research investigating materials other than silicon for neuromorphic (brainlike) computing purposes,

Computers that think more like human brains are inching closer to mainstream adoption. But many unanswered questions remain. Among the most pressing: what types of materials can serve as the best building blocks to unlock the potential of this new style of computing?

For most traditional computing devices, silicon remains the gold standard. However, there is a movement to use more flexible, efficient and environmentally friendly materials for these brain-like devices.

In a new paper, researchers from The University of Texas at Austin developed synaptic transistors for brain-like computers using the thin, flexible material graphene. These transistors are similar to synapses in the brain, which connect neurons to each other.

An August 8, 2022 University of Texas at Austin news release (also on EurekAlert but published August 9, 2022), which originated the news item, provides more detail about the research,

“Computers that think like brains can do so much more than today’s devices,” said Jean Anne Incorvia, an assistant professor in the Cockrell School of Engineering’s Department of Electrical and Computer Engineering and the lead author on the paper published today in Nature Communications. “And by mimicking synapses, we can teach these devices to learn on the fly, without requiring huge training methods that take up so much power.”

The Research: A combination of graphene and Nafion, a polymer membrane material, makes up the backbone of the synaptic transistor. Together, these materials demonstrate key synaptic-like behaviors — most importantly, the ability for the pathways to strengthen over time as they are used more often, a type of neural muscle memory. In computing, this means that devices will be able to get better at tasks like recognizing and interpreting images over time and do it faster.
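That ‘neural muscle memory’ can be caricatured as a weight that is nudged upward each time its pathway is used, with a weak relaxation back toward baseline, so frequently exercised routes end up stronger. A toy model (illustrative constants, not measurements from the graphene/Nafion device),

```python
# Toy synaptic potentiation: each use strengthens the pathway with a
# saturating update, while a mild decay pulls it back toward baseline.
# Constants are illustrative, not from the graphene/Nafion transistor.
def stimulate(weight, pulses, gain=0.1, decay=0.02):
    for _ in range(pulses):
        weight += gain * (1.0 - weight)   # use strengthens, saturating at 1
        weight -= decay * weight          # mild relaxation between uses
    return weight

w = 0.1
for session in range(1, 4):
    w = stimulate(w, pulses=10)
    print(f"after session {session}: weight = {w:.2f}")
```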

Another important finding is that these transistors are biocompatible, which means they can interact with living cells and tissue. That is key for potential applications in medical devices that come into contact with the human body. Most materials used for these early brain-like devices are toxic, so they would not be able to contact living cells in any way.

Why It Matters: With new high-tech concepts like self-driving cars, drones and robots, we are reaching the limits of what silicon chips can efficiently do in terms of data processing and storage. For these next-generation technologies, a new computing paradigm is needed. Neuromorphic devices mimic processing capabilities of the brain, a powerful computer for immersive tasks.

“Biocompatibility, flexibility, and softness of our artificial synapses is essential,” said Dmitry Kireev, a post-doctoral researcher who co-led the project. “In the future, we envision their direct integration with the human brain, paving the way for futuristic brain prosthesis.”

Will It Really Happen: Neuromorphic platforms are starting to become more common. Leading chipmakers such as Intel and Samsung have either produced neuromorphic chips already or are in the process of developing them. However, current chip materials place limitations on what neuromorphic devices can do, so academic researchers are working hard to find the perfect materials for soft brain-like computers.

“It’s still a big open space when it comes to materials; it hasn’t been narrowed down to the next big solution to try,” Incorvia said. “And it might not be narrowed down to just one solution, with different materials making more sense for different applications.”

The Team: The research was led by Incorvia and Deji Akinwande, professor in the Department of Electrical and Computer Engineering. The two have collaborated many times in the past, and Akinwande is a leading expert in graphene, using it in multiple research breakthroughs, most recently as part of a wearable electronic tattoo for blood pressure monitoring.

The idea for the project was conceived by Samuel Liu, a Ph.D. student and first author on the paper, in a class taught by Akinwande. Kireev then suggested the specific project. Harrison Jin, an undergraduate electrical and computer engineering student, measured the devices and analyzed data.

The team collaborated with T. Patrick Xiao and Christopher Bennett of Sandia National Laboratories, who ran neural network simulations and analyzed the resulting data.

Here’s a link to and a citation for the ‘graphene transistor’ paper,

Metaplastic and energy-efficient biocompatible graphene artificial synaptic transistors for enhanced accuracy neuromorphic computing by Dmitry Kireev, Samuel Liu, Harrison Jin, T. Patrick Xiao, Christopher H. Bennett, Deji Akinwande & Jean Anne C. Incorvia. Nature Communications volume 13, Article number: 4386 (2022) DOI: https://doi.org/10.1038/s41467-022-32078-6 Published: 28 July 2022

This paper is open access.

Neuromorphic computing and liquid-light interaction

Simulation result of light affecting liquid geometry, which in turn affects reflection and transmission properties of the optical mode, thus constituting a two-way light–liquid interaction mechanism. The degree of deformation serves as an optical memory, allowing the system to store the power magnitude of the previous optical pulse and use fluid dynamics to affect the subsequent optical pulse at the same actuation region, thus constituting an architecture where memory is part of the computation process. Credit: Gao et al., doi 10.1117/1.AP.4.4.046005

This is a fascinating approach to neuromorphic (brainlike) computing and, given my recent post (August 29, 2022) about human cells being incorporated into computer chips, it’s part of my recent spate of posts about neuromorphic computing. From a July 25, 2022 news item on phys.org,

Sunlight sparkling on water evokes the rich phenomena of liquid-light interaction, spanning spatial and temporal scales. While the dynamics of liquids have fascinated researchers for decades, the rise of neuromorphic computing has sparked significant efforts to develop new, unconventional computational schemes based on recurrent neural networks, crucial to supporting a wide range of modern technological applications, such as pattern recognition and autonomous driving. As biological neurons also rely on a liquid environment, a convergence may be attained by bringing nanoscale nonlinear fluid dynamics to neuromorphic computing.

A July 25, 2022 SPIE (International Society for Optics and Photonics) press release (also on EurekAlert), which originated the news item, provides more detail,

Researchers from University of California San Diego recently proposed a novel paradigm where liquids, which usually do not strongly interact with light on a micro- or nanoscale, support significant nonlinear response to optical fields. As reported in Advanced Photonics, the researchers predict a substantial light–liquid interaction effect through a proposed nanoscale gold patch operating as an optical heater and generating thickness changes in a liquid film covering the waveguide.

The liquid film functions as an optical memory. Here’s how it works: Light in the waveguide affects the geometry of the liquid surface, while changes in the shape of the liquid surface affect the properties of the optical mode in the waveguide, thus constituting a mutual coupling between the optical mode and the liquid film. Importantly, as the liquid geometry changes, the properties of the optical mode undergo a nonlinear response; after the optical pulse stops, the magnitude of liquid film’s deformation indicates the power of the previous optical pulse.

Remarkably, unlike traditional computational approaches, the nonlinear response and the memory reside in the same spatial region, thus suggesting the realization of a compact (beyond von Neumann) architecture where memory and computational unit occupy the same space. The researchers demonstrate that the combination of memory and nonlinearity allows the possibility of “reservoir computing” capable of performing digital and analog tasks, such as nonlinear logic gates and handwritten image recognition.
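Reservoir computing, for anyone new to the term, trains only a simple linear readout on top of a fixed nonlinear dynamical system; here the light-driven liquid film plays the part of that system. A minimal echo-state sketch with a random recurrent network standing in for the fluid dynamics (my toy, not the authors’ multiphysics model), learning XOR of consecutive input bits, a task that needs both nonlinearity and memory,

```python
import numpy as np

# Minimal echo-state reservoir: a fixed random nonlinear system (here a
# random recurrent network; in the paper, the liquid film) transforms
# an input stream, and only a linear readout is trained.
rng = np.random.default_rng(4)
N = 100
W_res = rng.normal(0, 1, (N, N))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # echo-state scaling
W_in = rng.normal(0, 1, N)

u = rng.integers(0, 2, 2000).astype(float)   # input bit stream
y = (np.roll(u, 1) != u).astype(float)       # target: XOR with previous bit

x, states = np.zeros(N), []
for t in range(len(u)):
    x = np.tanh(W_res @ x + W_in * u[t])     # reservoir update (fixed)
    states.append(x.copy())
X = np.array(states)

# Train the linear readout on the middle of the run (skip the washout):
W_out, *_ = np.linalg.lstsq(X[100:1500], y[100:1500], rcond=None)
pred = (X[1500:] @ W_out) > 0.5
print("test accuracy:", (pred == y[1500:].astype(bool)).mean())
```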

Their model also exploits another significant liquid feature: nonlocality. This enables them to predict computation enhancement that is simply not possible in solid state material platforms with limited nonlocal spatial scale. Despite nonlocality, the model does not quite achieve the levels of modern solid-state optics-based reservoir computing systems, yet the work nonetheless presents a clear roadmap for future experimental works aiming to validate the predicted effects and explore intricate coupling mechanisms of various physical processes in a liquid environment for computation.

Using multiphysics simulations to investigate coupling between light, fluid dynamics, heat transport, and surface tension effects, the researchers predict a family of novel nonlinear and nonlocal optical effects. They go a step further by indicating how these can be used to realize versatile, nonconventional computational platforms. Taking advantage of a mature silicon photonics platform, they suggest improvements to state-of-the-art liquid-assisted computation platforms of around five orders of magnitude in space and at least two orders of magnitude in speed.

Here’s a link to and a citation for the paper,

Thin liquid film as an optical nonlinear-nonlocal medium and memory element in integrated optofluidic reservoir computer by Chengkuan Gao, Prabhav Gaur, Shimon Rubin, Yeshaiahu Fainman. Advanced Photonics, 4(4), 046005 (2022). https://doi.org/10.1117/1.AP.4.4.046005 Published: 1 July 2022

This paper is open access.