Category Archives: neuromorphic engineering

New approach to brain-inspired (neuromorphic) computing: measuring information transfer

An April 8, 2024 news item on Nanowerk announces a new approach to neuromorphic computing that involves measurement, Note: Links have been removed,

The biological brain, especially the human brain, is a desirable computing system that consumes little energy and runs at high efficiency. To build a computing system just as good, many neuromorphic scientists focus on designing hardware components intended to mimic the elusive learning mechanism of the brain. Recently, a research team has approached the goal from a different angle, focusing on measuring information transfer instead.

Their method went through biological and simulation experiments and then proved effective in an electronic neuromorphic system. It was published in Intelligent Computing (“Information Transfer in Neuronal Circuits: From Biological Neurons to Neuromorphic Electronics”).

An April 8, 2024 Intelligent Computing news release on EurekAlert delves further into the topic,

Although electronic systems have not fully replicated the complex information transfer between synapses and neurons, the team has demonstrated that it is possible to transform biological circuits into electronic circuits while maintaining the amount of information transferred. “This represents a key step toward brain-inspired low-power artificial systems,” the authors note.

To evaluate the efficiency of information transfer, the team drew inspiration from information theory. They quantified the information conveyed by synapses onto single neurons using mutual information, a measure that captures the statistical relationship between input stimuli and neuron responses.
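For readers unfamiliar with the measure, mutual information between discrete stimuli and responses can be estimated from a joint histogram. The sketch below is a minimal illustration of the general technique, not the paper's analysis pipeline; the stimulus and response values are made up:

```python
import numpy as np

def mutual_information(stimuli, responses):
    """Estimate I(S;R) in bits from paired discrete observations."""
    s_vals, s_idx = np.unique(stimuli, return_inverse=True)
    r_vals, r_idx = np.unique(responses, return_inverse=True)
    joint = np.zeros((len(s_vals), len(r_vals)))
    for i, j in zip(s_idx, r_idx):
        joint[i, j] += 1
    joint /= joint.sum()                       # joint probability p(s, r)
    ps = joint.sum(axis=1, keepdims=True)      # marginal p(s)
    pr = joint.sum(axis=0, keepdims=True)      # marginal p(r)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz])).sum())

# A perfectly reliable synapse: each of 4 stimuli evokes a distinct response,
# so the response carries log2(4) = 2 bits about the stimulus.
stim = [0, 1, 2, 3] * 100
resp = [10, 20, 30, 40] * 100
print(mutual_information(stim, resp))  # prints 2.0
```

If responses become noisier or less distinct, the estimated mutual information drops toward zero, which is what makes it a useful yardstick for synaptic information transfer.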

First, the team conducted experiments with biological neurons. They used brain slices from rats, recording and analyzing the biological circuits in cerebellar granule cells. Then they evaluated the information transmitted at the synapses from mossy fiber neurons to the cerebellar granule cells. The mossy fibers were periodically stimulated with electrical spikes to induce synaptic plasticity, a fundamental biological feature where the information transfer at the synapses is constantly strengthened or weakened with repeated neuronal activity.

The results show that the changes in mutual information values are largely consistent with the changes in biological information transfer induced by synaptic plasticity. The findings from simulation and electronic neuromorphic experiments mirrored the biological results.

Second, the team conducted experiments with simulated neurons. They applied a spiking neural network model, which was developed by the same research group. Spiking neural networks were inspired by the functioning of biological neurons and are considered a promising approach for achieving efficient neuromorphic computing.

In the model, four mossy fibers are connected to one cerebellar granule cell, and each connection is given a random weight, which affects the information transfer efficiency like synaptic plasticity does in biological circuits. In the experiments, the team applied eight stimulation patterns to all mossy fibers and recorded the responses to evaluate the information transfer in the artificial neural network.
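The simulated setup can be sketched as a toy threshold unit: four weighted inputs converging on one cell, probed with eight input patterns. This is a simplification of the group's spiking model; the weight range, threshold, and binary on/off patterns below are illustrative assumptions, not values from the paper:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
weights = rng.uniform(0.2, 1.0, size=4)  # one random synaptic weight per mossy fiber

def granule_cell_response(pattern, weights, threshold=1.0):
    """Threshold-unit sketch of a granule cell: fires (1) if the weighted
    mossy-fiber drive reaches threshold (the real model uses spike trains)."""
    return int(np.dot(weights, pattern) >= threshold)

# Eight binary stimulation patterns across the four mossy fibers
# (a stand-in for the paper's eight spike-train stimulation patterns).
patterns = [np.array(p) for p in itertools.product([0, 1], repeat=4)][:8]
responses = [granule_cell_response(p, weights) for p in patterns]
print(responses)
```

Because the weights are random, different draws yield different response sets, mimicking how synaptic weights shape information transfer efficiency in the biological circuit.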

Third, the team conducted experiments with electronic neurons. A setup similar to those in the biological and simulation experiments was used. A previously developed semiconductor device functioned as a neuron, and four specialized memristors functioned as synapses. The team applied 20 spike sequences to decrease resistance values, then applied another 20 to increase them. The changes in resistance values were investigated to assess the information transfer efficiency within the neuromorphic system.
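The potentiation/depression protocol can be pictured with a toy update rule: spikes of one polarity lower a memristor's resistance, spikes of the other raise it. The multiplicative step size and resistance bounds below are illustrative assumptions, not the device physics reported in the paper:

```python
def apply_spikes(resistance, n_spikes, polarity, step=0.05,
                 r_min=1e3, r_max=1e6):
    """Toy memristor update: each spike nudges the resistance
    multiplicatively, clipped to an assumed operating range.
    polarity=-1 lowers resistance (potentiation), +1 raises it (depression)."""
    for _ in range(n_spikes):
        resistance = min(max(resistance * (1 + polarity * step), r_min), r_max)
    return resistance

r0 = 1.0e5
r_potentiated = apply_spikes(r0, 20, polarity=-1)            # 20 spikes lower R
r_depressed = apply_spikes(r_potentiated, 20, polarity=+1)   # 20 spikes raise R
print(round(r_potentiated), round(r_depressed))
```

Tracking how such resistance trajectories change the input-output statistics is, in essence, how the team assessed information transfer in the electronic synapses.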

In addition to verifying the quantity of information transferred in biological, simulated and electronic neurons, the team also highlighted the importance of spike timing, which as they observed is closely related to information transfer. This observation could influence the development of neuromorphic computing, given that most devices are designed with spike-frequency-based algorithms.

Here’s a link to and a citation for the paper,

Information Transfer in Neuronal Circuits: From Biological Neurons to Neuromorphic Electronics by Daniela Gandolfi, Lorenzo Benatti, Tommaso Zanotti, Giulia M. Boiani, Albertino Bigiani, Francesco M. Puglisi, and Jonathan Mapelli. Intelligent Computing, 1 Feb 2024, Vol. 3, Article ID: 0059, DOI: 10.34133/icomputing.0059

This paper is open access.

Brain-inspired (neuromorphic) wireless system for gathering data from sensors the size of a grain of salt

This is what a sensor the size of a grain of salt looks like,

Caption: The sensor network is designed so the chips can be implanted into the body or integrated into wearable devices. Each submillimeter-sized silicon sensor mimics how neurons in the brain communicate through spikes of electrical activity. Credit: Nick Dentamaro/Brown University

A March 19, 2024 news item on Nanowerk announces this research from Brown University (Rhode Island, US), Note: A link has been removed,

Tiny chips may equal a big breakthrough for a team of scientists led by Brown University engineers.

Writing in Nature Electronics (“An asynchronous wireless network for capturing event-driven data from large populations of autonomous sensors”), the research team describes a novel approach for a wireless communication network that can efficiently transmit, receive and decode data from thousands of microelectronic chips that are each no larger than a grain of salt.

One of the potential applications is for brain (neural) implants,

Caption: Writing in Nature Electronics, the research team describes a novel approach for a wireless communication network that can efficiently transmit, receive and decode data from thousands of microelectronic chips that are each no larger than a grain of salt. Credit: Nick Dentamaro/Brown University

A March 19, 2024 Brown University news release (also on EurekAlert), which originated the news item, provides more detail about the research, Note: Links have been removed,

The sensor network is designed so the chips can be implanted into the body or integrated into wearable devices. Each submillimeter-sized silicon sensor mimics how neurons in the brain communicate through spikes of electrical activity. The sensors detect specific events as spikes and then transmit that data wirelessly in real time using radio waves, saving both energy and bandwidth.

“Our brain works in a very sparse way,” said Jihun Lee, a postdoctoral researcher at Brown and study lead author. “Neurons do not fire all the time. They compress data and fire sparsely so that they are very efficient. We are mimicking that structure here in our wireless telecommunication approach. The sensors would not be sending out data all the time — they’d just be sending relevant data as needed as short bursts of electrical spikes, and they would be able to do so independently of the other sensors and without coordinating with a central receiver. By doing this, we would manage to save a lot of energy and avoid flooding our central receiver hub with less meaningful data.”
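The "send only relevant data" idea Lee describes is a form of event-driven sampling. The sketch below shows one common variant, send-on-delta, where a sensor transmits only when its reading moves meaningfully from the last transmitted value; the threshold and signal are illustrative assumptions, not details from the Brown system:

```python
def event_driven_stream(samples, delta=1.0):
    """Send-on-delta sketch: emit (index, value) events only when the signal
    moves more than `delta` from the last transmitted value."""
    events, last = [], None
    for i, x in enumerate(samples):
        if last is None or abs(x - last) > delta:
            events.append((i, x))
            last = x
    return events

signal = [0.0, 0.1, 0.2, 3.0, 3.1, 3.0, 0.2, 0.1]
print(event_driven_stream(signal))  # 3 events instead of 8 samples
```

Each sensor can apply such a rule independently, which is what lets the chips transmit asynchronously without coordinating through a central receiver.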

This radiofrequency [sic] transmission scheme also makes the system scalable and tackles a common problem with current sensor communication networks: they all need to be perfectly synced to work well.

The researchers say the work marks a significant step forward in large-scale wireless sensor technology and may one day help shape how scientists collect and interpret information from these little silicon devices, especially since electronic sensors have become ubiquitous as a result of modern technology.

“We live in a world of sensors,” said Arto Nurmikko, a professor in Brown’s School of Engineering and the study’s senior author. “They are all over the place. They’re certainly in our automobiles, they are in so many places of work and increasingly getting into our homes. The most demanding environment for these sensors will always be inside the human body.”

That’s why the researchers believe the system can help lay the foundation for the next generation of implantable and wearable biomedical sensors. There is a growing need in medicine for microdevices that are efficient, unobtrusive and unnoticeable but that also operate as part of large ensembles to map physiological activity across an entire area of interest.

“This is a milestone in terms of actually developing this type of spike-based wireless microsensor,” Lee said. “If we continue to use conventional methods, we cannot collect the high channel data these applications will require in these kinds of next-generation systems.”

The events the sensors identify and transmit can be specific occurrences such as changes in the environment they are monitoring, including temperature fluctuations or the presence of certain substances.

The sensors are able to use as little energy as they do because external transceivers supply wireless power to the sensors as they transmit their data — meaning they just need to be within range of the energy waves sent out by the transceiver to get a charge. This ability to operate without needing to be plugged into a power source or battery makes them convenient and versatile for use in many different situations.

The team designed and simulated the complex electronics on a computer and has worked through several fabrication iterations to create the sensors. The work builds on previous research from Nurmikko’s lab at Brown that introduced a new kind of neural interface system called “neurograins.” This system used a coordinated network of tiny wireless sensors to record and stimulate brain activity.

“These chips are pretty sophisticated as miniature microelectronic devices, and it took us a while to get here,” said Nurmikko, who is also affiliated with Brown’s Carney Institute for Brain Science. “The amount of work and effort that is required in customizing the several different functions in manipulating the electronic nature of these sensors — that being basically squeezed to a fraction of a millimeter space of silicon — is not trivial.”

The researchers demonstrated the efficiency of their system as well as just how much it could potentially be scaled up. They tested the system using 78 sensors in the lab and found they were able to collect and send data with few errors, even when the sensors were transmitting at different times. Through simulations, they were able to show how to decode data collected from the brains of primates using about 8,000 hypothetically implanted sensors.

The researchers say next steps include optimizing the system for reduced power consumption and exploring broader applications beyond neurotechnology.

“The current work provides a methodology we can further build on,” Lee said.

Here’s a link to and a citation for the study,

An asynchronous wireless network for capturing event-driven data from large populations of autonomous sensors by Jihun Lee, Ah-Hyoung Lee, Vincent Leung, Farah Laiwalla, Miguel Angel Lopez-Gordo, Lawrence Larson & Arto Nurmikko. Nature Electronics, volume 7, pages 313–324 (2024), DOI: https://doi.org/10.1038/s41928-024-01134-y, Published: 19 March 2024, Issue Date: April 2024

This paper is behind a paywall.

Prior to this, 2021 seems to have been a banner year for Nurmikko’s lab. There’s this August 12, 2021 Brown University news release touting publication of a then new study in Nature Electronics and I have an April 2, 2021 post, “BrainGate demonstrates a high-bandwidth wireless brain-computer interface (BCI),” touting an earlier 2021 published study from the lab.

Hardware policies best way to manage AI safety?

Regulation of artificial intelligence (AI) has become very topical in the last couple of years. There was an AI safety summit in November 2023 at Bletchley Park in the UK (see my November 2, 2023 posting for more about that international meeting).

A very software approach?

This year (2024) has seen a rise in legislative and proposed legislative activity. I have some articles on a few of these activities. China was the first to enact regulations of any kind on AI according to Matt Sheehan’s February 27, 2024 paper for the Carnegie Endowment for International Peace,

In 2021 and 2022, China became the first country to implement detailed, binding regulations on some of the most common applications of artificial intelligence (AI). These rules formed the foundation of China’s emerging AI governance regime, an evolving policy architecture that will affect everything from frontier AI research to the functioning of the world’s second-largest economy, from large language models in Africa to autonomous vehicles in Europe.

The Chinese Communist Party (CCP) and the Chinese government started that process with the 2021 rules on recommendation algorithms, an omnipresent use of the technology that is often overlooked in international AI governance discourse. Those rules imposed new obligations on companies to intervene in content recommendations, granted new rights to users being recommended content, and offered protections to gig workers subject to algorithmic scheduling. The Chinese party-state quickly followed up with a new regulation on “deep synthesis,” the use of AI to generate synthetic media such as deepfakes. Those rules required AI providers to watermark AI-generated content and ensure that content does not violate people’s “likeness rights” or harm the “nation’s image.” Together, these two regulations also created and amended China’s algorithm registry, a regulatory tool that would evolve into a cornerstone of the country’s AI governance regime.

The UK has adopted a more generalized approach focused on encouraging innovation according to Valeria Gallo’s and Suchitra Nair’s February 21, 2024 article for Deloitte (a British professional services firm also considered one of the big four accounting firms worldwide),

At a glance

The UK Government has adopted a cross-sector and outcome-based framework for regulating AI, underpinned by five core principles. These are safety, security and robustness, appropriate transparency and explainability, fairness, accountability and governance, and contestability and redress.

Regulators will implement the framework in their sectors/domains by applying existing laws and issuing supplementary regulatory guidance. Selected regulators will publish their AI annual strategic plans by 30th April [2024], providing businesses with much-needed direction.

Voluntary safety and transparency measures for developers of highly capable AI models and systems will also supplement the framework and the activities of individual regulators.

The framework will not be codified into law for now, but the Government anticipates the need for targeted legislative interventions in the future. These interventions will address gaps in the current regulatory framework, particularly regarding the risks posed by complex General Purpose AI and the key players involved in its development.

Organisations must prepare for increased AI regulatory activity over the next year, including guidelines, information gathering, and enforcement. International firms will inevitably have to navigate regulatory divergence.

While most of the focus appears to be on the software (e.g., General Purpose AI), the UK framework does not preclude hardware.

The European Union (EU) is preparing to pass its own AI regulation act through the European Parliament in 2024 according to a December 19, 2023 “EU AI Act: first regulation on artificial intelligence” article update, Note: Links have been removed,

As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology. AI can create many benefits, such as better healthcare; safer and cleaner transport; more efficient manufacturing; and cheaper and more sustainable energy.

In April 2021, the European Commission proposed the first EU regulatory framework for AI. It says that AI systems that can be used in different applications are analysed and classified according to the risk they pose to users. The different risk levels will mean more or less regulation.

The agreed text is expected to be finally adopted in April 2024. It will be fully applicable 24 months after entry into force, but some parts will be applicable sooner:

*The ban of AI systems posing unacceptable risks will apply six months after the entry into force

*Codes of practice will apply nine months after entry into force

*Rules on general-purpose AI systems that need to comply with transparency requirements will apply 12 months after the entry into force

High-risk systems will have more time to comply with the requirements as the obligations concerning them will become applicable 36 months after the entry into force.

This EU initiative, like the UK framework, seems largely focused on AI software and according to the Wikipedia entry “Regulation of artificial intelligence,”

… The AI Act is expected to come into effect in late 2025 or early 2026.[109]

I do have a few postings about Canadian regulatory efforts, which also seem to be focused on software but don’t preclude hardware. While the January 20, 2024 posting is titled “Canada’s voluntary code of conduct relating to advanced generative AI (artificial intelligence) systems,” information about legislative efforts is also included although you might find my May 1, 2023 posting titled “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27)” offers more comprehensive information about Canada’s legislative progress or lack thereof.

The US is always to be considered in these matters and I have a November 2023 ‘briefing’ by Müge Fazlioglu on the International Association of Privacy Professionals (IAPP) website where she provides a quick overview of the international scene before diving deeper into US AI governance policy through the Barack Obama, Donald Trump, and Joe Biden administrations. There’s also this January 29, 2024 US White House “Fact Sheet: Biden-⁠Harris Administration Announces Key AI Actions Following President Biden’s Landmark Executive Order.”

What about AI and hardware?

A February 15, 2024 news item on ScienceDaily suggests that regulating hardware may be the most effective way of regulating AI,

Chips and datacentres — the ‘compute’ power driving the AI revolution — may be the most effective targets for risk-reducing AI policies as they have to be physically possessed, according to a new report.

A global registry tracking the flow of chips destined for AI supercomputers is one of the policy options highlighted by a major new report calling for regulation of “compute” — the hardware that underpins all AI — to help prevent artificial intelligence misuse and disasters.

Other technical proposals floated by the report include “compute caps” — built-in limits to the number of chips each AI chip can connect with — and distributing a “start switch” for AI training across multiple parties to allow for a digital veto of risky AI before it feeds on data.

The experts point out that powerful computing chips required to drive generative AI models are constructed via highly concentrated supply chains, dominated by just a handful of companies — making the hardware itself a strong intervention point for risk-reducing AI policies.

The report, published 14 February [2024], is authored by nineteen experts and co-led by three University of Cambridge institutes — the Leverhulme Centre for the Future of Intelligence (LCFI), the Centre for the Study of Existential Risk (CSER) and the Bennett Institute for Public Policy — along with OpenAI and the Centre for the Governance of AI.

A February 14, 2024 University of Cambridge press release by Fred Lewsey (also on EurekAlert), which originated the news item, provides more information about the ‘hardware approach to AI regulation’,

“Artificial intelligence has made startling progress in the last decade, much of which has been enabled by the sharp increase in computing power applied to training algorithms,” said Haydn Belfield, a co-lead author of the report from Cambridge’s LCFI. 

“Governments are rightly concerned about the potential consequences of AI, and looking at how to regulate the technology, but data and algorithms are intangible and difficult to control.

“AI supercomputers consist of tens of thousands of networked AI chips hosted in giant data centres often the size of several football fields, consuming dozens of megawatts of power,” said Belfield.

“Computing hardware is visible, quantifiable, and its physical nature means restrictions can be imposed in a way that might soon be nearly impossible with more virtual elements of AI.”

The computing power behind AI has grown exponentially since the “deep learning era” kicked off in earnest, with the amount of “compute” used to train the largest AI models doubling around every six months since 2010. The biggest AI models now use 350 million times more compute than thirteen years ago.
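A quick back-of-the-envelope check shows the press release's two figures are roughly consistent with each other: at one doubling every six months, a 350-million-fold increase corresponds to about 28 doublings, or about 14 years:

```python
import math

# Sanity check of the press release's figures: at one doubling every
# six months, how long does a 350-million-fold increase in compute take?
doublings = math.log2(350e6)   # ≈ 28.4 doublings
years = doublings * 0.5        # ≈ 14.2 years, close to "thirteen years ago"
print(round(doublings, 1), round(years, 1))
```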

Government efforts across the world over the past year – including the US Executive Order on AI, EU AI Act, China’s Generative AI Regulation, and the UK’s AI Safety Institute – have begun to focus on compute when considering AI governance.

Outside of China, the cloud compute market is dominated by three companies, termed “hyperscalers”: Amazon, Microsoft, and Google. “Monitoring the hardware would greatly help competition authorities in keeping in check the market power of the biggest tech companies, and so opening the space for more innovation and new entrants,” said co-author Prof Diane Coyle from Cambridge’s Bennett Institute. 

The report provides “sketches” of possible directions for compute governance, highlighting the analogy between AI training and uranium enrichment. “International regulation of nuclear supplies focuses on a vital input that has to go through a lengthy, difficult and expensive process,” said Belfield. “A focus on compute would allow AI regulation to do the same.”

Policy ideas are divided into three camps: increasing the global visibility of AI computing; allocating compute resources for the greatest benefit to society; enforcing restrictions on computing power.

For example, a regularly-audited international AI chip registry requiring chip producers, sellers, and resellers to report all transfers would provide precise information on the amount of compute possessed by nations and corporations at any one time.

The report even suggests a unique identifier could be added to each chip to prevent industrial espionage and “chip smuggling”.

“Governments already track many economic transactions, so it makes sense to increase monitoring of a commodity as rare and powerful as an advanced AI chip,” said Belfield. However, the team point out that such approaches could lead to a black market in untraceable “ghost chips”.

Other suggestions to increase visibility – and accountability – include reporting of large-scale AI training by cloud computing providers, and privacy-preserving “workload monitoring” to help prevent an arms race if massive compute investments are made without enough transparency.  

“Users of compute will engage in a mixture of beneficial, benign and harmful activities, and determined groups will find ways to circumvent restrictions,” said Belfield. “Regulators will need to create checks and balances that thwart malicious or misguided uses of AI computing.”

These might include physical limits on chip-to-chip networking, or cryptographic technology that allows for remote disabling of AI chips in extreme circumstances. One suggested approach would require the consent of multiple parties to unlock AI compute for particularly risky training runs, a mechanism familiar from nuclear weapons.
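The multiparty-consent idea reduces, at its simplest, to requiring every designated party to sign off before compute is unlocked. The sketch below is a hypothetical illustration of that logic only; in practice the report envisions cryptographic mechanisms, and the party names here are made up:

```python
def unlock_training_run(approvals, required_parties):
    """Multiparty-consent sketch: a risky training run may proceed only if
    every required party has approved (any one party can veto by abstaining).
    A toy stand-in for what would really be a cryptographic multi-signature."""
    return set(required_parties) <= set(approvals)

parties = ["chip_vendor", "cloud_provider", "regulator"]
print(unlock_training_run(["chip_vendor", "regulator"], parties))  # False
print(unlock_training_run(parties, parties))                       # True
```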

AI risk mitigation policies might see compute prioritised for research most likely to benefit society – from green energy to health and education. This could even take the form of major international AI “megaprojects” that tackle global issues by pooling compute resources.

The report’s authors are clear that their policy suggestions are “exploratory” rather than fully fledged proposals and that they all carry potential downsides, from risks of proprietary data leaks to negative economic impacts and the hampering of positive AI development.

They offer five considerations for regulating AI through compute, including the exclusion of small-scale and non-AI computing, regular revisiting of compute thresholds, and a focus on privacy preservation.

Added Belfield: “Trying to govern AI models as they are deployed could prove futile, like chasing shadows. Those seeking to establish AI regulation should look upstream to compute, the source of the power driving the AI revolution. If compute remains ungoverned it poses severe risks to society.”

You can find the report, “Computing Power and the Governance of Artificial Intelligence” on the University of Cambridge’s Centre for the Study of Existential Risk.

Authors include: Girish Sastry, Lennart Heim, Haydn Belfield, Markus Anderljung, Miles Brundage, Julian Hazell, Cullen O’Keefe, Gillian K. Hadfield, Richard Ngo, Konstantin Pilz, George Gor, Emma Bluemke, Sarah Shoker, Janet Egan, Robert F. Trager, Shahar Avin, Adrian Weller, Yoshua Bengio, and Diane Coyle.

The authors are associated with these companies/agencies: OpenAI, Centre for the Governance of AI (GovAI), Leverhulme Centre for the Future of Intelligence at the Uni. of Cambridge, Oxford Internet Institute, Institute for Law & AI, University of Toronto Vector Institute for AI, Georgetown University, ILINA Program, Harvard Kennedy School (of Government), *AI Governance Institute,* Uni. of Oxford, Centre for the Study of Existential Risk at Uni. of Cambridge, Uni. of Cambridge, Uni. of Montreal / Mila, Bennett Institute for Public Policy at the Uni. of Cambridge.

“The ILINA program is dedicated to providing an outstanding platform for Africans to learn and work on questions around maximizing wellbeing and responding to global catastrophic risks” according to the organization’s homepage.

*As for the AI Governance Institute, I believe that should be the Centre for the Governance of AI at Oxford University since the associated academic is Robert F. Trager from the University of Oxford.

As the months (years?) fly by, I guess we’ll find out if this hardware approach gains any traction where AI regulation is concerned.

Butterfly mating inspires neuromorphic (brainlike) computing

Michael Berger writes about a multisensory approach to neuromorphic computing inspired by butterflies in his February 2, 2024 Nanowerk Spotlight article, Note: Links have been removed,

Artificial intelligence systems have historically struggled to integrate and interpret information from multiple senses the way animals intuitively do. Humans and other species rely on combining sight, sound, touch, taste and smell to better understand their surroundings and make decisions. However, the field of neuromorphic computing has largely focused on processing data from individual senses separately.

This unisensory approach stems in part from the lack of miniaturized hardware able to co-locate different sensing modules and enable in-sensor and near-sensor processing. Recent efforts have targeted fusing visual and tactile data. However, visuochemical integration, which merges visual and chemical information to emulate complex sensory processing such as that seen in nature—for instance, butterflies integrating visual signals with chemical cues for mating decisions—remains relatively unexplored. Smell can potentially alter visual perception, yet current AI leans heavily on visual inputs alone, missing a key aspect of biological cognition.

Now, researchers at Penn State University have developed bio-inspired hardware that embraces heterogeneous integration of nanomaterials to allow the co-location of chemical and visual sensors along with computing elements. This facilitates efficient visuochemical information processing and decision-making, taking cues from the courtship behaviors of a species of tropical butterfly.

In the paper published in Advanced Materials (“A Butterfly-Inspired Multisensory Neuromorphic Platform for Integration of Visual and Chemical Cues”), the researchers describe creating their visuochemical integration platform inspired by Heliconius butterflies. During mating, female butterflies rely on integrating visual signals like wing color from males along with chemical pheromones to select partners. Specialized neurons combine these visual and chemical cues to enable informed mate choice.

To emulate this capability, the team constructed hardware encompassing monolayer molybdenum disulfide (MoS2) memtransistors serving as visual capture and processing components. Meanwhile, graphene chemitransistors functioned as artificial olfactory receptors. Together, these nanomaterials provided the sensing, memory and computing elements necessary for visuochemical integration in a compact architecture.

While mating butterflies served as inspiration, the developed technology has much wider relevance. It represents a significant step toward overcoming the reliance of artificial intelligence on single data modalities. Enabling integration of multiple senses can greatly improve situational understanding and decision-making for autonomous robots, vehicles, monitoring devices and other systems interacting with complex environments.

The work also helps progress neuromorphic computing approaches seeking to emulate biological brains for next-generation ML acceleration, edge deployment and reduced power consumption. In nature, cross-modal learning underpins animals’ adaptable behavior and intelligence emerging from brains organizing sensory inputs into unified percepts. This research provides a blueprint for hardware co-locating sensors and processors to more closely replicate such capabilities.

It’s fascinating to me how many times butterflies inspire science,

Butterfly-inspired visuo-chemical integration. a) A simplified abstraction of visual and chemical stimuli from male butterflies and visuo-chemical integration pathway in female butterflies. b) Butterfly-inspired neuromorphic hardware comprising a monolayer MoS2 memtransistor-based visual afferent neuron, graphene-based chemoreceptor neuron, and MoS2 memtransistor-based neuro-mimetic mating circuits. Courtesy: Wiley/Penn State University Researchers

Here’s a link to and a citation for the paper,

A Butterfly-Inspired Multisensory Neuromorphic Platform for Integration of Visual and Chemical Cues by Yikai Zheng, Subir Ghosh, and Saptarshi Das. Advanced Materials, DOI: https://doi.org/10.1002/adma.202307380, First published: 09 December 2023

This paper is open access.

Brainlike transistor and human intelligence

This brainlike transistor (not a memristor) is important because it functions at room temperature as opposed to others, which require cryogenic temperatures.

A December 20, 2023 Northwestern University news release (received via email; also on EurekAlert) fills in the details,

  • Researchers develop transistor that simultaneously processes and stores information like the human brain
  • Transistor goes beyond categorization tasks to perform associative learning
  • Transistor identified similar patterns, even when given imperfect input
  • Previous similar devices could only operate at cryogenic temperatures; new transistor operates at room temperature, making it more practical

EVANSTON, Ill. — Taking inspiration from the human brain, researchers have developed a new synaptic transistor capable of higher-level thinking.

Designed by researchers at Northwestern University, Boston College and the Massachusetts Institute of Technology (MIT), the device simultaneously processes and stores information just like the human brain. In new experiments, the researchers demonstrated that the transistor goes beyond simple machine-learning tasks to categorize data and is capable of performing associative learning.

Although previous studies have leveraged similar strategies to develop brain-like computing devices, those transistors cannot function outside cryogenic temperatures. The new device, by contrast, is stable at room temperatures. It also operates at fast speeds, consumes very little energy and retains stored information even when power is removed, making it ideal for real-world applications.

The study was published today (Dec. 20 [2023]) in the journal Nature.

“The brain has a fundamentally different architecture than a digital computer,” said Northwestern’s Mark C. Hersam, who co-led the research. “In a digital computer, data move back and forth between a microprocessor and memory, which consumes a lot of energy and creates a bottleneck when attempting to perform multiple tasks at the same time. On the other hand, in the brain, memory and information processing are co-located and fully integrated, resulting in orders of magnitude higher energy efficiency. Our synaptic transistor similarly achieves concurrent memory and information processing functionality to more faithfully mimic the brain.”

Hersam is the Walter P. Murphy Professor of Materials Science and Engineering at Northwestern’s McCormick School of Engineering. He also is chair of the department of materials science and engineering, director of the Materials Research Science and Engineering Center and member of the International Institute for Nanotechnology. Hersam co-led the research with Qiong Ma of Boston College and Pablo Jarillo-Herrero of MIT.

Recent advances in artificial intelligence (AI) have motivated researchers to develop computers that operate more like the human brain. Conventional, digital computing systems have separate processing and storage units, causing data-intensive tasks to devour large amounts of energy. With smart devices continuously collecting vast quantities of data, researchers are scrambling to uncover new ways to process it all without consuming an increasing amount of power. Currently, the memory resistor, or “memristor,” is the most well-developed technology that can perform combined processing and memory function. But memristors still suffer from energy costly switching.

“For several decades, the paradigm in electronics has been to build everything out of transistors and use the same silicon architecture,” Hersam said. “Significant progress has been made by simply packing more and more transistors into integrated circuits. You cannot deny the success of that strategy, but it comes at the cost of high power consumption, especially in the current era of big data where digital computing is on track to overwhelm the grid. We have to rethink computing hardware, especially for AI and machine-learning tasks.”

To rethink this paradigm, Hersam and his team explored new advances in the physics of moiré patterns, a type of geometrical design that arises when two patterns are layered on top of one another. When two-dimensional materials are stacked, new properties emerge that do not exist in one layer alone. And when those layers are twisted to form a moiré pattern, unprecedented tunability of electronic properties becomes possible.

For the new device, the researchers combined two different types of atomically thin materials: bilayer graphene and hexagonal boron nitride. When stacked and purposefully twisted, the materials formed a moiré pattern. By rotating one layer relative to the other, the researchers could achieve different electronic properties in each graphene layer even though they are separated by only atomic-scale dimensions. With the right choice of twist, researchers harnessed moiré physics for neuromorphic functionality at room temperature.

“With twist as a new design parameter, the number of permutations is vast,” Hersam said. “Graphene and hexagonal boron nitride are very similar structurally but just different enough that you get exceptionally strong moiré effects.”

To test the transistor, Hersam and his team trained it to recognize similar — but not identical — patterns. Just earlier this month, Hersam introduced a new nanoelectronic device capable of analyzing and categorizing data in an energy-efficient manner, but his new synaptic transistor takes machine learning and AI one leap further.

“If AI is meant to mimic human thought, one of the lowest-level tasks would be to classify data, which is simply sorting into bins,” Hersam said. “Our goal is to advance AI technology in the direction of higher-level thinking. Real-world conditions are often more complicated than current AI algorithms can handle, so we tested our new devices under more complicated conditions to verify their advanced capabilities.”

First the researchers showed the device one pattern: 000 (three zeros in a row). Then, they asked the AI to identify similar patterns, such as 111 or 101. “If we trained it to detect 000 and then gave it 111 and 101, it knows 111 is more similar to 000 than 101,” Hersam explained. “000 and 111 are not exactly the same, but both are three digits in a row. Recognizing that similarity is a higher-level form of cognition known as associative learning.”
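The 000/111/101 example lends itself to a toy software analogy. The device makes this judgment in analog hardware; the sketch below (my illustration, not the team's method) simply shows how comparing structural features, such as whether all digits are identical, rather than digit-for-digit matches, reproduces the reported behaviour:

```python
def features(pattern):
    """Toy structural features: is the pattern uniform (all digits the same),
    and how many 1s does it contain?"""
    return (len(set(pattern)) == 1, pattern.count("1"))

def similarity(a, b):
    """Count how many structural features two patterns share."""
    return sum(fa == fb for fa, fb in zip(features(a), features(b)))

# Trained on 000, then shown 111 and 101:
for candidate in ("111", "101"):
    print(candidate, similarity("000", candidate))
```

Here 111 shares the "uniform" feature with 000 while 101 shares none, mirroring the associative judgment Hersam describes.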

In experiments, the new synaptic transistor successfully recognized similar patterns, displaying its associative memory. Even when the researchers threw curveballs — like giving it incomplete patterns — it still successfully demonstrated associative learning.

“Current AI can be easy to confuse, which can cause major problems in certain contexts,” Hersam said. “Imagine if you are using a self-driving vehicle, and the weather conditions deteriorate. The vehicle might not be able to interpret the more complicated sensor data as well as a human driver could. But even when we gave our transistor imperfect input, it could still identify the correct response.”

The study, “Moiré synaptic transistor with room-temperature neuromorphic functionality,” was primarily supported by the National Science Foundation.

Here’s a link to and a citation for the paper,

Moiré synaptic transistor with room-temperature neuromorphic functionality by Xiaodong Yan, Zhiren Zheng, Vinod K. Sangwan, Justin H. Qian, Xueqiao Wang, Stephanie E. Liu, Kenji Watanabe, Takashi Taniguchi, Su-Yang Xu, Pablo Jarillo-Herrero, Qiong Ma & Mark C. Hersam. Nature volume 624, pages 551–556 (2023) DOI: https://doi.org/10.1038/s41586-023-06791-1 Published online: 20 December 2023 Issue Date: 21 December 2023

This paper is behind a paywall.

Striking similarity between memory processing of artificial intelligence (AI) models and hippocampus of the human brain

A December 18, 2023 news item on ScienceDaily shifted my focus from hardware to software when considering memory in brainlike (neuromorphic) computing,

An interdisciplinary team consisting of researchers from the Center for Cognition and Sociality and the Data Science Group within the Institute for Basic Science (IBS) [Korea] revealed a striking similarity between the memory processing of artificial intelligence (AI) models and the hippocampus of the human brain. This new finding provides a novel perspective on memory consolidation, which is a process that transforms short-term memories into long-term ones, in AI systems.

A November 28 (?), 2023 IBS press release (also on EurekAlert but published December 18, 2023), which originated the news item, describes how the team went about its research,

In the race towards developing Artificial General Intelligence (AGI), with influential entities like OpenAI and Google DeepMind leading the way, understanding and replicating human-like intelligence has become an important research interest. Central to these technological advancements is the Transformer model [Figure 1], whose fundamental principles are now being explored in new depth.

The key to powerful AI systems is grasping how they learn and remember information. The team applied principles of human brain learning, specifically concentrating on memory consolidation through the NMDA receptor in the hippocampus, to AI models.

The NMDA receptor is like a smart door in your brain that facilitates learning and memory formation. When a brain chemical called glutamate is present, the nerve cell undergoes excitation. On the other hand, a magnesium ion acts as a small gatekeeper blocking the door. Only when this ionic gatekeeper steps aside are substances allowed to flow into the cell. This is the process that allows the brain to create and keep memories, and the gatekeeper’s (the magnesium ion) role in the whole process is quite specific.

The team made a fascinating discovery: the Transformer model seems to use a gatekeeping process similar to the brain’s NMDA receptor [see Figure 1]. This revelation led the researchers to investigate if the Transformer’s memory consolidation can be controlled by a mechanism similar to the NMDA receptor’s gating process.

In the animal brain, a low magnesium level is known to weaken memory function. The researchers found that long-term memory in the Transformer can be improved by mimicking the NMDA receptor. Just like in the brain, where changing magnesium levels affects memory strength, tweaking the Transformer’s parameters to reflect the gating action of the NMDA receptor led to enhanced memory in the AI model. This breakthrough finding suggests that how AI models learn can be explained with established knowledge in neuroscience.
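As a rough caricature of the mechanism described above (not the paper's exact parameterization), one can write a feed-forward nonlinearity in which the input is multiplied by a sigmoidal gate, with a parameter standing in for the strength of the magnesium block:

```python
import math

def nmda_like(x, alpha=1.0, beta=1.0):
    """Gated nonlinearity: 'alpha' plays the role of the magnesium block
    (larger alpha = gate harder to open); with alpha near 1 this resembles
    the SiLU/GELU-type activations used in Transformer feed-forward layers.
    Parameter names here are illustrative, not taken from the paper."""
    gate = 1.0 / (1.0 + alpha * math.exp(-beta * x))
    return x * gate

# Sweeping the 'magnesium' parameter changes how much signal passes:
for alpha in (0.1, 1.0, 10.0):
    print(alpha, round(nmda_like(2.0, alpha=alpha), 3))
```

In this sketch, raising `alpha` (more "magnesium") suppresses the output for the same input, which is the kind of tunable gating the researchers exploited.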

C. Justin LEE, a neuroscientist and director at the institute, said, “This research makes a crucial step in advancing AI and neuroscience. It allows us to delve deeper into the brain’s operating principles and develop more advanced AI systems based on these insights.”

CHA Meeyoung, a data scientist on the team and at KAIST [Korea Advanced Institute of Science and Technology], notes, “The human brain is remarkable in how it operates with minimal energy, unlike the large AI models that need immense resources. Our work opens up new possibilities for low-cost, high-performance AI systems that learn and remember information like humans.”

What sets this study apart is its initiative to incorporate brain-inspired nonlinearity into an AI construct, signifying a significant advancement in simulating human-like memory consolidation. The convergence of human cognitive mechanisms and AI design not only holds promise for creating low-cost, high-performance AI systems but also provides valuable insights into the workings of the brain through AI models.

Fig. 1: (a) Diagram illustrating the ion channel activity in post-synaptic neurons. AMPA receptors are involved in the activation of post-synaptic neurons, while NMDA receptors are blocked by magnesium ions (Mg²⁺) but induce synaptic plasticity through the influx of calcium ions (Ca²⁺) when the post-synaptic neuron is sufficiently activated. (b) Flow diagram representing the computational process within the Transformer AI model. Information is processed sequentially through stages such as feed-forward layers, layer normalization, and self-attention layers. The graph depicting the current-voltage relationship of the NMDA receptors is very similar to the nonlinearity of the feed-forward layer. The input-output graph, based on the concentration of magnesium (α), shows the changes in the nonlinearity of the NMDA receptors. Courtesy: IBS

This research was presented at the 37th Conference on Neural Information Processing Systems (NeurIPS 2023) before being published in the proceedings. I found a PDF of the presentation and an early online copy of the paper before locating the paper in the published proceedings.

PDF of presentation: Transformer as a hippocampal memory consolidation model based on NMDAR-inspired nonlinearity

PDF copy of paper:

Transformer as a hippocampal memory consolidation model based on NMDAR-inspired nonlinearity by Dong-Kyum Kim, Jea Kwon, Meeyoung Cha, C. Justin Lee.

This paper was made available on OpenReview.net:

OpenReview is a platform for open peer review, open publishing, open access, open discussion, open recommendations, open directory, open API and open source.

It’s not clear to me if this paper is finalized or not and I don’t know if its presence on OpenReview constitutes publication.

Finally, the paper published in the proceedings,

Transformer as a hippocampal memory consolidation model based on NMDAR-inspired nonlinearity by Dong Kyum Kim, Jea Kwon, Meeyoung Cha, C. Justin Lee. Part of Advances in Neural Information Processing Systems 36 (NeurIPS 2023) Main Conference Track

This link will take you to the abstract, access the paper by clicking on the Paper tab.

Brain-inspired (neuromorphic) computing with twisted magnets and a patent for manufacturing permanent magnets without rare earths

I have two news bits, both of them concerned with magnets.

Patent for magnets that can be made without rare earths

I’m starting with the patent news first since this is (as the company notes in its news release) a “Landmark Patent Issued for Technology Critically Needed to Combat Chinese Monopoly.”

For those who don’t know, China supplies most of the rare earths used in computers, smart phones, and other devices. On general principles, having a single supplier dominate production of and access to a necessary material for devices that most of us rely on can raise tensions. Plus, you can’t mine for resources forever.

This December 19, 2023 Nanocrystal Technology LP news release heralds an exciting development (for the impatient, further down the page I have highlighted the salient sections),

Nanotechnology Discovery by 2023 Nobel Prize Winner Became Launch Pad to Create Permanent Magnets without Rare Earths from China

NEW YORK, NY, UNITED STATES, December 19, 2023 /EINPresswire.com/ — Integrated Nano-Magnetics Corp, a wholly owned subsidiary of Nanocrystal Technology LP, was awarded a patent for technology built upon a fundamental nanoscience discovery made by Aleksey Yekimov, its former Chief Scientific Officer.

This patent will enable the creation of strong permanent magnets which are critically needed for both industrial and military applications but cannot be manufactured without certain “rare earth” elements available mostly from China.

At a glittering awards ceremony held in Stockholm on December 10, 2023, three scientists, Aleksey Yekimov, Louis Brus (Professor at Columbia University) and Moungi Bawendi (Professor at MIT) were honored with the Nobel Prize in Chemistry for their discovery of the “quantum dot,” which is now fueling practical applications in tuning the colors of LEDs, increasing the resolution of TV screens, and improving MRI imaging.

As stated by the Royal Swedish Academy of Sciences, “Quantum dots are … bringing the greatest benefits to humankind. Researchers believe that in the future they could contribute to flexible electronics, tiny sensors, thinner solar cells, and encrypted quantum communications – so we have just started exploring the potential of these tiny particles.”

Aleksey Yekimov worked for over 19 years until his retirement as Chief Scientific Officer of Nanocrystals Technology LP, an R & D company in New York founded by two Indian-American entrepreneurs, Rameshwar Bhargava and Rajan Pillai.

Yekimov, who was born in Russia, had already received the highest scientific honors for his work before he immigrated to USA in 1999. Yekimov was greatly intrigued by Nanocrystal Technology’s research project and chose to join the company as its Chief Scientific Officer.

During its early years, the company worked on efficient light generation by doping host nanoparticles about the same size as a quantum dot with an additional impurity atom. Bhargava came up with the novel idea of incorporating a single impurity atom, a dopant, into a quantum-dot-sized host, thereby achieving an extraordinary change in the host material’s properties, such as inducing strong permanent magnetism in weak, readily available paramagnetic materials. To get a sense of the scale at which nanotechnology works, and as vividly illustrated by the Nobel Foundation, the difference in size between a quantum dot and a soccer ball is about the same as the difference between a soccer ball and planet Earth.

Currently, strong permanent magnets are manufactured from “rare earths” available mostly in China which has established a near monopoly on the supply of rare-earth based strong permanent magnets. Permanent magnets are a fundamental building block for electro-mechanical devices such as motors found in all automobiles including electric vehicles, trucks and tractors, military tanks, wind turbines, aircraft engines, missiles, etc. They are also required for the efficient functioning of audio equipment such as speakers and cell phones as well as certain magnetic storage media.

The existing market for permanent magnets is $28 billion and is projected to reach $50 billion by 2030 in view of the huge increase in usage of electric vehicles. China’s overwhelming dominance in this field has become a matter of great concern to governments of all Western and other industrialized nations. As the Wall St. Journal put it, China now has a “stranglehold” on the economies and security of other countries.

The possibility of making permanent magnets without the use of any rare earths mined in China has intrigued leading physicists and chemists for nearly 30 years. On December 19, 2023, a U.S. patent with the title “Strong Non Rare Earth Permanent Magnets from Double Doped Magnetic Nanoparticles” was granted to Integrated Nano-Magnetics Corp. [emphasis mine] Referring to this major accomplishment, Bhargava said, “The pioneering work done by Yekimov, Brus and Bawendi has provided the foundation for us to make other discoveries in nanotechnology which will be of great benefit to the world.”

I was not able to find any company websites. The best I could find is a Nanocrystals Technology LinkedIn webpage and some limited corporate data for Integrated Nano-Magnetics on opencorporates.com.

Twisted magnets and brain-inspired computing

This research offers a pathway to neuromorphic (brainlike) computing with chiral (or twisted) magnets, which, as best as I understand it, do not require rare earths. From a November 13, 2023 news item on ScienceDaily,

A form of brain-inspired computing that exploits the intrinsic physical properties of a material to dramatically reduce energy use is now a step closer to reality, thanks to a new study led by UCL [University College London] and Imperial College London [ICL] researchers.

In the new study, published in the journal Nature Materials, an international team of researchers used chiral (twisted) magnets as their computational medium and found that, by applying an external magnetic field and changing temperature, the physical properties of these materials could be adapted to suit different machine-learning tasks.

A November 9, 2023 UCL press release (also on EurekAlert but published November 13, 2023), which originated the news item, fills in a few more details about the research,

Dr Oscar Lee (London Centre for Nanotechnology at UCL and UCL Department of Electronic & Electrical Engineering), the lead author of the paper, said: “This work brings us a step closer to realising the full potential of physical reservoirs to create computers that not only require significantly less energy, but also adapt their computational properties to perform optimally across various tasks, just like our brains.

“The next step is to identify materials and device architectures that are commercially viable and scalable.”

Traditional computing consumes large amounts of electricity. This is partly because it has separate units for data storage and processing, meaning information has to be shuffled constantly between the two, wasting energy and producing heat. This is particularly a problem for machine learning, which requires vast datasets for processing. Training one large AI model can generate hundreds of tonnes of carbon dioxide.

Physical reservoir computing is one of several neuromorphic (or brain inspired) approaches that aims to remove the need for distinct memory and processing units, facilitating more efficient ways to process data. In addition to being a more sustainable alternative to conventional computing, physical reservoir computing could be integrated into existing circuitry to provide additional capabilities that are also energy efficient.

In the study, involving researchers in Japan and Germany, the team used a vector network analyser to determine the energy absorption of chiral magnets at different magnetic field strengths and temperatures ranging from -269 °C to room temperature.

They found that different magnetic phases of chiral magnets excelled at different types of computing task. The skyrmion phase, where magnetised particles are swirling in a vortex-like pattern, had a potent memory capacity apt for forecasting tasks. The conical phase, meanwhile, had little memory, but its non-linearity was ideal for transformation tasks and classification – for instance, identifying if an animal is a cat or dog.
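In reservoir computing generally, the input drives a fixed nonlinear dynamical system (the role played here by the chiral magnet) and only a simple linear readout is trained on the system's responses. The following is a minimal software sketch of that general scheme, with a random simulated reservoir and a toy one-step memory task standing in for the paper's forecasting benchmarks; all weights and parameters below are illustrative:

```python
import math
import random

random.seed(0)

N = 20  # reservoir size

# Fixed random weights: in a *physical* reservoir these correspond to the
# material's intrinsic dynamics (the magnet's response) and are never trained.
W_in = [random.uniform(-1.0, 1.0) for _ in range(N)]
W = [[random.uniform(-0.3, 0.3) for _ in range(N)] for _ in range(N)]

def run_reservoir(inputs):
    """Drive the reservoir with a scalar input sequence; return its states."""
    x = [0.0] * N
    states = []
    for u in inputs:
        x = [math.tanh(W_in[i] * u + sum(W[i][j] * x[j] for j in range(N)))
             for i in range(N)]
        states.append(list(x))
    return states

def train_readout(states, targets, lr=0.05, epochs=100):
    """Train ONLY the linear readout (simple least-mean-squares updates)."""
    w = [0.0] * N
    for _ in range(epochs):
        for s, t in zip(states, targets):
            err = t - sum(wi * si for wi, si in zip(w, s))
            w = [wi + lr * err * si for wi, si in zip(w, s)]
    return w

def mse(w, states, targets):
    return sum((t - sum(wi * si for wi, si in zip(w, s))) ** 2
               for s, t in zip(states, targets)) / len(targets)

# Toy memory task: from the state at time t, recall the input at time t-1.
inputs = [random.uniform(-1.0, 1.0) for _ in range(200)]
states = run_reservoir(inputs)
targets = [0.0] + inputs[:-1]
w = train_readout(states[20:], targets[20:])  # skip the initial transient

baseline = sum(t * t for t in targets[20:]) / len(targets[20:])  # zero predictor
print(round(mse(w, states[20:], targets[20:]), 4), "vs baseline", round(baseline, 4))
```

The trained readout beats the do-nothing baseline because the reservoir's state retains a trace of past inputs, which is the "memory capacity" the skyrmion phase excelled at.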

Co-author Dr Jack Gartside, of Imperial College London, said: “Our collaborators at UCL in the group of Professor Hidekazu Kurebayashi recently identified a promising set of materials for powering unconventional computing. These materials are special as they can support an especially rich and varied range of magnetic textures. Working with the lead author Dr Oscar Lee, the Imperial College London group [led by Dr Gartside, Kilian Stenning and Professor Will Branford] designed a neuromorphic computing architecture to leverage the complex material properties to match the demands of a diverse set of challenging tasks. This gave great results, and showed how reconfiguring physical phases can directly tailor neuromorphic computing performance.”

The work also involved researchers at the University of Tokyo and Technische Universität München and was supported by the Leverhulme Trust, Engineering and Physical Sciences Research Council (EPSRC), Imperial College London President’s Excellence Fund for Frontier Research, Royal Academy of Engineering, the Japan Science and Technology Agency, Katsu Research Encouragement Award, Asahi Glass Foundation, and the DFG (German Research Foundation).

Here’s a link to and a citation for the paper,

Task-adaptive physical reservoir computing by Oscar Lee, Tianyi Wei, Kilian D. Stenning, Jack C. Gartside, Dan Prestwood, Shinichiro Seki, Aisha Aqeel, Kosuke Karube, Naoya Kanazawa, Yasujiro Taguchi, Christian Back, Yoshinori Tokura, Will R. Branford & Hidekazu Kurebayashi. Nature Materials volume 23, pages 79–87 (2024) DOI: https://doi.org/10.1038/s41563-023-01698-8 Published online: 13 November 2023 Issue Date: January 2024

This paper is open access.

Physical neural network based on nanowires can learn and remember ‘on the fly’

A November 1, 2023 news item on Nanowerk announced new work on neuromorphic engineering from Australia,

For the first time, a physical neural network has successfully been shown to learn and remember ‘on the fly’, in a way inspired by and similar to how the brain’s neurons work.

The result opens a pathway for developing efficient and low-energy machine intelligence for more complex, real-world learning and memory tasks.

Key Takeaways
*The nanowire-based system can learn and remember ‘on the fly,’ processing dynamic, streaming data for complex learning and memory tasks.

*This advancement overcomes the challenge of heavy memory and energy usage commonly associated with conventional machine learning models.

*The technology achieved a 93.4% accuracy rate in image recognition tasks, using real-time data from the MNIST database of handwritten digits.

*The findings promise a new direction for creating efficient, low-energy machine intelligence applications, such as real-time sensor data processing.

Nanowire neural network
Caption: Electron microscope image of the nanowire neural network that arranges itself like ‘Pick Up Sticks’. The junctions where the nanowires overlap act in a way similar to how our brain’s synapses operate, responding to electric current. Credit: The University of Sydney

A November 1, 2023 University of Sydney news release (also on EurekAlert), which originated the news item, elaborates on the research,

Published today [November 1, 2023] in Nature Communications, the research is a collaboration between scientists at the University of Sydney and University of California at Los Angeles.

Lead author Ruomin Zhu, a PhD student from the University of Sydney Nano Institute and School of Physics, said: “The findings demonstrate how brain-inspired learning and memory functions using nanowire networks can be harnessed to process dynamic, streaming data.”

Nanowire networks are made up of tiny wires that are just billionths of a metre in diameter. The wires arrange themselves into patterns reminiscent of the children’s game ‘Pick Up Sticks’, mimicking neural networks, like those in our brains. These networks can be used to perform specific information processing tasks.

Memory and learning tasks are achieved using simple algorithms that respond to changes in electronic resistance at junctions where the nanowires overlap. Known as ‘resistive memory switching’, this function is created when electrical inputs encounter changes in conductivity, similar to what happens with synapses in our brain.
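The resistive switching described here can be caricatured as a junction whose conductance strengthens with each input pulse and relaxes when idle. A toy model (class, parameter names, and values are my own, not taken from the study):

```python
# Toy resistive-memory junction: conductance strengthens with each voltage
# pulse (potentiation) and relaxes toward rest between pulses (forgetting).
class Junction:
    def __init__(self, g_min=0.1, g_max=1.0, decay=0.05, step=0.1):
        self.g = g_min            # current conductance
        self.g_min, self.g_max = g_min, g_max
        self.decay, self.step = decay, step

    def pulse(self):
        """An input spike strengthens the junction, like a synapse."""
        self.g = min(self.g_max, self.g + self.step * (self.g_max - self.g))

    def rest(self):
        """Without input, conductance relaxes toward its resting value."""
        self.g = max(self.g_min, self.g - self.decay * (self.g - self.g_min))

j = Junction()
for _ in range(10):
    j.pulse()
strengthened = j.g
for _ in range(10):
    j.rest()
print(round(strengthened, 3), "->", round(j.g, 3))
```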

In this study, researchers used the network to recognise and remember sequences of electrical pulses corresponding to images, inspired by the way the human brain processes information.

Supervising researcher Professor Zdenka Kuncic said the memory task was similar to remembering a phone number. The network was also used to perform a benchmark image recognition task, accessing images in the MNIST database of handwritten digits, a collection of 70,000 small greyscale images used in machine learning.

“Our previous research established the ability of nanowire networks to remember simple tasks. This work has extended these findings by showing tasks can be performed using dynamic data accessed online,” she said.

“This is a significant step forward as achieving an online learning capability is challenging when dealing with large amounts of data that can be continuously changing. A standard approach would be to store data in memory and then train a machine learning model using that stored information. But this would chew up too much energy for widespread application.

“Our novel approach allows the nanowire neural network to learn and remember ‘on the fly’, sample by sample, extracting data online, thus avoiding heavy memory and energy usage.”

Mr Zhu said there were other advantages when processing information online.

“If the data is being streamed continuously, such as it would be from a sensor for instance, machine learning that relied on artificial neural networks would need to have the ability to adapt in real-time, which they are currently not optimised for,” he said.

In this study, the nanowire neural network displayed a benchmark machine learning capability, scoring 93.4 percent in correctly identifying test images. The memory task involved recalling sequences of up to eight digits. For both tasks, data was streamed into the network to demonstrate its capacity for online learning and to show how memory enhances that learning.
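"On the fly" learning of the kind described, sample by sample with no stored dataset, can be illustrated with an ordinary online learner. This sketch uses a simple perceptron on a synthetic stream, purely as a software analogy for streaming updates, not as a model of the nanowire hardware:

```python
import random

random.seed(1)

# Toy streaming task: classify points by which side of a fixed line they fall on.
# The model updates one sample at a time and stores no dataset.
w, b, lr = [0.0, 0.0], 0.0, 0.1

def stream(n):
    """Yield (point, label) pairs one at a time, as a sensor might."""
    for _ in range(n):
        x = [random.uniform(-1, 1), random.uniform(-1, 1)]
        yield x, 1 if x[0] + 0.5 * x[1] > 0 else -1

correct = seen = 0
for x, y in stream(2000):
    pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1
    seen += 1
    correct += (pred == y)
    if pred != y:  # perceptron rule: update only on mistakes
        w = [wi + lr * y * xi for wi, xi in zip(w, x)]
        b += lr * y

print("running accuracy:", round(correct / seen, 3))
```

Because each sample is used once and discarded, memory use stays constant no matter how long the stream runs, which is the efficiency argument Kuncic makes above.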

Here’s a link to and a citation for the paper,

Online dynamical learning and sequence memory with neuromorphic nanowire networks by Ruomin Zhu, Sam Lilak, Alon Loeffler, Joseph Lizier, Adam Stieg, James Gimzewski & Zdenka Kuncic. Nature Communications volume 14, Article number: 6697 (2023) DOI: https://doi.org/10.1038/s41467-023-42470-5 Published: 01 November 2023

This paper is open access.

You’ll notice a number of this team’s members are also listed in the citation in my June 21, 2023 posting “Learning and remembering like a human brain: nanowire networks” and you’ll see some familiar names in the citation in my June 17, 2020 posting “A tangle of silver nanowires for brain-like action.”

Adaptive neural connectivity with an event-based architecture using photonic processors

On first glance it looked like a set of matches. If they had more dimension, these could also have been a set of pencils, but no,

Caption: The chip contains almost 8,400 functioning artificial neurons from waveguide-coupled phase-change material. The researchers trained this neural network to distinguish between German and English texts on the basis of vowel frequency. Credit: Jonas Schütte / Pernice Group Courtesy: University of Münster

An October 23, 2023 news item on Nanowerk introduces research into a new approach to optical neural networks,

A team of researchers headed by physicists Prof. Wolfram Pernice and Prof. Martin Salinga and computer specialist Prof. Benjamin Risse, all from the University of Münster, has developed a so-called event-based architecture, using photonic processors. In a similar way to the brain, this makes possible the continuous adaptation of the connections within the neural network.

Key Takeaways

*Researchers have created a new computing architecture that mimics biological neural networks, using photonic processors for data transportation and processing.

*The new system enables continuous adaptation of connections within the neural network, crucial for learning processes. This is known as both synaptic and structural plasticity.

*Unlike traditional studies, the connections or synapses in this photonic neural network are not hardware-based but are coded based on optical pulse properties, allowing for a single chip to hold several thousand neurons.

*Light-based processors in this system offer a much higher bandwidth and lower energy consumption compared to traditional electronic processors.

*The researchers successfully tested the system using an evolutionary algorithm to differentiate between German and English texts based on vowel count, highlighting its potential for rapid and energy-efficient AI applications.

The Research

Modern computer models – for example for complex, potent AI applications – push traditional digital computer processes to their limits.

The person who edited the original press release included in the news item above is not credited.

Here’s the unedited original October 23, 2023 University of Münster press release (also on EurekAlert),

Modern computer models – for example for complex, potent AI applications – push traditional digital computer processes to their limits. New types of computing architecture, which emulate the working principles of biological neural networks, hold the promise of faster, more energy-efficient data processing. A team of researchers has now developed a so-called event-based architecture, using photonic processors with which data are transported and processed by means of light. In a similar way to the brain, this makes possible the continuous adaptation of the connections within the neural network. These changeable connections are the basis for learning processes. For the purposes of the study, a team working at Collaborative Research Centre 1459 (“Intelligent Matter”) – headed by physicists Prof. Wolfram Pernice and Prof. Martin Salinga and computer specialist Prof. Benjamin Risse, all from the University of Münster – joined forces with researchers from the Universities of Exeter and Oxford in the UK. The study has been published in the journal “Science Advances”.

What is needed for a neural network in machine learning are artificial neurons which are activated by external excitatory signals, and which have connections to other neurons. The connections between these artificial neurons are called synapses – just like the biological original. For their study, the team of researchers in Münster used a network consisting of almost 8,400 optical neurons made of waveguide-coupled phase-change material, and the team showed that the connection between any two of these neurons can indeed become stronger or weaker (synaptic plasticity), and that new connections can be formed, or existing ones eliminated (structural plasticity). In contrast to other similar studies, the synapses were not hardware elements but were coded as a result of the properties of the optical pulses – in other words, as a result of the respective wavelength and of the intensity of the optical pulse. This made it possible to integrate several thousand neurons on one single chip and connect them optically.

In comparison with traditional electronic processors, light-based processors offer a significantly higher bandwidth, making it possible to carry out complex computing tasks with lower energy consumption. This new approach is still at the stage of basic research. “Our aim is to develop an optical computing architecture which in the long term will make it possible to compute AI applications in a rapid and energy-efficient way,” says Frank Brückerhoff-Plückelmann, one of the lead authors.

Methodology: The non-volatile phase-change material can be switched between an amorphous structure and a crystalline structure with a highly ordered atomic lattice. This feature allows permanent data storage even without an energy supply. The researchers tested the performance of the neural network by using an evolutionary algorithm to train it to distinguish between German and English texts. The recognition parameter they used was the number of vowels in the text.
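The vowel-count criterion lends itself to a toy illustration (ordinary Python, nothing like the photonic hardware; the sample sentences, fitness function, and search parameters are all invented for illustration): a single threshold on the vowel fraction of a text, tuned by a minimal evolutionary loop, separates English from German in this tiny made-up corpus.

```python
import random

VOWELS = set("aeiouäöü")

def vowel_fraction(text):
    """Fraction of alphabetic characters that are vowels."""
    letters = [c.lower() for c in text if c.isalpha()]
    return sum(c in VOWELS for c in letters) / max(len(letters), 1)

# Invented toy corpus: label 1 = English, 0 = German.
SAMPLES = [
    ("a lazy ocean breeze eases over the quiet bay", 1),
    ("see the eerie aura of a radio audio era", 1),
    ("Der Herbst bringt frischen starken Wind", 0),
    ("Schlecht klirrt der Frost durchs Fenster", 0),
]

def fitness(threshold):
    """(accuracy, worst-case margin): a text is called English when its
    vowel fraction exceeds the threshold."""
    margins = [(vowel_fraction(t) - threshold) * (1 if y else -1)
               for t, y in SAMPLES]
    accuracy = sum(m > 0 for m in margins) / len(margins)
    return accuracy, min(margins)

def evolve(generations=50, offspring=20, seed=0):
    """Minimal (1+lambda) evolutionary search over the single threshold."""
    rng = random.Random(seed)
    best = rng.random()
    for _ in range(generations):
        candidates = [best] + [min(max(best + rng.gauss(0, 0.05), 0.0), 1.0)
                               for _ in range(offspring)]
        best = max(candidates, key=fitness)
    return best

threshold = evolve()
print(fitness(threshold)[0])  # classification accuracy on the toy corpus
```

The point of the sketch is only that a single scalar parameter, optimized by mutation and selection rather than gradient descent, suffices for this task; the actual experiment trained the optical network's responses, not a hand-written threshold.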

The researchers received financial support from the German Research Foundation (DFG), the European Commission and “UK Research and Innovation”.

Here’s a link to and a citation for the paper,

Event-driven adaptive optical neural network by Frank Brückerhoff-Plückelmann, Ivonne Bente, Marlon Becker, Niklas Vollmar, Nikolaos Farmakidis, Emma Lomonte, Francesco Lenzini, C. David Wright, Harish Bhaskaran, Martin Salinga, Benjamin Risse, and Wolfram H. P. Pernice. Science Advances, 20 Oct 2023, Vol. 9, Issue 42. DOI: 10.1126/sciadv.adi9127

This paper is open access.

Living technology possibilities

Before launching into the possibilities, here are two descriptions of ‘living technology’ from the European Centre for Living Technology’s (ECLT) homepage,

Goals

Promote, carry out and coordinate research activities and the diffusion of scientific results in the field of living technology. The scientific areas for living technology are the nano-bio-technologies, self-organizing and evolving information and production technologies, and adaptive complex systems.

History

Founded in 2004 the European Centre for Living Technology is an international and interdisciplinary research centre established as an inter-university consortium, currently involving 18 European and extra-European institutional affiliates.

The Centre is devoted to the study of technologies that exhibit life-like properties including self-organization, adaptability and the capacity to evolve.

Despite the reference to “nano-bio-technologies,” this October 11, 2023 news item on ScienceDaily focuses on microscale living technology,

In a recent article in the high-profile journal “Advanced Materials,” researchers in Chemnitz show just how close and necessary the transition to sustainable living technology is, based on the morphogenesis of self-assembling microelectronic modules, strengthening the recent membership of Chemnitz University of Technology in the European Centre for Living Technology (ECLT) in Venice.

An October 11, 2023 Chemnitz University of Technology (Technische Universität Chemnitz; TU Chemnitz) press release (also on EurekAlert), which originated the news item, delves further into the topic, Note: Links have been removed,

It is now apparent that the mass-produced artefacts of technology in our increasingly densely populated world – whether electronic devices, cars, batteries, phones, household appliances, or industrial robots – are increasingly at odds with the sustainable bounded ecosystems achieved by living organisms based on cells over millions of years. Cells provide organisms with soft and sustainable environmental interactions with complete recycling of material components, except in a few notable cases like the creation of oxygen in the atmosphere, and of the fossil fuel reserves of oil and coal (as a result of missing biocatalysts). However, the fantastic information content of biological cells (gigabits of information in DNA alone) and the complexities of protein biochemistry for metabolism seem to place a cellular approach well beyond the current capabilities of technology, and prevent the development of intrinsically sustainable technology.

SMARTLETs: tiny shape-changing modules that collectively self-organize to larger more complex systems

A recent perspective review published in the very high impact journal Advanced Materials this month [October 2023] by researchers at the Research Center for Materials, Architectures and Integration of Nanomembranes (MAIN) of Chemnitz University of Technology shows how a novel form of high-information-content Living Technology is now within reach, based on microrobotic electronic modules called SMARTLETs, which will soon be capable of self-assembling into complex artificial organisms. The research belongs to the new field of Microelectronic Morphogenesis, the creation of form under microelectronic control, and builds on work over the previous years at Chemnitz University of Technology to construct self-folding and self-locomoting thin-film electronic modules, now carrying tiny silicon chiplets between the folds for a massive increase in information-processing capabilities. Sufficient information can now be stored in each module to encode not only complex functions but also fabrication recipes (electronic genomes) that allow the modules to be copied and evolved like cells, and to do so safely, because reproduction is gated through human-operated clean room facilities.

Electrical self-awareness during self-assembly

In addition, the chiplets can provide neuromorphic learning capabilities, allowing them to improve performance during operation. A further key feature of the specific self-assembly of these modules, based on matching physical bar codes, is that electrical and fluidic connections can be achieved between modules. These can then be employed to make the electronic chiplets on board “aware” of the state of assembly and of potential errors, allowing them to direct repair, correct mis-assembly, induce disassembly, and form collective functions spanning many modules. Such functions include extended communication (antennae), power harvesting and redistribution, remote sensing, and material redistribution.
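The bar-code-gated assembly described above can be caricatured in a few lines (an invented toy rule, not the actual SMARTLET mechanism): modules dock only when their codes are complementary, and each module can afterwards verify its own assembly state, a loose analogue of the electrical self-awareness the researchers describe.

```python
# Toy sketch of bar-code-gated self-assembly (invented matching rule).

COMPLEMENT = {"0": "1", "1": "0"}

def complement(code):
    """Bitwise complement of a binary bar code string."""
    return "".join(COMPLEMENT[c] for c in code)

class Module:
    def __init__(self, barcode):
        self.barcode = barcode
        self.partner = None

    def try_dock(self, other):
        """Dock only when the bar codes are exact complements."""
        if other.barcode == complement(self.barcode):
            self.partner, other.partner = other, self
            return True
        return False  # mismatched codes: mis-assembly is refused up front

    def assembled_correctly(self):
        """Self-check: verify the docked partner's code after assembly,
        the toy analogue of chiplets being 'aware' of assembly errors."""
        return (self.partner is not None
                and self.partner.barcode == complement(self.barcode))
```

In the real system the matching is physical and the verification electrical, but the division of labor is the same: the bar codes gate which connections can form at all, and the on-board electronics confirm afterwards that what formed is what was intended.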

So why is this technology vital for sustainability?

The complete digital fabrication description for the modules, of which only a limited number of types are required even for complex organisms, allows their material content, responsible originator, and environmentally relevant exposure all to be read out. Prof. Dagmar Nuissl-Gesmann from the Law Department at Chemnitz University of Technology observes that “this fine-grained documentation of responsibility intrinsic down to microscopic scales will be a game changer in allowing legal assignment of environmental and social responsibility for our technical artefacts”.

Furthermore, the self-locomotion and self-assembly-disassembly capabilities allow the modules to self-sort for recycling. Modules can be regained, reused, reconfigured, and redeployed in different artificial organisms. If they are damaged, their limited and documented types facilitate efficient custom recycling of materials, with established and optimized protocols for these sorted and now identical entities. These capabilities complement the other, more obvious advantages in terms of design development and reuse in this novel reconfigurable medium. As Prof. Marlen Arnold, an expert in sustainability at the Faculty of Economics and Business Administration, observes, “Even at high volumes of deployment use, these properties could provide this technology with a hitherto unprecedented level of sustainability which would set the bar for future technologies to share our planet safely with us.”

Contribution to European Living Technology

“This research is a first contribution of MAIN/Chemnitz University of Technology, as a new member of the European Centre for Living Technology ECLT, based in Venice,” says Prof. Oliver G. Schmidt, Scientific Director of the Research Center MAIN, adding that “It’s fantastic to see that our deep collaboration with ECLT is paying off so quickly with immediate transdisciplinary benefit for several scientific communities.” “Theoretical research at the ECLT has been urgently in need of novel technology systems able to implement the core properties of living systems,” comments Prof. John McCaskill, coauthor of the paper and a founding director of the ECLT in 2004.

Here’s a link to and a citation for the researchers’ perspective paper,

Microelectronic Morphogenesis: Smart Materials with Electronics Assembling into Artificial Organisms by John S. McCaskill, Daniil Karnaushenko, Minshen Zhu, Oliver G. Schmidt. Advanced Materials DOI: https://doi.org/10.1002/adma.202306344 First published: 09 October 2023

This paper is open access.