Category Archives: electronics

Shaving the ‘hairs’ off nanocrystals for more efficient electronics

A March 24, 2022 news item on phys.org announced research into nanoscale crystals and how they might be integrated into electronic devices, Note: A link has been removed,

You can carry an entire computer in your pocket today because the technological building blocks have been getting smaller and smaller since the 1950s. But in order to create future generations of electronics—such as more powerful phones, more efficient solar cells, or even quantum computers—scientists will need to come up with entirely new technology at the tiniest scales.

One area of interest is nanocrystals. These tiny crystals can assemble themselves into many configurations, but scientists have had trouble figuring out how to make them talk to each other.  

A new study introduces a breakthrough in making nanocrystals function together electronically. Published March 25 [2022] in Science, the research may open the doors to future devices with new abilities. 

A March 25, 2022 University of Chicago news release (also on EurekAlert but published on March 24, 2022), which originated the news item, expands on the possibilities the research opens up, Note: Links have been removed,

“We call these super atomic building blocks, because they can grant new abilities—for example, letting cameras see in the infrared range,” said University of Chicago Prof. Dmitri Talapin, the corresponding author of the paper. “But until now, it has been very difficult to both assemble them into structures and have them talk to each other. Now for the first time, we don’t have to choose. This is a transformative improvement.”  

In their paper, the scientists lay out design rules which should allow for the creation of many different types of materials, said Josh Portner, a Ph.D. student in chemistry and one of the first authors of the study. 

A tiny problem

Scientists can grow nanocrystals out of many different materials: metals, semiconductors, and magnets will each yield different properties. But the trouble was that whenever they tried to assemble these nanocrystals together into arrays, the new supercrystals would grow with long “hairs” around them. 

These hairs made it difficult for electrons to jump from one nanocrystal to another. Electrons are the messengers of electronic communication; their ability to move easily from place to place is key to any electronic device.

The researchers needed a method to reduce the hairs around each nanocrystal, so they could pack them in more tightly and reduce the gaps in between. “When these gaps are smaller by just a factor of three, the probability for electrons to jump across is about a billion times higher,” said Talapin, the Ernest DeWitt Burton Distinguished Service Professor of Chemistry and Molecular Engineering at UChicago and a senior scientist at Argonne National Laboratory. “It changes very strongly with distance.”
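
To get a feel for that distance dependence, here's a minimal back-of-the-envelope sketch in Python. The decay constant and gap widths are assumed, textbook-scale values for an organic ligand barrier, not numbers from the paper.

    import math

    # Tunneling probability falls off exponentially with gap width d:
    # T(d) ~ exp(-beta * d). beta is an assumed, textbook-scale decay
    # constant for an organic ligand barrier, not a value from the paper.
    beta = 10.0    # 1/nm, assumed decay constant
    d_hairy = 3.0  # nm, assumed gap with the ligand "hairs" in place
    d_trim = 1.0   # nm, the same gap shrunk by a factor of three

    ratio = math.exp(-beta * d_trim) / math.exp(-beta * d_hairy)
    print(f"Electron jump probability increases ~{ratio:.0e}x")  # ~5e8

With these assumed numbers the jump probability rises by roughly half a billion times, the same order of magnitude Talapin describes.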

To shave off the hairs, they sought to understand what was going on at the atomic level. For this, they needed the aid of powerful X-rays at the Center for Nanoscale Materials at Argonne and the Stanford Synchrotron Radiation Lightsource at SLAC National Accelerator Laboratory, as well as powerful simulations and models of the chemistry and physics at play. All these allowed them to understand what was happening at the surface—and find the key to harnessing their production.

Part of the process to grow supercrystals is done in solution—that is, in liquid. It turns out that as the crystals grow, they undergo an unusual transformation in which gas, liquid and solid phases all coexist. By precisely controlling the chemistry of that stage, they could create crystals with harder, slimmer exteriors which could be packed in together much more closely. “Understanding their phase behavior was a massive leap forward for us,” said Portner. 

The full range of applications remains unclear, but the scientists can think of multiple areas where the technique could lead. “For example, perhaps each crystal could be a qubit in a quantum computer; coupling qubits into arrays is one of the fundamental challenges of quantum technology right now,” said Talapin. 

Portner is also interested in exploring the unusual intermediate state of matter seen during supercrystal growth: “Triple phase coexistence like this is rare enough that it’s intriguing to think about how to take advantage of this chemistry and build new materials.”

The study included scientists from the University of Chicago, Technische Universität Dresden, Northwestern University, Arizona State University, SLAC, Lawrence Berkeley National Laboratory, and the University of California, Berkeley.

Here’s a link to and a citation for the paper,

Self-assembly of nanocrystals into strongly electronically coupled all-inorganic supercrystals by Igor Coropceanu, Eric M. Janke, Joshua Portner, Danny Haubold, Trung Dac Nguyen, Avishek Das, Christian P. N. Tanner, James K. Utterback, Samuel W. Teitelbaum, Margaret H. Hudson, Nivedina A. Sarma, Alex M. Hinkle, Christopher J. Tassone, Alexander Eychmüller, David T. Limmer, Monica Olvera de la Cruz, Naomi S. Ginsberg and Dmitri V. Talapin. Science • 24 Mar 2022 • Vol 375, Issue 6587 • pp. 1422-1426 • DOI: 10.1126/science.abm6753

This paper is behind a paywall.

Honey-based neuromorphic chips for brainlike computers?

Photo by Mariana Ibanez on Unsplash. Courtesy: Washington State University

An April 5, 2022 news item on Nanowerk explains the connection between honey and a neuromorphic (brainlike) computer chip, Note: Links have been removed,

Honey might be a sweet solution for developing environmentally friendly components for neuromorphic computers, systems designed to mimic the neurons and synapses found in the human brain.

Hailed by some as the future of computing, neuromorphic systems are much faster and use much less power than traditional computers. Washington State University engineers have demonstrated one way to make them more organic too.

In a study published in Journal of Physics D (“Memristive synaptic device based on a natural organic material—honey for spiking neural network in biodegradable neuromorphic systems”), the researchers show that honey can be used to make a memristor, a component similar to a transistor that can not only process but also store data in memory.

An April 5, 2022 Washington State University (WSU) news release (also on EurekAlert) by Sara Zaske, which originated the news item, describes the purpose for the work and details about making chips from honey,

“This is a very small device with a simple structure, but it has very similar functionalities to a human neuron,” said Feng Zhao, associate professor in WSU’s School of Engineering and Computer Science and corresponding author on the study. “This means if we can integrate millions or billions of these honey memristors together, then they can be made into a neuromorphic system that functions much like a human brain.”

For the study, Zhao and first author Brandon Sueoka, a WSU graduate student in Zhao’s lab, created memristors by processing honey into a solid form and sandwiching it between two metal electrodes, making a structure similar to a human synapse. They then tested the honey memristors’ ability to mimic the work of synapses, with fast switching on and off speeds of 100 and 500 nanoseconds, respectively. The memristors also emulated the synapse functions known as spike-timing dependent plasticity and spike-rate dependent plasticity, which are responsible for learning processes in human brains and for retaining new information in neurons.
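
For readers unfamiliar with spike-timing dependent plasticity, here's a minimal sketch of the classic pair-based rule that devices like these are asked to emulate. The amplitudes and time constant are generic textbook values, not parameters from the WSU paper.

    import math

    # Pair-based STDP: a synapse strengthens when the presynaptic neuron
    # fires just before the postsynaptic one, and weakens in the reverse
    # order. Constants are generic textbook values, not the paper's.
    A_PLUS, A_MINUS = 0.01, 0.012  # potentiation / depression amplitudes
    TAU = 20.0                     # ms, plasticity time constant

    def stdp_dw(t_pre, t_post):
        """Weight change for one pre/post spike pair (times in ms)."""
        dt = t_post - t_pre
        if dt >= 0:  # pre before post: strengthen the connection
            return A_PLUS * math.exp(-dt / TAU)
        return -A_MINUS * math.exp(dt / TAU)  # post before pre: weaken

    print(stdp_dw(0.0, 5.0))  # positive: potentiation
    print(stdp_dw(5.0, 0.0))  # negative: depression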

The WSU engineers created the honey memristors on a micro-scale, so they are about the size of a human hair. The research team led by Zhao plans to develop them on a nanoscale, about 1/1000 of a human hair, and bundle many millions or even billions together to make a full neuromorphic computing system.

Currently, conventional computer systems are based on what’s called the von Neumann architecture. Named after its creator, this architecture involves an input, usually from a keyboard and mouse, and an output, such as the monitor. It also has a CPU, or central processing unit, and RAM, or memory storage. Transferring data through all these mechanisms from input to processing to memory to output takes a lot of power, at least compared to the human brain, Zhao said. For instance, the Fugaku supercomputer uses upwards of 28 megawatts (28 million watts) to run, while the brain uses only around 10 to 20 watts, a difference of roughly a million-fold.

The human brain has more than 100 billion neurons with more than 1,000 trillion synapses, or connections, among them. Each neuron can both process and store data, which makes the brain much more efficient than a traditional computer, and developers of neuromorphic computing systems aim to mimic that structure.

Several companies, including Intel and IBM, have released neuromorphic chips which have the equivalent of more than 100 million “neurons” per chip, but this is not yet near the number in the brain. Many developers are also still using the same nonrenewable and toxic materials that are currently used in conventional computer chips.

Many researchers, including Zhao’s team, are searching for biodegradable and renewable solutions for use in this promising new type of computing. Zhao is also leading investigations into using proteins and other sugars such as those found in Aloe vera leaves in this capacity, but he sees strong potential in honey.

“Honey does not spoil,” he said. “It has a very low moisture concentration, so bacteria cannot survive in it. This means these computer chips will be very stable and reliable for a very long time.”

The honey memristor chips developed at WSU should tolerate the lower levels of heat generated by neuromorphic systems, which do not get as hot as traditional computers. The honey memristors will also cut down on electronic waste.

“When we want to dispose of devices using computer chips made of honey, we can easily dissolve them in water,” he said. “Because of these special properties, honey is very useful for creating renewable and biodegradable neuromorphic systems.”

This also means, Zhao cautioned, that just like conventional computers, users will still have to avoid spilling their coffee on them.

Nice note of humour at the end. There are a few questions: I wonder if the variety of honey (clover, orange blossom, blackberry, etc.) has an impact on the chip’s speed and/or longevity. Also, if someone spilled coffee and the chip melted and a child decided to lap it up, what would happen?

Here’s a link to and a citation for the paper,

Memristive synaptic device based on a natural organic material—honey for spiking neural network in biodegradable neuromorphic systems by Brandon Sueoka and Feng Zhao. Journal of Physics D: Applied Physics, Volume 55, Number 22, 225105. Published 7 March 2022. © 2022 IOP Publishing Ltd

This paper is behind a paywall.

One of world’s most precise microchip sensors thanks to nanotechnology, machine learning, extended cognition, and spiderwebs

I love science stories about the inspirational qualities of spiderwebs. A November 26, 2021 news item on phys.org describes how spiderwebs have inspired advances in sensors and, potentially, quantum computing,

A team of researchers from TU Delft [Delft University of Technology; Netherlands] managed to design one of the world’s most precise microchip sensors. The device can function at room temperature—a ‘holy grail’ for quantum technologies and sensing. Combining nanotechnology and machine learning inspired by nature’s spiderwebs, they were able to make a nanomechanical sensor vibrate in extreme isolation from everyday noise. This breakthrough, published in the Advanced Materials Rising Stars Issue, has implications for the study of gravity and dark matter as well as the fields of quantum internet, navigation and sensing.

Inspired by nature’s spider webs and guided by machine learning, Richard Norte (left) and Miguel Bessa (right) demonstrate a new type of sensor in the lab. [Photography: Frank Auperlé]

A November 24, 2021 TU Delft press release (also on EurekAlert but published on November 23, 2021), which originated the news item, describes the research in more detail,

One of the biggest challenges for studying vibrating objects at the smallest scale, like those used in sensors or quantum hardware, is how to keep ambient thermal noise from interacting with their fragile states. Quantum hardware for example is usually kept at near absolute zero (−273.15°C) temperatures, with refrigerators costing half a million euros apiece. Researchers from TU Delft created a web-shaped microchip sensor which resonates extremely well in isolation from room temperature noise. Among other applications, their discovery will make building quantum devices much more affordable.

Hitchhiking on evolution
Richard Norte and Miguel Bessa, who led the research, were looking for new ways to combine nanotechnology and machine learning. How did they come up with the idea to use spiderwebs as a model? Richard Norte: “I’d been doing this work for a decade already when, during lockdown, I noticed a lot of spiderwebs on my terrace. I realised spiderwebs are really good vibration detectors, in that spiders want to measure vibrations inside the web to find their prey, but not outside of it, like wind through a tree. So why not hitchhike on millions of years of evolution and use a spiderweb as an initial model for an ultra-sensitive device?”

Since the team did not know anything about spiderwebs’ complexities, they let machine learning guide the discovery process. Miguel Bessa: “We knew that the experiments and simulations were costly and time-consuming, so with my group we decided to use an algorithm called Bayesian optimization, to find a good design using few attempts.” Dongil Shin, co-first author in this work, then implemented the computer model and applied the machine learning algorithm to find the new device design. 
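
Here's a rough Python sketch of what letting Bayesian optimization guide the discovery can look like in practice, using the scikit-optimize library. The simulate_web objective, its three parameters, and the ranges are hypothetical stand-ins for the team's expensive resonator simulations, not their actual model.

    from skopt import gp_minimize  # pip install scikit-optimize

    def simulate_web(params):
        # Hypothetical stand-in for a costly resonator simulation that
        # returns a loss (here, the negative of a toy quality factor).
        s1, s2, s3 = params
        return -(s1 * s2 - (s3 - 0.5) ** 2)

    # Bayesian optimization builds a probabilistic surrogate of the
    # objective and picks each new design to try, so a good design can
    # be found in few attempts.
    result = gp_minimize(
        simulate_web,
        dimensions=[(0.1, 1.0)] * 3,  # allowed range of each design parameter
        n_calls=30,                   # few evaluations, since each is costly
        random_state=0,
    )
    print(result.x, -result.fun)      # best design found and its toy score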

Microchip sensor based on spiderwebs
To the researchers’ surprise, the algorithm proposed a relatively simple spiderweb out of 150 different spiderweb designs, one that consists of only six strings put together in a deceptively simple way. Bessa: “Dongil’s computer simulations showed that this device could work at room temperature, in which atoms vibrate a lot, but still have an incredibly low amount of energy leaking in from the environment – a higher Quality factor in other words. With machine learning and optimization we managed to adapt Richard’s spider web concept towards this much better quality factor.”

Based on this new design, co-first author Andrea Cupertino built a microchip sensor with an ultra-thin, nanometre-thick film of a ceramic material called silicon nitride. They tested the design by forcefully vibrating the microchip ‘web’ and measuring the time it takes for the vibrations to stop. The result was spectacular: a record-breaking isolated vibration at room temperature. Norte: “We found almost no energy loss outside of our microchip web: the vibrations move in a circle on the inside and don’t touch the outside. This is somewhat like giving someone a single push on a swing, and having them swing on for nearly a century without stopping.”
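
That ringdown measurement maps directly onto the resonator's quality factor. Here's a quick sketch of the arithmetic; the frequency and decay time below are assumed, illustrative values rather than the paper's measured ones.

    import math

    # Ringdown estimate of a resonator's quality factor: excite the web,
    # stop driving it, and time the 1/e decay of the vibration amplitude.
    # Q = pi * f0 * tau for amplitude decay time tau. Values are assumed.
    f0 = 100e3    # Hz, assumed resonance frequency of the microchip web
    tau = 1000.0  # s, assumed amplitude decay time

    Q = math.pi * f0 * tau
    print(f"Quality factor ~ {Q:.1e}")  # ~3e8 with these assumed numbers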

Implications for fundamental and applied sciences
With their spiderweb-based sensor, the researchers show how this interdisciplinary strategy opens a path to new breakthroughs in science by combining bio-inspired designs, machine learning and nanotechnology. This novel paradigm has interesting implications for quantum internet, sensing, microchip technologies and fundamental physics: exploring ultra-small forces, for example, like gravity or dark matter, which are notoriously difficult to measure. According to the researchers, the discovery would not have been possible without the university’s Cohesion grant, which led to this collaboration between nanotechnology and machine learning.

Here’s a link to and a citation for the paper,

Spiderweb Nanomechanical Resonators via Bayesian Optimization: Inspired by Nature and Guided by Machine Learning by Dongil Shin, Andrea Cupertino, Matthijs H. J. de Jong, Peter G. Steeneken, Miguel A. Bessa, Richard A. Norte. Advanced Materials, Volume 34, Issue 3, January 20, 2022, 2106248 DOI: https://doi.org/10.1002/adma.202106248 First published (online): 25 October 2021

This paper is open access.

If spiderwebs can be sensors, can they also think?

It’s called ‘extended cognition’ or the ‘extended mind thesis’ (Wikipedia entry) and the theory holds that the mind is not solely in the brain or even in the body. Predictably, the theory has both its supporters and critics, as noted in Joshua Sokol’s article “The Thoughts of a Spiderweb,” originally published on May 22, 2017 in Quanta Magazine (Note: Links have been removed),

Millions of years ago, a few spiders abandoned the kind of round webs that the word “spiderweb” calls to mind and started to focus on a new strategy. Before, they would wait for prey to become ensnared in their webs and then walk out to retrieve it. Then they began building horizontal nets to use as a fishing platform. Now their modern descendants, the cobweb spiders, dangle sticky threads below, wait until insects walk by and get snagged, and reel their unlucky victims in.

In 2008, the researcher Hilton Japyassú prompted 12 species of orb spiders collected from all over Brazil to go through this transition again. He waited until the spiders wove an ordinary web. Then he snipped its threads so that the silk drooped to where crickets wandered below. When a cricket got hooked, not all the orb spiders could fully pull it up, as a cobweb spider does. But some could, and all at least began to reel it in with their two front legs.

Their ability to recapitulate the ancient spiders’ innovation got Japyassú, a biologist at the Federal University of Bahia in Brazil, thinking. When the spider was confronted with a problem to solve that it might not have seen before, how did it figure out what to do? “Where is this information?” he said. “Where is it? Is it in her head, or does this information emerge during the interaction with the altered web?”

In February [2017], Japyassú and Kevin Laland, an evolutionary biologist at the University of St Andrews, proposed a bold answer to the question. They argued in a review paper, published in the journal Animal Cognition, that a spider’s web is at least an adjustable part of its sensory apparatus, and at most an extension of the spider’s cognitive system.

This would make the web a model example of extended cognition, an idea first proposed by the philosophers Andy Clark and David Chalmers in 1998 to apply to human thought. In accounts of extended cognition, processes like checking a grocery list or rearranging Scrabble tiles in a tray are close enough to memory-retrieval or problem-solving tasks that happen entirely inside the brain that proponents argue they are actually part of a single, larger, “extended” mind.

Among philosophers of mind, that idea has racked up citations, including supporters and critics. And by its very design, Japyassú’s paper, which aims to export extended cognition as a testable idea to the field of animal behavior, is already stirring up antibodies among scientists. …

It seems there is no definitive answer to the question of whether there is an ‘extended mind’ but it’s an intriguing question made (in my opinion) even more so with the spiderweb-inspired sensors from TU Delft.

Save energy with neuromorphic (brainlike) hardware

It seems the appetite for computing power is bottomless, which presents a problem in a world where energy resources are increasingly constrained. A May 24, 2022 news item on ScienceDaily announces research into neuromorphic computing which hints that the energy efficiency long promised by the technology may be realized in the foreseeable future,

For the first time, TU Graz’s [Graz University of Technology; Austria] Institute of Theoretical Computer Science and Intel Labs demonstrated experimentally that a large neural network can process sequences such as sentences while consuming four to sixteen times less energy on neuromorphic hardware than on non-neuromorphic hardware. The new research is based on Intel Labs’ Loihi neuromorphic research chip, which draws on insights from neuroscience to create chips that function similarly to the biological brain.

Rich Uhlig, managing director of Intel Labs, holds one of Intel’s Nahuku boards, each of which contains 8 to 32 Intel Loihi neuromorphic chips. Intel’s latest neuromorphic system, Pohoiki Beach, is made up of multiple Nahuku boards and contains 64 Loihi chips. Pohoiki Beach was introduced in July 2019. (Credit: Tim Herman/Intel Corporation)

A May 24, 2022 Graz University of Technology (TU Graz) press release (also on EurekAlert), which originated the news item, delves further into the research, Note: Links have been removed,

The research was funded by the Human Brain Project (HBP), one of the largest research projects in the world, with more than 500 scientists and engineers across Europe studying the human brain. The results of the research are published in the paper “Memory for AI Applications in Spike-based Neuromorphic Hardware” [sic] (DOI 10.1038/s42256-022-00480-w) in Nature Machine Intelligence.

Human brain as a role model

Smart machines and intelligent computers that can autonomously recognize and infer objects and relationships between different objects are the subjects of worldwide artificial intelligence (AI) research. Energy consumption is a major obstacle on the path to a broader application of such AI methods. It is hoped that neuromorphic technology will provide a push in the right direction. Neuromorphic technology is modelled after the human brain, which is highly efficient in using energy. To process information, its hundred billion neurons consume only about 20 watts, not much more energy than an average energy-saving light bulb.

In the research, the group focused on algorithms that work with temporal processes. For example, the system had to answer questions about a previously told story and grasp the relationships between objects or people from the context. The hardware tested consisted of 32 Loihi chips.

Loihi research chip: up to sixteen times more energy-efficient than non-neuromorphic hardware

“Our system is four to sixteen times more energy-efficient than other AI models on conventional hardware,” says Philipp Plank, a doctoral student at TU Graz’s Institute of Theoretical Computer Science. Plank expects further efficiency gains as these models are migrated to the next generation of Loihi hardware, which significantly improves the performance of chip-to-chip communication.

“Intel’s Loihi research chips promise to bring gains in AI, especially by lowering their high energy cost,” said Mike Davies, director of Intel’s Neuromorphic Computing Lab. “Our work with TU Graz provides more evidence that neuromorphic technology can improve the energy efficiency of today’s deep learning workloads by re-thinking their implementation from the perspective of biology.”

Mimicking human short-term memory

In their neuromorphic network, the group reproduced a presumed memory mechanism of the brain, as Wolfgang Maass, Philipp Plank’s doctoral supervisor at the Institute of Theoretical Computer Science, explains: “Experimental studies have shown that the human brain can store information for a short period of time even without neural activity, namely in so-called ‘internal variables’ of neurons. Simulations suggest that a fatigue mechanism of a subset of neurons is essential for this short-term memory.”

Direct proof is lacking because these internal variables cannot yet be measured, but if the mechanism is right, the network only needs to test which neurons are currently fatigued to reconstruct what information it has previously processed. In other words, previous information is stored in the non-activity of neurons, and non-activity consumes the least energy.

Symbiosis of recurrent and feed-forward network

The researchers link two types of deep learning networks for this purpose. Feedback neural networks are responsible for “short-term memory”: many such so-called recurrent modules filter out possibly relevant information from the input signal and store it. A feed-forward network then determines which of the relationships found are very important for solving the task at hand. Meaningless relationships are screened out, and the neurons fire only in those modules where relevant information has been found. This process ultimately leads to energy savings.
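
To make the ‘fatigue’ idea concrete, here's a minimal sketch of a spiking neuron whose firing threshold rises with each spike and decays back slowly. It's a generic adaptive leaky integrate-and-fire model with illustrative constants, not the actual network run on Loihi.

    import math

    class AdaptiveLIFNeuron:
        """Leaky integrate-and-fire neuron with a slow 'fatigue' variable."""
        def __init__(self, tau_v=20.0, tau_a=2000.0, v_th=1.0, beta=0.5):
            self.tau_v, self.tau_a = tau_v, tau_a  # ms, fast/slow time constants
            self.v_th, self.beta = v_th, beta      # base threshold, fatigue weight
            self.v = 0.0  # membrane potential (fast dynamics)
            self.a = 0.0  # fatigue: raised by each spike, decays over seconds

        def step(self, current, dt=1.0):
            self.v += (dt / self.tau_v) * (current - self.v)
            self.a *= math.exp(-dt / self.tau_a)
            if self.v >= self.v_th + self.beta * self.a:  # fatigued: harder to fire
                self.v = 0.0
                self.a += 1.0
                return 1  # spike
            return 0

    # The fatigue variable silently remembers recent activity: checking which
    # neurons are fatigued recovers what the network processed moments ago.
    n = AdaptiveLIFNeuron()
    spikes = [n.step(1.5) for _ in range(200)]
    print(sum(spikes), round(n.a, 2))  # firing slows as fatigue accumulates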

“Recurrent neural structures are expected to provide the greatest gains for applications running on neuromorphic hardware in the future,” said Davies. “Neuromorphic hardware like Loihi is uniquely suited to facilitate the fast, sparse and unpredictable patterns of network activity that we observe in the brain and need for the most energy efficient AI applications.”

This research was financially supported by Intel and the European Human Brain Project, which connects neuroscience, medicine, and brain-inspired technologies in the EU. For this purpose, the project is creating a permanent digital research infrastructure, EBRAINS. This research work is anchored in the Fields of Expertise Human and Biotechnology and Information, Communication & Computing, two of the five Fields of Expertise of TU Graz.

Here’s a link to and a citation for the paper,

A Long Short-Term Memory for AI Applications in Spike-based Neuromorphic Hardware by Arjun Rao, Philipp Plank, Andreas Wild & Wolfgang Maass. Nature Machine Intelligence (2022) DOI: https://doi.org/10.1038/s42256-022-00480-w Published: 19 May 2022

This paper is behind a paywall.

For anyone interested in the EBRAINS project, here’s a description from their About page,

EBRAINS provides digital tools and services which can be used to address challenges in brain research and brain-inspired technology development. Its components are designed with, by, and for researchers. The tools assist scientists to collect, analyse, share, and integrate brain data, and to perform modelling and simulation of brain function.

EBRAINS’ goal is to accelerate the effort to understand human brain function and disease.

This EBRAINS research infrastructure is the entry point for researchers to discover EBRAINS services. The services are being developed and powered by the EU-funded Human Brain Project.

You can register to use the EBRAINS research infrastructure HERE

One last note: the Human Brain Project is a major European Union (EU)-funded science initiative (1B euros) announced in 2013, with funding paid out over 10 years.

Simulating neurons and synapses with memristive devices

I’ve been meaning to get to this research on ‘neuromorphic memory’ for a while. From a May 20, 2022 news item on Nanowerk,

Researchers have reported a nano-sized neuromorphic memory device that emulates neurons and synapses simultaneously in a unit cell, another step toward completing the goal of neuromorphic computing designed to rigorously mimic the human brain with semiconductor devices.

Neuromorphic computing aims to realize artificial intelligence (AI) by mimicking the mechanisms of neurons and synapses that make up the human brain. Inspired by the cognitive functions of the human brain that current computers cannot provide, neuromorphic devices have been widely investigated.

However, current Complementary Metal-Oxide Semiconductor (CMOS)-based neuromorphic circuits simply connect artificial neurons and synapses without synergistic interactions, and the concomitant implementation of neurons and synapses still remains a challenge.

A May 20, 2022 Korea Advanced Institute of Science and Technology (KAIST) press release (also on EurekAlert), which originated the news item, delves further into the research,

To address these issues, a research team led by Professor Keon Jae Lee from the Department of Materials Science and Engineering implemented the biological working mechanisms of humans by introducing the neuron-synapse interactions in a single memory cell, rather than the conventional approach of electrically connecting artificial neuronal and synaptic devices.

The artificial synaptic devices studied previously were often used to accelerate parallel computations, similar to commercial graphics cards, which shows clear differences from the operational mechanisms of the human brain. The research team implemented the synergistic interactions between neurons and synapses in the neuromorphic memory device, emulating the mechanisms of the biological neural network. In addition, the developed neuromorphic device can replace complex CMOS neuron circuits with a single device, providing high scalability and cost efficiency.

The human brain consists of a complex network of 100 billion neurons and 100 trillion synapses. The functions and structures of neurons and synapses can flexibly change according to the external stimuli, adapting to the surrounding environment. The research team developed a neuromorphic device in which short-term and long-term memories coexist using volatile and non-volatile memory devices that mimic the characteristics of neurons and synapses, respectively. A threshold switch device is used as volatile memory and phase-change memory is used as a non-volatile device. Two thin-film devices are integrated without intermediate electrodes, implementing the functional adaptability of neurons and synapses in the neuromorphic memory.
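
Here's a toy sketch of how a volatile, self-decaying state and a non-volatile, persistent weight might cooperate in one unit cell, in the spirit of the short-term/long-term memory coexistence described above. The class, its thresholds and rates are all hypothetical; the paper's threshold switch and phase-change physics are far richer.

    import math

    class NeuroSynapticCell:
        """Toy unit cell: a volatile 'neuron-like' state plus a
        non-volatile 'synapse-like' weight. All values illustrative."""
        def __init__(self, tau_volatile=50.0):
            self.short = 0.0  # volatile: decays on its own (threshold switch)
            self.long = 0.0   # non-volatile: persists until reprogrammed (PCM)
            self.tau = tau_volatile

        def pulse(self, amplitude, dt=1.0):
            # Each stimulus adds to the decaying short-term state...
            self.short = self.short * math.exp(-dt / self.tau) + amplitude
            if self.short > 1.0:  # ...and strong/repeated activity consolidates
                self.long += 0.1  # into the lasting long-term weight
                self.short = 0.0
            return self.long

    cell = NeuroSynapticCell()
    for _ in range(30):
        cell.pulse(0.2)   # repeated stimulation consolidates the memory
    print(cell.long)      # > 0: survives even after 'short' has decayed away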

Professor Keon Jae Lee explained, “Neurons and synapses interact with each other to establish cognitive functions such as memory and learning, so simulating both is an essential element for brain-inspired artificial intelligence. The developed neuromorphic memory device also mimics the retraining effect that allows quick learning of the forgotten information by implementing a positive feedback effect between neurons and synapses.”

Here’s a link to and a citation for the paper,

Simultaneous emulation of synaptic and intrinsic plasticity using a memristive synapse by Sang Hyun Sung, Tae Jin Kim, Hyera Shin, Tae Hong Im & Keon Jae Lee. Nature Communications volume 13, Article number: 2811 (2022) DOI https://doi.org/10.1038/s41467-022-30432-2 Published 19 May 2022

This paper is open access.

Memristive control of mutual spin

It may be my imagination, but it seems I’m stumbling across more research on neuromorphic (brainlike) computing than usual this year. In May 2022 alone I stumbled across three items. Today (August 24, 2022), here’s a May 14, 2022 news item on Nanowerk describing some work from the University of Gothenburg (Sweden),

Artificial Intelligence (AI) is making it possible for machines to do things that were once considered uniquely human. With AI, computers can use logic to solve problems, make decisions, learn from experience and perform human-like tasks. However, they still cannot do this as effectively and energy efficiently as the human brain.

Research conducted with support from the EU-funded TOPSPIN and SpinAge projects has brought scientists a step closer to achieving this goal.

“Finding new ways of performing calculations that resemble the brain’s energy-efficient processes has been a major goal of research for decades,” observes Prof. Johan Åkerman of TOPSPIN project host University of Gothenburg, Sweden. “Cognitive tasks, like image and voice recognition, require significant computer power, and mobile applications, in particular, like mobile phones, drones and satellites, require energy efficient solutions,” continues Prof. Åkerman, who is also the founder and CEO of SpinAge project partner NanOsc, also in Sweden.

A May 13, 2022 CORDIS press release, which originated the news item, provides more detail,

The research team succeeded in combining a memory function and a calculation function in one component for the very first time. The achievement is described in their study published in the journal ‘Nature Materials’. The memory and calculation functions were combined by linking oscillator networks and memristors – the two main tools needed to carry out advanced calculations. Oscillators are described as oscillating circuits capable of performing calculations. Memristors, short for memory resistors, are electronic devices whose resistance can be programmed and remains stored. In other words, the memristor’s resistance performs a memory function, retaining the last value it was programmed to even after the device is powered off.
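
Here's a toy Python picture of the two ingredients working together: a pair of phase oscillators whose mutual coupling strength plays the role of a programmable (memristive) weight. It's a generic Kuramoto-style model chosen for illustration; it isn't the spin Hall nano-oscillator physics of the actual study.

    import math

    def simulate(K, steps=2000, dt=0.01):
        """Two slightly detuned phase oscillators with coupling strength K.
        K stands in for a programmable memristive weight."""
        freqs = [1.00, 1.05]  # slightly different natural frequencies
        phase = [0.0, 2.0]
        for _ in range(steps):
            pull = math.sin(phase[1] - phase[0])
            phase[0] += dt * (freqs[0] + K * pull)
            phase[1] += dt * (freqs[1] - K * pull)
        # The phase difference settles near a constant when the pair locks.
        return (phase[1] - phase[0]) % (2 * math.pi)

    print(simulate(K=0.0))  # uncoupled: phases drift apart (~3.0 rad here)
    print(simulate(K=0.5))  # coupling on: they synchronize (~0.05 rad)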

A major development

Prof. Åkerman comments on the discovery: “This is an important breakthrough because we show that it is possible to combine a memory function with a calculating function in the same component. These components work more like the brain’s energy-efficient neural networks, allowing them to become important building blocks in future, more brain-like computers.”

As reported in the news item, Prof. Åkerman believes this achievement will lead to the development of technologies that are faster, easier to use and less energy-consuming. Also, the fact that hundreds of components can fit into an area the size of a single bacterium could have a significant impact on smaller applications. “More energy-efficient calculations could lead to new functionality in mobile phones. An example is digital assistants like Siri or Google. Today, all processing is done by servers since the calculations require too much energy for the small size of a phone. If the calculations could instead be performed locally, on the actual phone, they could be done faster and easier without a need to connect to servers.”

Prof. Åkerman concludes: “The more energy-efficiently that cognitive calculations can be performed, the more applications become possible. That’s why our study really has the potential to advance the field.” The TOPSPIN (Topotronic multi-dimensional spin Hall nano-oscillator networks) and SpinAge (Weighted Spintronic-Nano-Oscillator-based Neuromorphic Computing System Assisted by laser for Cognitive Computing) projects end in 2024.

For more information, please see:
TOPSPIN project
SpinAge project

The University of Gothenburg first announced the research in a November 29, 2021 press release on EurekAlert,

Research has long strived to develop computers to work as energy efficiently as our brains. A study, led by researchers at the University of Gothenburg, has succeeded for the first time in combining a memory function with a calculation function in the same component. The discovery opens the way for more efficient technologies, everything from mobile phones to self-driving cars.

In recent years, computers have been able to tackle advanced cognitive tasks, like language and image recognition or displaying superhuman chess skills, thanks in large part to artificial intelligence (AI). At the same time, the human brain is still unmatched in its ability to perform tasks effectively and energy efficiently.

“Finding new ways of performing calculations that resemble the brain’s energy-efficient processes has been a major goal of research for decades. Cognitive tasks, like image and voice recognition, require significant computer power, and mobile applications, in particular, like mobile phones, drones and satellites, require energy efficient solutions,” says Johan Åkerman, professor of applied spintronics at the University of Gothenburg.

Important breakthrough
Working with a research team at Tohoku University, Åkerman led a study that has now taken an important step forward in achieving this goal. In the study, now published in the highly ranked journal Nature Materials, the researchers succeeded for the first time in linking the two main tools for advanced calculations: oscillator networks and memristors.

Åkerman describes oscillators as oscillating circuits that can perform calculations and that are comparable to human nerve cells. Memristors are programmable resistors that can also perform calculations and that have integrated memory. This makes them comparable to memory cells. Integrating the two is a major advancement by the researchers.

“This is an important breakthrough because we show that it is possible to combine a memory function with a calculating function in the same component. These components work more like the brain’s energy-efficient neural networks, allowing them to become important building blocks in future, more brain-like computers.”

Enables energy-efficient technologies
According to Johan Åkerman, the discovery will enable faster, easier to use and less energy consuming technologies in many areas. He feels that it is a huge advantage that the research team has successfully produced the components in an extremely small footprint: hundreds of components fit into an area equivalent to a single bacterium. This can be of particular importance in smaller applications like mobile phones.

“More energy-efficient calculations could lead to new functionality in mobile phones. An example is digital assistants like Siri or Google. Today, all processing is done by servers since the calculations require too much energy for the small size of a phone. If the calculations could instead be performed locally, on the actual phone, they could be done faster and easier without a need to connect to servers.”

He notes self-driving cars and drones as other examples of where more energy-efficient calculations could drive developments.

“The more energy-efficiently that cognitive calculations can be performed, the more applications become possible. That’s why our study really has the potential to advance the field.”

Here’s a link to and a citation for the paper,

Memristive control of mutual spin Hall nano-oscillator synchronization for neuromorphic computing by Mohammad Zahedinejad, Himanshu Fulara, Roman Khymyn, Afshin Houshang, Mykola Dvornik, Shunsuke Fukami, Shun Kanai, Hideo Ohno & Johan Åkerman. Nature Materials volume 21, pages 81–87 (2022) DOI: https://doi.org/10.1038/s41563-021-01153-6 First Published: 29 November 2021 Issue Date: January 2022

This paper is behind a paywall.

Protein wires for nanoelectronics

A February 24, 2022 news item on phys.org describes research into using proteins as electrical conductors,

Proteins are among the most versatile and ubiquitous biomolecules on earth. Nature uses them for everything from building tissues to regulating metabolism to defending the body against disease.

Now, a new study shows that proteins have other, largely unexplored capabilities. Under the right conditions, they can act as tiny, current-carrying wires, useful for a range of human-designed nanoelectronics.

….

A February 25, 2022 Arizona State University (ASU) news release (also on EurekAlert but published February 24, 2022), which originated the news item, delves further into the intricacies of nanoelectronics (Note: Links have been removed),

In new research appearing in the journal ACS Nano, Stuart Lindsay and his colleagues show that certain proteins can act as efficient electrical conductors. In fact, these tiny protein wires may have better conductance properties than similar nanowires composed of DNA [deoxyribonucleic acid], which have already met with considerable success for a host of human applications. 

Professor Lindsay directs the Biodesign Center for Single-Molecule Biophysics. He is also professor with ASU’s Department of Physics and the School of Molecular Sciences.

Just as in the case of DNA, proteins offer many attractive properties for nanoscale electronics including stability, tunable conductance and vast information storage capacity. Although proteins had traditionally been regarded as poor conductors of electricity, all that recently changed when Lindsay and his colleagues demonstrated that a protein poised between a pair of electrodes could act as an efficient conductor of electrons.

The new research examines the phenomenon of electron transport through proteins in greater detail. The study results establish that over long distances, protein nanowires display better conductance properties than chemically-synthesized nanowires specifically designed to be conductors. In addition, proteins are self-organizing and allow for atomic-scale control of their constituent parts.

Synthetically designed protein nanowires could give rise to new ultra-tiny electronics, with potential applications for medical sensing and diagnostics, nanorobots to carry out search and destroy missions against diseases or in a new breed of ultra-tiny computer transistors. Lindsay is particularly interested in the potential of protein nanowires for use in new devices to carry out ultra-fast DNA and protein sequencing, an area in which he has already made significant strides.

In addition to their role in nanoelectronic devices, charge transport reactions are crucial in living systems for processes including respiration, metabolism and photosynthesis. Hence, research into transport properties through designed proteins may shed new light on how such processes operate within living organisms.

While proteins have many of the benefits of DNA for nanoelectronics in terms of electrical conductance and self-assembly, the expanded alphabet of 20 amino acids used to construct them offers an enhanced toolkit for nanoarchitects like Lindsay, when compared with just four nucleotides making up DNA.

Transit Authority

Though electron transport has been a focus of considerable research, the nature of the flow of electrons through proteins has remained something of a mystery. Broadly speaking, the process can occur through electron tunneling, a quantum effect occurring over very short distances, or through the hopping of electrons along a peptide chain—in the case of proteins, a chain of amino acids.

One objective of the study was to determine which of these regimes seemed to be operating by making quantitative measurements of electrical conductance over different lengths of protein nanowire. The study also describes a mathematical model that can be used to calculate the molecular-electronic properties of proteins.

For the experiments, the researchers used protein segments in four nanometer increments, ranging from 4-20 nanometers in length. A gene was designed to produce these amino acid sequences from a DNA template, with the protein lengths then bonded together into longer molecules. A highly sensitive instrument known as a scanning tunneling microscope was used to make precise measurements of conductance as electron transport progressed through the protein nanowire.

The data show that conductance decreases over nanowire length in a manner consistent with hopping rather than tunneling behavior of the electrons. Specific aromatic amino acid residues (six tyrosines and one tryptophan in each corkscrew twist of the protein) help guide the electrons along their path from point to point like successive stations along a train route. “The electron transport is sort of like skipping a stone across water—the stone hasn’t got time to sink on each skip,” Lindsay says.
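
A quick way to see why the length series distinguishes the two regimes: tunneling conductance falls exponentially with length, while hopping falls only gently, roughly as 1/L. The sketch below uses assumed illustrative constants, not the paper's fitted values.

    import math

    # Compare how the two transport regimes would scale across the 4-20 nm
    # protein lengths used in the study. beta and both prefactors are
    # assumed illustrative values, not fitted to the actual data.
    beta = 1.0  # 1/nm, assumed tunneling decay constant
    for L in [4, 8, 12, 16, 20]:  # nm, the length increments in the study
        g_tunnel = math.exp(-beta * L)  # collapses by ~7 orders of magnitude
        g_hop = 1.0 / L                 # falls only ~5x: the gentler trend
        print(f"L={L:2d} nm  tunneling~{g_tunnel:.1e}  hopping~{g_hop:.3f}")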

Wire wonders

While the conductance values of the protein nanowires decreased over distance, they did so more gradually than with conventional molecular wires specifically designed to be efficient conductors.

When the protein nanowires exceeded six nanometers in length, their conductance outperformed molecular nanowires, opening the door to their use in many new applications. The fact that they can be subtly designed and altered with atomic scale control and self-assembled from a gene template permits fine-tuned manipulations that far exceed what can currently be achieved with conventional transistor design.

One exciting possibility is using such protein nanowires to connect other components in a new suite of nanomachines. For example, nanowires could be used to connect an enzyme known as a DNA polymerase to electrodes, resulting in a device that could potentially sequence an entire human genome at low cost in under an hour. A similar approach could allow the integration of proteasomes into nanoelectronic devices able to read amino acids for protein sequencing.

“We are beginning now to understand the electron transport in these proteins. Once you have quantitative calculations, not only do you have great molecular electronic components, but you have a recipe for designing them,” Lindsay says. “If you think of the SPICE program that electrical engineers use to design circuits, there’s a glimmer now that you could get this for protein electronics.”

Here’s a link to and a citation for the paper,

Electronic Transport in Molecular Wires of Precisely Controlled Length Built from Modular Proteins by Bintian Zhang, Eathen Ryan, Xu Wang, Weisi Song, and Stuart Lindsay. ACS Nano 2022, 16, 1, 1671–1680 DOI: https://doi.org/10.1021/acsnano.1c10830 Publication Date:January 14, 2022 Copyright © 2022 American Chemical Society

This paper is behind a paywall.

An ‘artificial brain’ and life-long learning

Talk of artificial brains (also known as, brainlike computing or neuromorphic computing) usually turns to memory fairly quickly. This February 3, 2022 news item on ScienceDaily does too although the focus is on how memory and forgetting affect the ability to learn,

When the human brain learns something new, it adapts. But when artificial intelligence learns something new, it tends to forget information it already learned.

As companies use more and more data to improve how AI recognizes images, learns languages and carries out other complex tasks, a paper publishing in Science this week shows a way that computer chips could dynamically rewire themselves to take in new data like the brain does, helping AI to keep learning over time.

“The brains of living beings can continuously learn throughout their lifespan. We have now created an artificial platform for machines to learn throughout their lifespan,” said Shriram Ramanathan, a professor in Purdue University’s [Indiana, US] School of Materials Engineering who specializes in discovering how materials could mimic the brain to improve computing.

Unlike the brain, which constantly forms new connections between neurons to enable learning, the circuits on a computer chip don’t change. A circuit that a machine has been using for years isn’t any different than the circuit that was originally built for the machine in a factory.

This is a problem for making AI more portable, such as for autonomous vehicles or robots in space that would have to make decisions on their own in isolated environments. If AI could be embedded directly into hardware rather than just running on software as AI typically does, these machines would be able to operate more efficiently.

A February 3, 2022 Purdue University news release (also on EurekAlert), which originated the news item, provides more technical detail about the work (Note: Links have been removed),

In this study, Ramanathan and his team built a new piece of hardware that can be reprogrammed on demand through electrical pulses. Ramanathan believes that this adaptability would allow the device to take on all of the functions that are necessary to build a brain-inspired computer.

“If we want to build a computer or a machine that is inspired by the brain, then correspondingly, we want to have the ability to continuously program, reprogram and change the chip,” Ramanathan said.

Toward building a brain in chip form

The hardware is a small, rectangular device made of a material called perovskite nickelate,  which is very sensitive to hydrogen. Applying electrical pulses at different voltages allows the device to shuffle a concentration of hydrogen ions in a matter of nanoseconds, creating states that the researchers found could be mapped out to corresponding functions in the brain.

When the device has more hydrogen near its center, for example, it can act as a neuron, a single nerve cell. With less hydrogen at that location, the device serves as a synapse, a connection between neurons, which is what the brain uses to store memory in complex neural circuits.

Through simulations of the experimental data, the Purdue team’s collaborators at Santa Clara University and Portland State University showed that the internal physics of this device creates a dynamic structure for an artificial neural network that is able to more efficiently recognize electrocardiogram patterns and digits compared to static networks. This neural network uses “reservoir computing,” which explains how different parts of a brain communicate and transfer information.
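
Since “reservoir computing” does a lot of work in that paragraph, here's a minimal echo state network in Python to show the idea: a fixed, dynamic recurrent layer echoes the input's recent history, and only a simple linear readout is trained. All sizes and scalings are generic illustrative choices, not the Purdue device's parameters.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 100                          # reservoir neurons
    W_in = rng.normal(size=N) * 0.5  # fixed (untrained) input weights
    W = rng.normal(size=(N, N)) * (0.9 / np.sqrt(N))  # fixed recurrent weights

    def run_reservoir(u):
        """Drive the reservoir with input sequence u; collect its states."""
        x, states = np.zeros(N), []
        for u_t in u:
            x = np.tanh(W @ x + W_in * u_t)  # dynamic, history-dependent state
            states.append(x.copy())
        return np.array(states)

    # Train only the linear readout, here to predict the input one step ahead.
    u = np.sin(np.linspace(0, 20 * np.pi, 500))
    X, y = run_reservoir(u)[:-1], u[1:]
    w_out = np.linalg.lstsq(X, y, rcond=None)[0]  # the only trained weights
    print(np.mean((X @ w_out - y) ** 2))          # small error on this toy task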

Researchers from The Pennsylvania State University also demonstrated in this study that as new problems are presented, a dynamic network can “pick and choose” which circuits are the best fit for addressing those problems.

Since the team was able to build the device using standard semiconductor-compatible fabrication techniques and operate the device at room temperature, Ramanathan believes that this technique can be readily adopted by the semiconductor industry.

“We demonstrated that this device is very robust,” said Michael Park, a Purdue Ph.D. student in materials engineering. “After programming the device over a million cycles, the reconfiguration of all functions is remarkably reproducible.”

The researchers are working to demonstrate these concepts on large-scale test chips that would be used to build a brain-inspired computer.

Experiments at Purdue were conducted at the FLEX Lab and Birck Nanotechnology Center of Purdue’s Discovery Park. The team’s collaborators at Argonne National Laboratory, the University of Illinois, Brookhaven National Laboratory and the University of Georgia conducted measurements of the device’s properties.

Here’s a link to and a citation for the paper,

Reconfigurable perovskite nickelate electronics for artificial intelligence by Hai-Tian Zhang, Tae Joon Park, A. N. M. Nafiul Islam, Dat S. J. Tran, Sukriti Manna, Qi Wang, Sandip Mondal, Haoming Yu, Suvo Banik, Shaobo Cheng, Hua Zhou, Sampath Gamage, Sayantan Mahapatra, Yimei Zhu, Yohannes Abate, Nan Jiang, Subramanian K. R. S. Sankaranarayanan, Abhronil Sengupta, Christof Teuscher, Shriram Ramanathan. Science • 3 Feb 2022 • Vol 375, Issue 6580 • pp. 533-539 • DOI: 10.1126/science.abj7943

This paper is behind a paywall.

2D materials for a computer’s artificial brain synapses

A January 28, 2022 news item on Nanowerk describes for some of the latest work on hardware that could enable neuromorphic (brainlike) computing. Note: A link has been removed,

Researchers from KTH Royal Institute of Technology [Sweden] and Stanford University [US] have fabricated a material for computer components that could make computers that mimic the human brain commercially viable (Advanced Functional Materials, “High-Speed Ionic Synaptic Memory Based on 2D Titanium Carbide MXene”).

A January 31, 2022 KTH Royal Institute of Technology press release (also on EurekAlert but published January 28, 2022), which originated the news item, delves further into the research,

Electrochemical random access memory (ECRAM) components made with 2D titanium carbide showed outstanding potential for complementing classical transistor technology, and for contributing toward the commercialization of powerful computers that are modeled after the brain’s neural network. Such neuromorphic computers can be thousands of times more energy efficient than today’s computers.

These advances in computing are possible because of some fundamental differences from the classic computing architecture in use today, and the ECRAM, a component that acts as a sort of synaptic cell in an artificial neural network, says KTH Associate Professor Max Hamedi.

“Instead of transistors that are either on or off, and the need for information to be carried back and forth between the processor and memory—these new computers rely on components that can have multiple states, and perform in-memory computation,” Hamedi says.
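
Here's a small numpy sketch of what “multiple states plus in-memory computation” buys you: store a weight matrix as a grid of programmable conductances, and the array's output currents are the matrix-vector product, with no shuttling between processor and memory. The 16-level quantization and all sizes are illustrative assumptions, not ECRAM specifications.

    import numpy as np

    rng = np.random.default_rng(1)
    weights = rng.uniform(-1, 1, size=(4, 8))  # target synaptic weights
    levels = np.linspace(-1, 1, 16)            # 16 programmable cell states

    # "Program" each cell to the nearest available conductance level.
    G = levels[np.abs(weights[..., None] - levels).argmin(-1)]

    v = rng.uniform(0, 1, size=8)  # input voltages applied to the rows
    i_out = G @ v                  # output currents ARE the dot products
    print(np.max(np.abs(i_out - weights @ v)))  # small quantization error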

The scientists at KTH and Stanford have focused on testing better materials for building an ECRAM, a component in which switching occurs by inserting ions into an oxidation channel, in a sense similar to our brain which also works with ions. What has been needed to make these chips commercially viable are materials that overcome the slow kinetics of metal oxides and the poor temperature stability of plastics.                   

The key material in the ECRAM units that the researchers fabricated is referred to as MXene—a two-dimensional (2D) compound, barely a few atoms thick, consisting of titanium carbide (Ti3C2Tx). The MXene combines the high speed of organic chemistry with the integration compatibility of inorganic materials in a single device operating at the nexus of electrochemistry and electronics, Hamedi says.

Co-author Professor Alberto Salleo at Stanford University, says that MXene ECRAMs combine the speed, linearity, write noise, switching energy, and endurance metrics essential for parallel acceleration of artificial neural networks.

“MXenes are an exciting materials family for this particular application as they combine the temperature stability needed for integration with conventional electronics with the availability of a vast composition space to optimize performance,” Salleo says.

While there are many other barriers to overcome before consumers can buy their own neuromorphic computers, Hamedi says the 2D ECRAMs represent a breakthrough, at least in the area of neuromorphic materials, potentially leading to artificial intelligence that can adapt to confusing input and nuance the way the brain does, with thousands of times smaller energy consumption. This could also enable portable devices capable of much heavier computing tasks without having to rely on the cloud.

Here’s a link to and a citation for the paper,

High-Speed Ionic Synaptic Memory Based on 2D Titanium Carbide MXene by Armantas Melianas, Min-A Kang, Armin VahidMohammadi, Tyler James Quill, Weiqian Tian, Yury Gogotsi, Alberto Salleo, Mahiar Max Hamedi. Advanced Functional Materials DOI: https://doi.org/10.1002/adfm.202109970 First published: 21 November 2021

This paper is open access.

A graphene-inorganic-hybrid micro-supercapacitor made of fallen leaves

I wonder if this means the end to leaf blowers. That is almost certainly wishful thinking as the researchers don’t seem to be concerned with how the leaves are gathered.

The schematic illustration of the production of femtosecond laser-induced graphene. Courtesy of KAIST

A January 27, 2022 news item on Nanowerk announces the work (Note: A link has been removed),

A KAIST [Korea Advanced Institute of Science and Technology] research team has developed graphene-inorganic-hybrid micro-supercapacitors made of fallen leaves using femtosecond laser direct writing (Advanced Functional Materials, “Green Flexible Graphene-Inorganic-Hybrid Micro-Supercapacitors Made of Fallen Leaves Enabled by Ultrafast Laser Pulses”).

A January 27, 2022 KAIST press release (also on EurekAlert but published January 26, 2022), which originated the news item, delves further into the research,

The rapid development of wearable electronics requires breakthrough innovations in flexible energy storage devices in which micro-supercapacitors have drawn a great deal of interest due to their high power density, long lifetimes, and short charging times. Recently, there has been an enormous increase in waste batteries owing to the growing demand and the shortened replacement cycle in consumer electronics. The safety and environmental issues involved in the collection, recycling, and processing of such waste batteries are creating a number of challenges.

Forests cover about 30 percent of the Earth’s surface and produce a huge amount of fallen leaves. This naturally occurring biomass comes in large quantities and is completely biodegradable, which makes it an attractive sustainable resource. Nevertheless, if the fallen leaves are left neglected instead of being used efficiently, they can contribute to fire hazards, air pollution, and global warming.

To solve both problems at once, a research team led by Professor Young-Jin Kim from the Department of Mechanical Engineering and Dr. Hana Yoon from the Korea Institute of Energy Research developed a novel technology that can create 3D porous graphene microelectrodes with high electrical conductivity by irradiating femtosecond laser pulses on the leaves in ambient air. This one-step fabrication does not require any additional materials or pre-treatment. 

They showed that this technique could quickly and easily produce porous graphene electrodes at a low price, and demonstrated potential applications by fabricating graphene micro-supercapacitors to power an LED and an electronic watch. These results open up a new possibility for the mass production of flexible and green graphene-based electronic devices.
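
As a sanity check on the LED demonstration, here's the standard back-of-the-envelope arithmetic for energy stored in a capacitor. The capacitance, voltage and LED power below are assumed illustrative values, not figures reported by the KAIST team.

    # Energy stored in a capacitor: E = (1/2) * C * V^2
    C = 5e-3  # F, assumed capacitance of the micro-supercapacitor
    V = 2.5   # V, assumed charging voltage
    P = 5e-3  # W, assumed draw of a small indicator LED

    energy = 0.5 * C * V ** 2
    print(f"{energy:.3f} J stored, ~{energy / P:.0f} s of LED runtime")
    # ~0.016 J, about 3 seconds of light with these assumed numbers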

Professor Young-Jin Kim said, “Leaves create forest biomass that comes in unmanageable quantities, so using them for next-generation energy storage devices makes it possible for us to reuse waste resources, thereby establishing a virtuous cycle.” 

This research was published in Advanced Functional Materials last month and was sponsored by the Ministry of Agriculture Food and Rural Affairs, the Korea Forest Service, and the Korea Institute of Energy Research.

Here’s a link to and a citation for the paper,

Green Flexible Graphene–Inorganic-Hybrid Micro-Supercapacitors Made of Fallen Leaves Enabled by Ultrafast Laser Pulses by Truong-Son Dinh Le, Yeong A. Lee, Han Ku Nam, Kyu Yeon Jang, Dongwook Yang, Byunggi Kim, Kanghoon Yim, Seung-Woo Kim, Hana Yoon, Young-Jin Kim. Advanced Functional Materials DOI: https://doi.org/10.1002/adfm.202107768 First published: 05 December 2021

This paper is behind a paywall.