Tag Archives: neuromorphic computing

Dynamic magnetic fractal networks for neuromorphic (brainlike) computing

Credit: Advanced Materials (2023). DOI: 10.1002/adma.202300416 [cover image]

A different approach to neuromorphic (brainlike) computing is described in an August 28, 2023 news item on phys.org. Note: A link has been removed,

The word “fractals” might inspire images of psychedelic colors spiraling into infinity in a computer animation. An invisible, but powerful and useful, version of this phenomenon exists in the realm of dynamic magnetic fractal networks.

Dustin Gilbert, assistant professor in the Department of Materials Science and Engineering [University of Tennessee, US], and colleagues have published new findings in the behavior of these networks—observations that could advance neuromorphic computing capabilities.

Their research is detailed in their article “Skyrmion-Excited Spin-Wave Fractal Networks,” cover story for the August 17, 2023, issue of Advanced Materials.

An August 18, 2023 University of Tennessee news release, which originated the news item, provides more details,

“Most magnetic materials—like in refrigerator magnets—are just comprised of domains where the magnetic spins all orient parallel,” said Gilbert. “Almost 15 years ago, a German research group discovered these special magnets where the spins make loops—like a nanoscale magnetic lasso. These are called skyrmions.”

Named for legendary particle physicist Tony Skyrme, a skyrmion’s magnetic swirl gives it a non-trivial topology. As a result of this topology, the skyrmion has particle-like properties—they are hard to create or destroy, they can move and even bounce off of each other. The skyrmion also has dynamic modes—they can wiggle, shake, stretch, whirl, and breath[e].

As the skyrmions “jump and jive,” they are creating magnetic spin waves with a very narrow wavelength. The interactions of these waves form an unexpected fractal structure.

“Just like a person dancing in a pool of water, they generate waves which ripple outward,” said Gilbert. “Many people dancing make many waves, which normally would seem like a turbulent, chaotic sea. We measured these waves and showed that they have a well-defined structure and collectively form a fractal which changes trillions of times per second.”

Fractals are important and interesting because they are inherently tied to a “chaos effect”—small changes in initial conditions lead to big changes in the fractal network.
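
To make that sensitivity concrete, here’s a minimal sketch in Python (my own generic illustration, not anything from the paper) using the logistic map, a textbook chaotic system. Two trajectories that start within one part in a million of each other diverge completely within a few dozen steps,

```python
# Sensitivity to initial conditions: iterate the logistic map from two
# starting points that differ by only one part in a million.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)  # fully chaotic at r = 4

x, y = 0.200000, 0.200001
for _ in range(50):
    x, y = logistic(x), logistic(y)

print(abs(x - y))  # typically of order 0.1 to 1: the tiny difference has blown up
```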

“Where we want to go with this is that if you have a skyrmion lattice and you illuminate it with spin waves, the way the waves make [their] way through this fractal-generating structure is going to depend very intimately on its construction,” said Gilbert. “So, if you could write individual skyrmions, it can effectively process incoming spin waves into something on the backside—and it’s programmable. It’s a neuromorphic architecture.”

The Advanced Materials cover illustration [image at top of this posting] depicts a visual representation of this process, with the skyrmions floating on top of a turbulent blue sea illustrative of the chaotic structure generated by the spin wave fractal.

“Those waves interfere just like if you throw a handful of pebbles into a pond,” said Gilbert. “You get a choppy, turbulent mess. But it’s not just any simple mess, it’s actually a fractal. We have an experiment now showing that the spin waves generated by skyrmions aren’t just a mess of waves, they have inherent structure of their very own. By, essentially, controlling those stones that we ‘throw in,’ you get very different patterns, and that’s what we’re driving towards.”

The discovery was made in part by neutron scattering experiments at the Oak Ridge National Laboratory (ORNL) High Flux Isotope Reactor and at the National Institute of Standards and Technology (NIST) Center for Neutron Research. Neutrons are magnetic and pass through materials easily, making them ideal probes for studying materials with complex magnetic behavior such as skyrmions and other quantum phenomena.

Gilbert’s co-authors for the new article are Nan Tang, Namila Liyanage, and Liz Quigley, students in his research group; Alex Grutter and Julie Borchers from National Institute of Standards and Technology (NIST), Lisa DeBeer-Schmidt and Mike Fitzsimmons from Oak Ridge National Laboratory; and Eric Fullerton, Sheena Patel, and Sergio Montoya from the University of California, San Diego.

The team’s next step is to build a working model using the skyrmion behavior.

“If we can develop thinking computers, that, of course, is extraordinarily important,” said Gilbert. “So, we will propose to make a miniaturized, spin wave neuromorphic architecture.” He also hopes that the ripples from this UT Knoxville discovery inspire researchers to explore uses for a spiraling range of future applications.

Here’s a link to and a citation for the paper,

Skyrmion-Excited Spin-Wave Fractal Networks by Nan Tang, W. L. N. C. Liyanage, Sergio A. Montoya, Sheena Patel, Lizabeth J. Quigley, Alexander J. Grutter, Michael R. Fitzsimmons, Sunil Sinha, Julie A. Borchers, Eric E. Fullerton, Lisa DeBeer-Schmitt, Dustin A. Gilbert. Advanced Materials Volume 35, Issue 33 August 17, 2023 2300416 DOI: https://doi.org/10.1002/adma.202300416 First published: 04 May 2023

This paper is behind a paywall.

IBM’s neuromorphic chip, a prototype and more

It seems IBM is very excited about neuromorphic computing. First, there’s an August 10, 2023 news article by Shiona McCallum & Chris Vallance for British Broadcasting Corporation (BBC) online news,

Concerns have been raised about emissions associated with warehouses full of computers powering AI systems.

IBM said its prototype could lead to more efficient, less battery draining AI chips for smartphones.

Its efficiency is down to components that work in a similar way to connections in human brains, it said.

Compared to traditional computers, “the human brain is able to achieve remarkable performance while consuming little power”, said scientist Thanos Vasilopoulos, based at IBM’s research lab in Zurich, Switzerland.

I sense a memristor about to be mentioned, from McCallum & Vallance’s August 10, 2023 news article,

Most chips are digital, meaning they store information as 0s and 1s, but the new chip uses components called memristors [memory resistors] that are analogue and can store a range of numbers.

You can think of the difference between digital and analogue as like the difference between a light switch and a dimmer switch.

The human brain is analogue, and the way memristors work is similar to the way synapses in the brain work.

Prof Ferrante Neri, from the University of Surrey, explains that memristors fall into the realm of what you might call nature-inspired computing that mimics brain function.

A memristor could “remember” its electric history, in a similar way to a synapse in a biological system.

“Interconnected memristors can form a network resembling a biological brain,” he said.

He was cautiously optimistic about the future for chips using this technology: “These advancements suggest that we may be on the cusp of witnessing the emergence of brain-like chips in the near future.”

However, he warned that developing a memristor-based computer is not a simple task and that there would be a number of challenges ahead for widespread adoption, including the costs of materials and manufacturing difficulties.
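
To give the dimmer switch analogy a concrete form, here’s a toy model in Python (my own simplification, not IBM’s design or anyone’s real device physics). The conductance is nudged up or down by voltage pulses and simply retains whatever intermediate value it last held, which is exactly what a binary bit can’t do,

```python
# Toy memristor: the conductance moves up or down with each voltage pulse
# and is retained between pulses (non-volatile), so the device stores a
# continuum of values rather than just 0 or 1.
class ToyMemristor:
    def __init__(self, g=0.5, g_min=0.0, g_max=1.0, step=0.05):
        self.g, self.g_min, self.g_max, self.step = g, g_min, g_max, step

    def pulse(self, polarity):
        """Apply one write pulse; polarity +1 nudges the conductance up, -1 down."""
        self.g = min(self.g_max, max(self.g_min, self.g + polarity * self.step))

    def read(self, v=0.1):
        """Read with a small voltage; current = G * V (Ohm's law), state undisturbed."""
        return self.g * v

m = ToyMemristor()
for _ in range(4):
    m.pulse(+1)       # four potentiating pulses
print(round(m.g, 2))  # 0.7, an intermediate analog value the device "remembers"
```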

Neri is most likely aware that researchers have been excited that ‘green’ computing could be made possible by memristors since at least 2008 (see my May 9, 2008 posting “Memristors and green energy“).

As it turns out, IBM published two studies on neuromorphic chips in August 2023.

The first study (mentioned in the BBC article) is also described in an August 22, 2023 article by Peter Grad for Tech Xplore. This one is a little more technical than the BBC article,

For those who are truly technical, here’s a link to and a citation for the paper,

A 64-core mixed-signal in-memory compute chip based on phase-change memory for deep neural network inference by Manuel Le Gallo, Riduan Khaddam-Aljameh, Milos Stanisavljevic, Athanasios Vasilopoulos, Benedikt Kersting, Martino Dazzi, Geethan Karunaratne, Matthias Brändli, Abhairaj Singh, Silvia M. Müller, Julian Büchel, Xavier Timoneda, Vinay Joshi, Malte J. Rasch, Urs Egger, Angelo Garofalo, Anastasios Petropoulos, Theodore Antonakopoulos, Kevin Brew, Samuel Choi, Injo Ok, Timothy Philip, Victor Chan, Claire Silvestre, Ishtiaq Ahsan, Nicole Saulnier, Pier Andrea Francese, Evangelos Eleftheriou & Abu Sebastian. Nature Electronics (2023) DOI: https://doi.org/10.1038/s41928-023-01010-1 Published: 10 August 2023

This paper is behind a paywall.

Before getting to the second paper, there’s an August 23, 2023 IBM blog post by Mike Murphy announcing its publication in Nature, Note: Links have been removed,

Although we’re still just at the precipice of the AI revolution, artificial intelligence has already begun to revolutionize the way we live and work. There’s just one problem: AI technology is incredibly power-hungry. By some estimates, running a large AI model generates more emissions over its lifetime than the average American car.

The future of AI requires new innovations in energy efficiency, from the way models are designed down to the hardware that runs them. And in a world that’s increasingly threatened by climate change, any advances in AI energy efficiency are essential to keep pace with AI’s rapidly expanding carbon footprint.

And one of the latest breakthroughs in AI efficiency from IBM Research relies on analog chips — ones that consume much less power. In a paper published in Nature today, researchers from IBM labs around the world presented their prototype analog AI chip for energy-efficient speech recognition and transcription. Their design was utilized in two AI inference experiments, and in both cases, the analog chips performed these tasks just as reliably as comparable all-digital devices — but finished the tasks faster and used less energy.

The concept of designing analog chips for AI inference is not new — researchers have been contemplating the idea for years. Back in 2021, a team at IBM developed chips that use phase-change memory (PCM) to encode the weights of a neural network directly onto the physical chip. PCM works when an electrical pulse is applied to a material, which changes the conductance of the device. The material switches between amorphous and crystalline phases: a lower electrical pulse will make the device more crystalline, providing less resistance, while a high enough electrical pulse makes the device amorphous, resulting in large resistance. Instead of recording the usual 0s or 1s you would see in digital systems, the PCM device records its state as a continuum of values between the amorphous and crystalline states. This value is called a synaptic weight, which can be stored in the physical atomic configuration of each PCM device. The memory is non-volatile, so the weights are retained when the power supply is switched off. But previous research in the field hasn’t shown how chips like these could be used on the massive models we see dominating the AI landscape today. For example, GPT-3, one of the larger popular models, has 175 billion parameters, or weights.
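
For anyone who wants to see what ‘encoding weights as a continuum of conductance states’ amounts to, here’s a simplified sketch in Python. It’s my own illustration (real PCM arrays typically use differential device pairs and calibration, none of which appears here), with an assumed conductance window standing in for the amorphous-to-crystalline range,

```python
# Sketch: store an analog synaptic weight as a PCM conductance lying between
# the high-resistance amorphous state and the low-resistance crystalline state.
# The conductance window is an assumed, illustrative range (in siemens).
G_AMORPHOUS, G_CRYSTALLINE = 0.1e-6, 25.0e-6

def weight_to_conductance(w, w_max=1.0):
    """Map a weight in [-w_max, w_max] linearly onto the conductance window."""
    frac = (w + w_max) / (2 * w_max)
    return G_AMORPHOUS + frac * (G_CRYSTALLINE - G_AMORPHOUS)

def conductance_to_weight(g, w_max=1.0):
    """Invert the mapping when the device is read back with a small voltage."""
    frac = (g - G_AMORPHOUS) / (G_CRYSTALLINE - G_AMORPHOUS)
    return frac * 2 * w_max - w_max

g = weight_to_conductance(0.42)  # programmed once; retained with the power off
print(round(conductance_to_weight(g), 2))  # 0.42
```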

Murphy also explains the difference (for amateurs like me) between this work and the earlier published study, from the August 23, 2023 IBM blog post, Note: Links have been removed,

Natural-language tasks aren’t the only AI problems that analog AI could solve — IBM researchers are working on a host of other uses. In a paper published earlier this month in Nature Electronics, the team showed it was possible to use an energy-efficient analog chip design for scalable mixed-signal architecture that can achieve high accuracy in the CIFAR-10 image dataset for computer vision image recognition.

These chips were conceived and designed by IBM researchers in the Tokyo, Zurich, Yorktown Heights, New York, and Almaden, California labs, and built by an external fabrication company. The phase change memory and metal levels were processed and validated at IBM Research’s lab in the Albany Nanotech Complex.

If you were to combine the benefits of the work published today in Nature, such as large arrays and parallel data-transport, with the capable digital compute-blocks of the chip shown in the Nature Electronics paper, you would see many of the building blocks needed to realize the vision of a fast, low-power analog AI inference accelerator. And pairing these designs with hardware-resilient training algorithms, the team expects these AI devices to deliver the software equivalent of neural network accuracies for a wide range of AI models in the future.

Here’s a link to and a citation for the second paper,

An analog-AI chip for energy-efficient speech recognition and transcription by S. Ambrogio, P. Narayanan, A. Okazaki, A. Fasoli, C. Mackin, K. Hosokawa, A. Nomura, T. Yasuda, A. Chen, A. Friz, M. Ishii, J. Luquin, Y. Kohda, N. Saulnier, K. Brew, S. Choi, I. Ok, T. Philip, V. Chan, C. Silvestre, I. Ahsan, V. Narayanan, H. Tsai & G. W. Burr. Nature volume 620, pages 768–775 (2023) DOI: https://doi.org/10.1038/s41586-023-06337-5 Published: 23 August 2023 Issue Date: 24 August 2023

This paper is open access.

Neuromorphic transistor with electric double layer

It may be my imagination but it seems as if neuromorphic (brainlike) engineering research has really taken off in the last few years and, even with my lazy approach to finding articles, I’m having trouble keeping up.

This latest work comes from Japan according to an August 4, 2023 news item on Nanowerk, Note: A link has been removed,

A research team consisting of NIMS [National Institute for Materials Science] and the Tokyo University of Science has developed the fastest electric double layer transistor using a highly ion conductive ceramic thin film and a diamond thin film. This transistor may be used to develop energy-efficient, high-speed edge AI devices with a wide range of applications, including future event prediction and pattern recognition/determination in images (including facial recognition), voices and odors.

The research was published in Materials Today Advances (“Ultrafast-switching of an all-solid-state electric double layer transistor with a porous yttria-stabilized zirconia proton conductor and the application to neuromorphic computing”).

A July 7, 2023 National Institute for Materials Science press release (also on EurekAlert but published August 3, 2023), which originated the news item, is arranged as a numbered list of points, the first point being the first paragraph in the news release/item,

2. An electric double layer transistor works as a switch using electrical resistance changes caused by the charge and discharge of an electric double layer formed at the interface between the electrolyte and semiconductor. Because this transistor is able to mimic the electrical response of human cerebral neurons (i.e., acting as a neuromorphic transistor), its use in AI devices is potentially promising. However, existing electric double layer transistors are slow in switching between on and off states. The typical transition time ranges from several hundreds of microseconds to 10 milliseconds. Development of faster electric double layer transistors is therefore desirable.

3. This research team developed an electric double layer transistor by depositing ceramic (yttria-stabilized porous zirconia thin film) and diamond thin films with a high degree of precision using a pulsed laser, forming an electric double layer at the ceramic/diamond interface. The zirconia thin film is able to adsorb large amounts of water into its nanopores and allow hydrogen ions from the water to readily migrate through it, enabling the electric double layer to be rapidly charged and discharged. This electric double layer effect enables the transistor to operate very quickly. The team actually measured the speed at which the transistor operates by applying pulsed voltage to it and found that it operates 8.5 times faster than existing electric double layer transistors, setting a new world record. The team also confirmed the ability of this transistor to convert input waveforms into many different output waveforms with precision—a prerequisite for transistors to be compatible with neuromorphic AI devices.

4. This research project produced a new ceramic thin film technology capable of rapidly charging and discharging an electric double layer several nanometers in thickness. This is a major achievement in efforts to create practical, high-speed, energy-efficient AI-assisted devices. These devices, in combination with various sensors (e.g., smart watches, surveillance cameras and audio sensors), are expected to offer useful tools in various industries, including medicine, disaster prevention, manufacturing and security.
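
The switching mechanism described in point 2, charging and discharging an electric double layer, can be pictured as a resistor-capacitor circuit. Here’s a toy model in Python (my own, with invented numbers rather than the NIMS team’s measurements) showing how a more ion-conductive film, by lowering the effective resistance, shortens the on/off transition in direct proportion,

```python
import math

# Toy model: the electric double layer charges like an RC circuit and the
# channel switches once the interfacial charge crosses a threshold.
# All parameter values are invented for illustration.
def time_to_switch(r_ion, c_dl, threshold=0.9):
    """RC charging: q(t)/q_max = 1 - exp(-t/(R*C)); solve for the threshold time."""
    return -r_ion * c_dl * math.log(1.0 - threshold)

slow = time_to_switch(r_ion=1e4, c_dl=1e-10)        # sluggish ion transport
fast = time_to_switch(r_ion=1e4 / 8.5, c_dl=1e-10)  # 8.5x more conductive electrolyte

print(f"slow: {slow * 1e6:.2f} us, fast: {fast * 1e6:.2f} us")  # 2.30 us vs 0.27 us
```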

Here’s a link to and a citation for the paper,

Ultrafast-switching of an all-solid-state electric double layer transistor with a porous yttria-stabilized zirconia proton conductor and the application to neuromorphic computing by Makoto Takayanagi, Daiki Nishioka, Takashi Tsuchiya, Masataka Imura, Yasuo Koide, Tohru Higuchi, and Kazuya Terabe. Materials Today Advances [June 16, 2023]; DOI: 10.1016/j.mtadv.2023.10039

This paper is open access.

10 years of the European Union’s roll of the dice: €1B or 1 billion euros each for the Human Brain Project (HBP) and the Graphene Flagship

“Graphene and Human Brain Project win biggest research award in history (& this is the 2000th post)” on January 28, 2013 was how I announced the results of a European Union (EU) competition that stretched out over several years and many stages as projects were evaluated and fell by the wayside or were allowed onto the next stage. The two finalists received €1B each, to be paid out over ten years.

Human Brain Project (HBP)

A September 12, 2023 Human Brain Project (HBP) press release (also on EurekAlert) summarizes the ten-year research effort and the achievements,

The EU-funded Human Brain Project (HBP) comes to an end in September and celebrates its successful conclusion today with a scientific symposium at Forschungszentrum Jülich (FZJ). The HBP was one of the first flagship projects and, with 155 cooperating institutions from 19 countries and a total budget of 607 million euros, one of the largest research projects in Europe. Forschungszentrum Jülich, with its world-leading brain research institute and the Jülich Supercomputing Centre, played an important role in the ten-year project.

“Understanding the complexity of the human brain and explaining its functionality are major challenges of brain research today”, says Astrid Lambrecht, Chair of the Board of Directors of Forschungszentrum Jülich. “The instruments of brain research have developed considerably in the last ten years. The Human Brain Project has been instrumental in driving this development – and not only gained new insights for brain research, but also provided important impulses for information technologies.”

HBP researchers have employed highly advanced methods from computing, neuroinformatics and artificial intelligence in a truly integrative approach to understanding the brain as a multi-level system. The project has contributed to a deeper understanding of the complex structure and function of the brain and enabled novel applications in medicine and technological advances.

Among the project’s highlight achievements are a three-dimensional, digital atlas of the human brain with unprecedented detail, personalised virtual models of patient brains with conditions like epilepsy and Parkinson’s, breakthroughs in the field of artificial intelligence, and an open digital research infrastructure – EBRAINS – that will remain an invaluable resource for the entire neuroscience community beyond the end of the HBP.

Researchers at the HBP have presented scientific results in over 3000 publications, as well as advanced medical and technical applications and over 160 freely accessible digital tools for neuroscience research.

“The Human Brain Project has a pioneering role for digital brain research with a unique interdisciplinary approach at the interface of neuroscience, computing and technology,” says Katrin Amunts, Director of the HBP and of the Institute for Neuroscience and Medicine at FZJ. “EBRAINS will continue to power this new way of investigating the brain and foster developments in brain medicine.”

“The impact of what you achieved in digital science goes beyond the neuroscientific community”, said Gustav Kalbe, CNECT, Acting Director of Digital Excellence and Science Infrastructures at the European Commission during the opening of the event. “The infrastructure that the Human Brain Project has established is already seen as a key building block to facilitate cooperation and research across geographical boundaries, but also across communities.”

Further information about the Human Brain Project as well as photos from research can be found here: https://fz-juelich.sciebo.de/s/hWJkNCC1Hi1PdQ5.

Results highlights and event photos in the online press release.

Results overviews:
– “Human Brain Project: Spotlights on major achievements” and “A closer Look on Scientific Advances”

– “Human Brain Project: An extensive guide to the tools developed”

Examples of results from the Human Brain Project:

As the “Google Maps of the brain” [emphasis mine], the Human Brain Project makes the most comprehensive digital brain atlas to date available to all researchers worldwide. The atlas by Jülich researchers and collaborators combines high-resolution data of neurons, fibre connections, receptors and functional specialisations in the brain, and is designed as a constantly growing system.

13 hospitals in France are currently testing the new “Virtual Epileptic Patient” – a platform developed at the University of Marseille [Aix-Marseille University?] in the Human Brain Project. It creates personalised simulation models of brain dynamics to provide surgeons with predictions for the success of different surgical treatment strategies. The approach was presented this year in the journals Science Translational Medicine and The Lancet Neurology.



SpiNNaker2 is a “neuromorphic” [brainlike] computer developed by the University of Manchester and TU Dresden within the Human Brain Project. The company SpiNNcloud Systems in Dresden is commercialising the approach for AI applications. (Image: Sprind.org)

As an openly accessible digital infrastructure, EBRAINS offers scientists easy access to the best techniques for complex research questions.

[https://www.ebrains.eu/]

There was a Canadian connection at one time; Montréal Neuro at Canada’s McGill University was involved in developing a computational platform for neuroscience (CBRAIN) for HBP according to an announcement in my January 29, 2013 posting. However, there’s no mention of the EU project on the CBRAIN website nor is there mention of a Canadian partner on the EBRAINS website, which seemed the most likely successor to the CBRAIN portion of the HBP project originally mentioned in 2013.

I couldn’t resist “Google maps of the brain.”

In any event, the statement from Astrid Lambrecht offers an interesting contrast to that offered by the leader of the other project.

Graphene Flagship

In fact, the Graphene Flagship has been celebrating its 10th anniversary since last year; see my September 1, 2022 posting titled “Graphene Week (September 5 – 9, 2022) is a celebration of 10 years of the Graphene Flagship.”

The flagship’s lead institution, Chalmers University of Technology in Sweden, issued an August 28, 2023 press release by Lisa Gahnertz (also on the Graphene Flagship website but published September 4, 2023) touting its achievement with an ebullience I am more accustomed to seeing in US news releases,

Chalmers steers Europe’s major graphene venture to success

For the past decade, the Graphene Flagship, the EU’s largest ever research programme, has been coordinated from Chalmers with Jari Kinaret at the helm. As the project reaches the ten-year mark, expectations have been realised, a strong European research field on graphene has been established, and the journey will continue.

‘Have we delivered what we promised?’ asks Graphene Flagship Director Jari Kinaret from his office in the physics department at Chalmers, overlooking the skyline of central Gothenburg.

‘Yes, we have delivered more than anyone had a right to expect,’ [emphasis mine] he says. ‘In our analysis for the conclusion of the project, we read the documents that were written at the start. What we promised then were over a hundred specific things. Some of them were scientific and technological promises, and they have all been fulfilled. Others were for specific applications, and here 60–70 per cent of what was promised has been delivered. We have also delivered applications we did not promise from the start, but these are more difficult to quantify.’

The autumn of 2013 saw the launch of the massive ten-year Science, Technology and Innovation research programme on graphene and other related two-dimensional materials. Joint funding from the European Commission and EU Member States totalled a staggering €1,000 million. A decade later, it is clear that the large-scale initiative has succeeded in its endeavours. According to a report by the research institute WifOR, the Graphene Flagship will have created a total contribution to GDP of €3,800 million and 38,400 new jobs in the 27 EU countries between 2014 and 2030.

Exceeded expectations

‘Per euro invested and compared to other EU projects, the flagship has performed 13 times better than expected in terms of patent applications, and seven times better for scientific publications. We have 17 spin-off companies that have received over €130 million in private funding – people investing their own money is a real example of trust in the fact that the technology works,’ says Jari Kinaret.

He emphasises that the long time span has been crucial in developing the concepts of the various flagship projects.

‘When it comes to new projects, the ability to work on a long timescale is a must and is more important than a large budget. It takes a long time to build trust, both in one another within a team and in the technology on the part of investors, industry and the wider community. The size of the project has also been significant. There has been an ecosystem around the material, with many graphene manufacturers and other organisations involved. It builds robustness, which means you have the courage to invest in the material and develop it.’

From lab to application

In 2010, Andre Geim and Konstantin Novoselov of the University of Manchester won the Nobel Prize in Physics for their pioneering experiments isolating the ultra-light and ultra-thin material graphene. It was the first known 2D material and stunned the world with its ‘exceptional properties originating in the strange world of quantum physics’ according to the Nobel Foundation’s press release. Many potential applications were identified for this electrically conductive, heat-resistant and light-transmitting material. Jari Kinaret’s research team had been exploring the material since 2006, and when Kinaret learned of the European Commission’s call for a ten-year research programme, it prompted him to submit an application. The Graphene Flagship was initiated to ensure that Europe would maintain its leading position in graphene research and innovation, and its coordination and administration fell to Chalmers.

Is it a staggering thought that your initiative became the biggest EU research project of all time?

‘The fact that the three-minute presentation I gave at a meeting in Brussels has grown into an activity in 22 countries, with 170 organisations and 1,300 people involved … You can’t think about things like that because it can easily become overwhelming. Sometimes you just have to go for it,’ says Jari Kinaret.

One of the objectives of the Graphene Flagship was to take the hopes for this material and move them from lab to application. What has happened so far?

‘We are well on track with 100 products priced and on their way to the market. Many of them are business-to-business products that are not something we ordinary consumers are going to buy, but which may affect us indirectly.’

‘It’s important to remember that getting products to the application stage is a complex process. For a researcher, it may take ten working prototypes; for industry, ten million. Everything has to click into place, on a large scale. All components must work identically and in exactly the same way, and be compatible with existing production in manufacturing as you cannot rebuild an entire factory for a new material. In short, it requires reliability, reproducibility and manufacturability.’

Applications in a wide range of areas

Graphene’s extraordinary properties are being used to deliver the next generation of technologies in a wide range of fields, such as sensors for self-driving cars, advanced batteries, new water purification methods and sophisticated instruments for use in neuroscience. When asked if there are any applications that Jari Kinaret himself would like to highlight, he mentions, among other things, the applications that are underway in the automotive industry – such as sensors to detect obstacles for self-driving cars. Thanks to graphene, they will be so cost-effective to produce that it will be possible to make them available in more than just the most expensive car models.

He also highlights the aerospace industry, where a graphene material for removing ice from aircraft and helicopter wings is under development for the Airbus company. Another favourite, which he has followed from basic research to application, is the development of an air cleaner for Lufthansa passenger aircraft, based on a kind of ‘graphene foam’. Because graphene foam is very light, it can be heated extremely quickly. A pulse of electricity lasting one thousandth of a second is enough to raise the temperature to 300 degrees, thus killing micro-organisms and effectively cleaning the air in the aircraft.

He also mentions the Swedish company ABB, which has developed a graphene composite for circuit breakers in switchgear. These circuit breakers are used to protect the electricity network and must be safe to use. The graphene composite replaces the manual lubrication of the circuit breakers, resulting in significant cost savings.

‘We also see graphene being used in medical technology, but its application requires many years of testing and approval by various bodies. For example, graphene technology can more effectively map the brain before neurosurgery, as it provides a more detailed image. Another aspect of graphene is that it is soft and pliable. This means it can be used for electrodes that are implanted in the brain to treat tremors in Parkinson’s patients, without the electrodes causing scarring,’ says Jari Kinaret.

Coordinated by Chalmers

Jari Kinaret sees the fact that the EU chose Chalmers as the coordinating university as a favourable factor for the Graphene Flagship.

‘Hundreds of millions of SEK [Swedish kronor] have gone into Chalmers research, but what has perhaps been more important is that we have become well-known and visible in certain areas. We also have the 2D-Tech competence centre and the SIO Grafen programme, both funded by Vinnova and coordinated by Chalmers and Chalmers industriteknik respectively. I think it is excellent that Chalmers was selected, as there could have been too much focus on the coordinating organisation if it had been more firmly established in graphene research at the outset.’

What challenges have been encountered during the project?

‘With so many stakeholders involved, we are not always in agreement. But that is a good thing. A management book I once read said that if two parties always agree, then one is redundant. At the start of the project, it was also interesting to see the major cultural differences we had in our communications and that different cultures read different things between the lines; it took time to realise that we should be brutally straightforward in our communications with one another.’

What has it been like to have the coordinating role that you have had?

‘Obviously, I’ve had to worry about things an ordinary physics professor doesn’t have to worry about, like a phone call at four in the morning after the Brexit vote or helping various parties with intellectual property rights. I have read more legal contracts than I thought I would ever have to read as a professor. As a researcher, your approach when you go into a role is narrow and deep, here it was rather all about breadth. I would have liked to have both, but there are only 26 hours in a day,’ jokes Jari Kinaret.

New phase for the project and EU jobs to come

A new assignment now awaits Jari Kinaret outside Chalmers as Chief Executive Officer of the EU initiative KDT JU (Key Digital Technologies Joint Undertaking, soon to become Chips JU), where industry and the public sector interact to drive the development of new electronic components and systems.

The Graphene Flagship may have reached its destination in its current form, but the work started is progressing in a form more akin to a flotilla. About a dozen projects will continue to live on under the auspices of the European Commission’s Horizon Europe programme. Chalmers is going to coordinate a smaller CSA project called GrapheneEU, where CSA stands for ‘Coordination and Support Action’. It will act as a cohesive force between the research and innovation projects that make up the next phase of the flagship, offering them a range of support and services, including communication, innovation and standardisation.

The Graphene Flagship is about to turn ten. If the project had been a ten-year-old child, what kind of child would it have been?

‘It would have been a very diverse organism. Different aspirations are beginning to emerge – perhaps it is adolescence that is approaching. In addition, within the project we have also studied other related 2D materials, and we found that there are 6,000 distinct materials of this type, of which only about 100 have been studied. So, it’s the younger siblings that are starting to arrive now.’

Facts about the Graphene Flagship:

The Graphene Flagship is the first European flagship for future and emerging technologies. It has been coordinated and administered from the Department of Physics at Chalmers, and as the project enters its next phase, GrapheneEU, coordination will continue to be carried out by staff currently working on the flagship led by Chalmers Professor Patrik Johansson.

The project has proved highly successful in developing graphene-based technology in Europe, resulting in 17 new companies, around 100 new products, nearly 500 patent applications and thousands of scientific papers. All in all, the project has exceeded the EU’s targets for utilisation from research projects by a factor of ten. According to the assessment of the EU research programme Horizon 2020, Chalmers’ coordination of the flagship has been identified as one of the key factors behind its success.

Graphene Week will be held at the Svenska Mässan in Gothenburg from 4 to 8 September 2023. Graphene Week is an international conference, which also marks the finale of the ten-year anniversary of the Graphene Flagship. The conference will be jointly led by academia and industry – Professor Patrik Johansson from Chalmers and Dr Anna Andersson from ABB – and is expected to attract over 400 researchers from Sweden, Europe and the rest of the world. The programme includes an exhibition, press conference and media activities, special sessions on innovation, diversity and ethics, and several technical sessions. The full programme is available here.

Read the press release on Graphene Week from 4 to 8 September and the overall results of the Graphene Flagship. …

Ten years and €1B each. Congratulations to the organizers on such massive undertakings. As for whether (and how) they’ve been successful, I imagine time will tell.

Optical memristors and neuromorphic computing

A June 5, 2023 news item on Nanowerk announced a paper which reviews the state-of-the-art of optical memristors, Note: Links have been removed,

AI, machine learning, and ChatGPT may be relatively new buzzwords in the public domain, but developing a computer that functions like the human brain and nervous system – both hardware and software combined – has been a decades-long challenge. Engineers at the University of Pittsburgh are today exploring how optical “memristors” may be a key to developing neuromorphic computing.

Resistors with memory, or memristors, have already demonstrated their versatility in electronics, with applications as computational circuit elements in neuromorphic computing and compact memory elements in high-density data storage. Their unique design has paved the way for in-memory computing and captured significant interest from scientists and engineers alike.

A new review article published in Nature Photonics (“Integrated Optical Memristors”) sheds light on the evolution of this technology—and the work that still needs to be done for it to reach its full potential. Led by Nathan Youngblood, assistant professor of electrical and computer engineering at the University of Pittsburgh Swanson School of Engineering, the article explores the potential of optical devices which are analogs of electronic memristors. This new class of device could play a major role in revolutionizing high-bandwidth neuromorphic computing, machine learning hardware, and artificial intelligence in the optical domain.

A June 2, 2023 University of Pittsburgh news release (also on EurekAlert but published June 5, 2023), which originated the news item, provides more detail,

“Researchers are truly captivated by optical memristors because of their incredible potential in high-bandwidth neuromorphic computing, machine learning hardware, and artificial intelligence,” explained Youngblood. “Imagine merging the incredible advantages of optics with local information processing. It’s like opening the door to a whole new realm of technological possibilities that were previously unimaginable.” 

The review article presents a comprehensive overview of recent progress in this emerging field of photonic integrated circuits. It explores the current state-of-the-art and highlights the potential applications of optical memristors, which combine the benefits of ultrafast, high-bandwidth optical communication with local information processing. However, scalability emerged as the most pressing issue that future research should address. 

“Scaling up in-memory or neuromorphic computing in the optical domain is a huge challenge. Having a technology that is fast, compact, and efficient makes scaling more achievable and would represent a huge step forward,” explained Youngblood. 

“One example of the limitations is that if you were to take phase change materials, which currently have the highest storage density for optical memory, and try to implement a relatively simplistic neural network on-chip, it would take a wafer the size of a laptop to fit all the memory cells needed,” he continued. “Size matters for photonics, and we need to find a way to improve the storage density, energy efficiency, and programming speed to do useful computing at useful scales.”

Using Light to Revolutionize Computing

Optical memristors can revolutionize computing and information processing across several applications. They can enable active trimming of photonic integrated circuits (PICs), allowing for on-chip optical systems to be adjusted and reprogrammed as needed without continuously consuming power. They also offer high-speed data storage and retrieval, promising to accelerate processing, reduce energy consumption, and enable parallel processing. 

Optical memristors can even be used for artificial synapses and brain-inspired architectures. Dynamic memristors with nonvolatile storage and nonlinear output replicate the long-term plasticity of synapses in the brain and pave the way for spiking integrate-and-fire computing architectures.
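
Those ‘spiking integrate-and-fire computing architectures’ have a compact mathematical core. Here’s a minimal leaky integrate-and-fire neuron in Python (the standard textbook model, not code from the review) showing the behaviour that memristive synapses, optical or electronic, are meant to drive,

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# integrates weighted inputs, leaks toward rest, and emits a spike (then
# resets) when it crosses the threshold.
def lif_run(inputs, leak=0.9, threshold=1.0, v_reset=0.0):
    v, spikes = 0.0, []
    for i in inputs:
        v = leak * v + i  # leak plus synaptic input
        if v >= threshold:
            spikes.append(1)
            v = v_reset   # fire and reset
        else:
            spikes.append(0)
    return spikes

print(lif_run([0.3, 0.3, 0.3, 0.3, 0.0, 0.6, 0.6]))  # [0, 0, 0, 1, 0, 0, 1]
```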

Research to scale up and improve optical memristor technology could unlock unprecedented possibilities for high-bandwidth neuromorphic computing, machine learning hardware, and artificial intelligence. 

“We looked at a lot of different technologies. The thing we noticed is that we’re still far away from the target of an ideal optical memristor–something that is compact, efficient, fast, and changes the optical properties in a significant manner,” Youngblood said. “We’re still searching for a material or a device that actually meets all these criteria in a single technology in order for it to drive the field forward.”

The publication of “Integrated Optical Memristors” (DOI: 10.1038/s41566-023-01217-w) was published in Nature Photonics and is coauthored by senior author Harish Bhaskaran at the University of Oxford, Wolfram Pernice at Heidelberg University, and Carlos Ríos at the University of Maryland.

Despite including that final paragraph, I’m also providing a link to and a citation for the paper,

Integrated optical memristors by Nathan Youngblood, Carlos A. Ríos Ocampo, Wolfram H. P. Pernice & Harish Bhaskaran. Nature Photonics volume 17, pages 561–572 (2023) DOI: https://doi.org/10.1038/s41566-023-01217-w Published online: 29 May 2023 Issue Date: July 2023

This paper is behind a paywall.

Memristors based on halide perovskite nanocrystals are more powerful and easier to manufacture

A March 8, 2023 news item on phys.org announces research from Swiss and Italian researchers into a new type of memristor,

Researchers at Empa, ETH Zurich and the Politecnico di Milano are developing a new type of computer component that is more powerful and easier to manufacture than its predecessors. Inspired by the human brain, it is designed to process large amounts of data fast and in an energy-efficient way.

In many respects, the human brain is still superior to modern computers. Although most people can’t do math as fast as a computer, we can effortlessly process complex sensory information and learn from experiences, while a computer cannot – at least not yet. And, the brain does all this by consuming less than half as much energy as a laptop.

One of the reasons for the brain’s energy efficiency is its structure. The individual brain cells – the neurons and their connections, the synapses – can both store and process information. In computers, however, the memory is separate from the processor, and data must be transported back and forth between these two components. The speed of this transfer is limited, which can slow down the whole computer when working with large amounts of data.

One possible solution to this bottleneck are novel computer architectures that are modeled on the human brain. To this end, scientists are developing so-called memristors: components that, like brain cells, combine data storage and processing. A team of researchers from Empa, ETH Zurich and the “Politecnico di Milano” has now developed a memristor that is more powerful and easier to manufacture than its predecessors. The researchers have recently published their results in the journal Science Advances.

A March 8, 2023 Swiss Federal Laboratories for Materials Science and Technology (EMPA) press release (also on EurekAlert), which originated the news item, provides details about what makes this memristor different,

Performance through mixed ionic and electronic conductivity

The novel memristors are based on halide perovskite nanocrystals, a semiconductor material known from solar cell manufacturing. “Halide perovskites conduct both ions and electrons,” explains Rohit John, former ETH Fellow and postdoctoral researcher at both ETH Zurich and Empa. “This dual conductivity enables more complex calculations that closely resemble processes in the brain.”

The researchers conducted the experimental part of the study entirely at Empa: They manufactured the thin-film memristors at the Thin Films and Photovoltaics laboratory and investigated their physical properties at the Transport at Nanoscale Interfaces laboratory. Based on the measurement results, they then simulated a complex computational task that corresponds to a learning process in the visual cortex in the brain. The task involved determining the orientation of light based on signals from the retina.

“As far as we know, this is only the second time this kind of computation has been performed on memristors,” says Maksym Kovalenko, professor at ETH Zurich and head of the Functional Inorganic Materials research group at Empa. “At the same time, our memristors are much easier to manufacture than before.” This is because, in contrast to many other semiconductors, perovskites crystallize at low temperatures. In addition, the new memristors do not require the complex preconditioning through application of specific voltages that comparable devices need for such computing tasks. This makes them faster and more energy-efficient.
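
The paper’s title mentions ‘second-order complexity,’ and a toy model helps show what a second internal variable buys you. In this Python sketch (my own rough caricature, not the team’s device physics), a fast-decaying internal quantity, standing in loosely for transient ionic accumulation, makes the conductance change depend on pulse timing, much as short-term dynamics shape plasticity in biological synapses,

```python
import math

# Toy second-order memristor: a fast internal variable u (think transient ion
# accumulation) decays between pulses, and the conductance g potentiates
# strongly only when pulses arrive close together, i.e. timing matters.
def apply_pulses(times, decay_tau=10.0, u_kick=1.0, learn_rate=0.05):
    g, u, last_t = 0.5, 0.0, None
    for t in times:
        if last_t is not None:
            u *= math.exp(-(t - last_t) / decay_tau)  # u relaxes between pulses
        u += u_kick                       # each pulse boosts the internal state
        g = min(1.0, g + learn_rate * u)  # the conductance update scales with u
        last_t = t
    return g

print(apply_pulses([0, 2, 4, 6]))       # closely spaced pulses: ~0.92
print(apply_pulses([0, 50, 100, 150]))  # widely spaced pulses: ~0.70
```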

Complementing rather than replacing

The technology, though, is not quite ready for deployment yet. The ease with which the new memristors can be manufactured also makes them difficult to integrate with existing computer chips: Perovskites cannot withstand temperatures of 400 to 500 degrees Celsius that are needed to process silicon – at least not yet. But according to Daniele Ielmini, professor at the “Politecnico di Milano”, that integration is key to the success for new brain-like computer technologies. “Our goal is not to replace classical computer architecture,” he explains. “Rather, we want to develop alternative architectures that can perform certain tasks faster and with greater energy efficiency. This includes, for example, the parallel processing of large amounts of data, which is generated everywhere today, from agriculture to space exploration.”

Promisingly, there are other materials with similar properties that could be used to make high-performance memristors. “We can now test our memristor design with different materials,” says Alessandro Milozzi, a doctoral student at the “Politecnico di Milano”. “It is quite possible that some of them are better suited for integration with silicon.”

Here’s a link to and a citation for the paper,

Ionic-electronic halide perovskite memdiodes enabling neuromorphic computing with a second-order complexity by Rohit Abraham John, Alessandro Milozzi, Sergey Tsarev, Rolf Brönnimann, Simon C. Boehme, Erfu Wu, Ivan Shorubalko, Maksym V. Kovalenko, and Daniele Ielmini. Science Advances 23 Dec 2022 Vol 8, Issue 51 DOI: 10.1126/sciadv.ade0072

This paper is open access.

Neuromorphic engineering: an overview

In a February 13, 2023 essay, Michael Berger who runs the Nanowerk website provides an overview of brainlike (neuromorphic) engineering.

This essay is the most extensive piece I’ve seen on Berger’s website and it covers everything from the reasons why scientists are so interested in mimicking the human brain to specifics about memristors. Here are a few excerpts (Note: Links have been removed),

Neuromorphic engineering is a cutting-edge field that focuses on developing computer hardware and software systems inspired by the structure, function, and behavior of the human brain. The ultimate goal is to create computing systems that are significantly more energy-efficient, scalable, and adaptive than conventional computer systems, capable of solving complex problems in a manner reminiscent of the brain’s approach.

This interdisciplinary field draws upon expertise from various domains, including neuroscience, computer science, electronics, nanotechnology, and materials science. Neuromorphic engineers strive to develop computer chips and systems incorporating artificial neurons and synapses, designed to process information in a parallel and distributed manner, akin to the brain’s functionality.

Key challenges in neuromorphic engineering encompass developing algorithms and hardware capable of performing intricate computations with minimal energy consumption, creating systems that can learn and adapt over time, and devising methods to control the behavior of artificial neurons and synapses in real-time.

Neuromorphic engineering has numerous applications in diverse areas such as robotics, computer vision, speech recognition, and artificial intelligence. The aspiration is that brain-like computing systems will give rise to machines better equipped to tackle complex and uncertain tasks, which currently remain beyond the reach of conventional computers.

It is essential to distinguish between neuromorphic engineering and neuromorphic computing, two related but distinct concepts. Neuromorphic computing represents a specific application of neuromorphic engineering, involving the utilization of hardware and software systems designed to process information in a manner akin to human brain function.

One of the major obstacles in creating brain-inspired computing systems is the vast complexity of the human brain. Unlike traditional computers, the brain operates as a nonlinear dynamic system that can handle massive amounts of data through various input channels, filter information, store key information in short- and long-term memory, learn by analyzing incoming and stored data, make decisions in a constantly changing environment, and do all of this while consuming very little power.

The Human Brain Project [emphasis mine], a large-scale research project launched in 2013, aims to create a comprehensive, detailed, and biologically realistic simulation of the human brain, known as the Virtual Brain. One of the goals of the project is to develop new brain-inspired computing technologies, such as neuromorphic computing.

The Human Brain Project has been funded by the European Union (1B Euros over 10 years starting in 2013 and sunsetting in 2023). From the Human Brain Project Media Invite,

The final Human Brain Project Summit 2023 will take place in Marseille, France, from March 28-31, 2023.

As the ten-year European Flagship Human Brain Project (HBP) approaches its conclusion in September 2023, the final HBP Summit will highlight the scientific achievements of the project at the interface of neuroscience and technology and the legacy that it will leave for the brain research community. …

One last excerpt from the essay,

Neuromorphic computing is a radical reimagining of computer architecture at the transistor level, modeled after the structure and function of biological neural networks in the brain. This computing paradigm aims to build electronic systems that attempt to emulate the distributed and parallel computation of the brain by combining processing and memory in the same physical location.

This is unlike traditional computing, which is based on von Neumann systems consisting of three different units: processing unit, I/O unit, and storage unit. This stored program architecture is a model for designing computers that uses a single memory to store both data and instructions, and a central processing unit to execute those instructions. This design, first proposed by mathematician and computer scientist John von Neumann, is widely used in modern computers and is considered to be the standard architecture for computer systems and relies on a clear distinction between memory and processing.

I found the diagram Berger included, contrasting von Neumann’s design with a neuromorphic design, illuminating,

A graphical comparison of the von Neumann and Neuromorphic architecture. Left: The von Neumann architecture used in traditional computers. The red lines depict the data communication bottleneck in the von Neumann architecture. Right: A graphical representation of a general neuromorphic architecture. In this architecture, the processing and memory is decentralized across different neuronal units (the yellow nodes) and synapses (the black lines connecting the nodes), creating a naturally parallel computing environment via the mesh-like structure. (Source: DOI: 10.1109/IS.2016.7737434) [downloaded from https://www.nanowerk.com/spotlight/spotid=62353.php]
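
The diagram’s contrast can be made concrete with the workhorse operation of neural networks, the matrix-vector multiply. In this rough Python sketch (my own illustration, not from Berger’s essay), the von Neumann version shuttles every weight across the bus to the processor, while the crossbar version leaves the weights in place as conductances and lets Ohm’s and Kirchhoff’s laws do the multiplying and summing,

```python
# Matrix-vector multiply, the core operation of neural networks, done two ways.

# Von Neumann picture: every weight is fetched from memory, multiplied in the
# processor and accumulated, so data crosses the bus once per weight.
def mvm_von_neumann(weights, x):
    out = []
    for row in weights:
        acc = 0.0
        for w, xi in zip(row, x):  # each weight "travels" to the processor
            acc += w * xi
        out.append(acc)
    return out

# Crossbar picture: weights sit in place as conductances G; applying input
# voltages V yields column currents I = G x V (Ohm's law per device,
# Kirchhoff's law per column) in a single analog step.
def mvm_crossbar(conductances, voltages):
    return [sum(g * v for g, v in zip(row, voltages)) for row in conductances]

W = [[0.2, 0.8], [0.5, 0.1]]
print(mvm_von_neumann(W, [1.0, 0.5]))  # [0.6, 0.55]
print(mvm_crossbar(W, [1.0, 0.5]))     # same numbers, with no weight movement
```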

Berger offers a very good overview and I recommend reading his February 13, 2023 essay on neuromorphic engineering with one proviso, Note: A link has been removed,

Many researchers in this field see memristors as a key device component for neuromorphic engineering. Memristor – or memory resistor – devices are non-volatile nanoelectronic memory devices that were first theorized [emphasis mine] by Leon Chua in the 1970’s. However, it was some thirty years later that the first practical device was fabricated in 2008 by a group led by Stanley Williams [sometimes cited as R. Stanley Williams] at HP Research Labs.

Chua wasn’t the first, as he himself has noted. Chua arrived at his theory independently in the 1970s, but Bernard Widrow theorized what he called a ‘memistor’ in the 1960s. In fact, “Memristors: they are older than you think” is a May 22, 2012 posting, which featured the article “Two centuries of memristors” by Themistoklis Prodromakis, Christofer Toumazou and Leon Chua published in Nature Materials.

Most of us try to get it right but we don’t always succeed. It’s always good practice to read everyone (including me) with a little skepticism.

Combining silicon with metal oxide memristors to create powerful, low-energy intensive chips enabling AI in portable devices

In this one week, I’m publishing my first stories (see also June 13, 2023 posting “ChatGPT and a neuromorphic [brainlike] synapse“) where artificial intelligence (AI) software is combined with a memristor (hardware component) for brainlike (neuromorphic) computing.

Here’s more about some of the latest research from a March 30, 2023 news item on ScienceDaily,

Everyone is talking about the newest AI and the power of neural networks, forgetting that software is limited by the hardware on which it runs. But it is hardware, says USC [University of Southern California] Professor of Electrical and Computer Engineering Joshua Yang, that has become “the bottleneck.” Now, Yang’s new research with collaborators might change that. They believe that they have developed a new type of chip with the best memory of any chip thus far for edge AI (AI in portable devices).

A March 29, 2023 University of Southern California (USC) news release (also on EurekAlert), which originated the news item, contextualizes the research and delves further into the topic of neuromorphic hardware,

For approximately the past 30 years, while the size of the neural networks needed for AI and data science applications doubled every 3.5 months, the hardware capability needed to process them doubled only every 3.5 years. According to Yang, hardware presents a more and more severe problem for which few have patience. 
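
Taking those quoted figures at face value, here’s my own back-of-the-envelope arithmetic showing how quickly that gap opens over a single decade,

```python
# Back-of-the-envelope: compound the two doubling rates quoted above over
# ten years (120 months) to see how quickly the gap opens.
months = 120
model_growth = 2 ** (months / 3.5)      # model size doubles every 3.5 months
hardware_growth = 2 ** (months / 42.0)  # hardware doubles every 3.5 years (42 months)
print(f"{model_growth:.2e}x vs {hardware_growth:.1f}x")  # ~2.09e+10x vs ~7.2x
```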

Governments, industry, and academia are trying to address this hardware challenge worldwide. Some continue to work on hardware solutions with silicon chips, while others are experimenting with new types of materials and devices.  Yang’s work falls into the middle—focusing on exploiting and combining the advantages of the new materials and traditional silicon technology that could support heavy AI and data science computation. 

Their new paper in Nature focuses on the understanding of fundamental physics that leads to a drastic increase in memory capacity needed for AI hardware. The team led by Yang, with researchers from USC (including Han Wang’s group), MIT [Massachusetts Institute of Technology], and the University of Massachusetts, developed a protocol for devices to reduce “noise” and demonstrated the practicality of using this protocol in integrated chips. This demonstration was made at TetraMem, a startup company co-founded by Yang and his co-authors  (Miao Hu, Qiangfei Xia, and Glenn Ge), to commercialize AI acceleration technology. According to Yang, this new memory chip has the highest information density per device (11 bits) among all types of known memory technologies thus far. Such small but powerful devices could play a critical role in bringing incredible power to the devices in our pockets. The chips are not just for memory but also for the processor. And millions of them in a small chip, working in parallel to rapidly run your AI tasks, could only require a small battery to power it. 

The chips that Yang and his colleagues are creating combine silicon with metal oxide memristors in order to create powerful but energy-efficient chips. The technique focuses on using the positions of atoms to represent information rather than the number of electrons (the technique underlying current computation on chips). The positions of the atoms offer a compact and stable way to store more information in an analog, instead of digital, fashion. Moreover, the information can be processed where it is stored instead of being sent to one of the few dedicated 'processors,' eliminating the so-called 'von Neumann bottleneck' existing in current computing systems. In this way, says Yang, computing for AI is "more energy efficient with a higher throughput."
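To illustrate the compute-in-memory idea in the plainest terms, here is a toy sketch of my own (not TetraMem's design, and with made-up values): in a memristor crossbar, Ohm's law does the multiplications and Kirchhoff's current law does the additions, so a matrix-vector multiply happens in the analog domain, right where the matrix is stored.

```python
import numpy as np

# Toy model of analog compute-in-memory on a memristor crossbar.
# Each crosspoint stores a conductance G[i, j]; applying voltages V[j]
# to the columns makes each row wire carry I[i] = sum_j G[i, j] * V[j]:
# Ohm's law multiplies, Kirchhoff's current law sums.
# Values are illustrative, not device parameters from the paper.

rng = np.random.default_rng(0)

G = rng.uniform(1e-6, 1e-4, size=(4, 8))  # conductances (siemens)
V = rng.uniform(0.0, 0.2, size=8)         # read voltages (volts)

I = G @ V  # what the crossbar computes physically, in one step

print(I)  # row currents (amperes); digitizing them recovers the result
```

The point of the analogy is that the "weights" never move: they sit in the devices while the data flows through them, which is what sidesteps the von Neumann bottleneck described above.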

How it works: 

Yang explains that the electrons manipulated in traditional chips are "light," and this lightness makes them prone to moving around and being more volatile. Instead of storing memory through electrons, Yang and collaborators are storing it in full atoms. Here is why this memory matters: normally, says Yang, when one turns off a computer, the information in memory is gone, and if you need that information to run a new computation, you have lost both time and energy retrieving it. The new method, relying on atoms rather than electrons, does not require battery power to maintain stored information. Similar scenarios arise in AI computations, where a stable memory capable of high information density is crucial. Yang imagines this new technology may enable powerful AI capability in edge devices, such as Google Glass, which he says previously suffered from a frequent recharging issue.

Further, chips that rely on atoms as opposed to electrons can be smaller. Yang adds that with this new method there is more computing capacity at a smaller scale, and that it could offer "many more levels of memory to help increase information density."

To put it in context: right now, ChatGPT runs in the cloud. The new innovation, with some further development, could put the power of a mini version of ChatGPT in everyone's personal device, making such high-powered tech more affordable and accessible for all sorts of applications.

Here’s a link to and a citation for the paper,

Thousands of conductance levels in memristors integrated on CMOS by Mingyi Rao, Hao Tang, Jiangbin Wu, Wenhao Song, Max Zhang, Wenbo Yin, Ye Zhuo, Fatemeh Kiani, Benjamin Chen, Xiangqi Jiang, Hefei Liu, Hung-Yu Chen, Rivu Midya, Fan Ye, Hao Jiang, Zhongrui Wang, Mingche Wu, Miao Hu, Han Wang, Qiangfei Xia, Ning Ge, Ju Li & J. Joshua Yang. Nature volume 615, pages 823–829 (2023) DOI: https://doi.org/10.1038/s41586-023-05759-5 Issue Date: 30 March 2023 Published: 29 March 2023

This paper is behind a paywall.

ChatGPT and a neuromorphic (brainlike) synapse

I was teaching an introductory course about nanotechnology back in 2014 and, at the end of one session, stated (more or less) that the full potential of artificial intelligence (software) wasn't going to be realized until the hardware (memristors) was part of the package. (It's interesting to revisit that in light of the recent uproar around AI, covered in my May 25, 2023 posting, which offered a survey of the situation.)

One of the major problems with artificial intelligence is its memory; the other is energy consumption. Both problems could be addressed by integrating memristors into the hardware, giving rise to neuromorphic (brainlike) computing. (For those who don't know, the human brain, in addition to its capacity for memory, is remarkably energy efficient.)

This is the first time I’ve seen research into memristors where software has been included. Disclaimer: There may be a lot more research of this type; I just haven’t seen it before. A March 24, 2023 news item on ScienceDaily announces research from Korea,

ChatGPT’s impact extends beyond the education sector and is causing significant changes in other areas. The AI language model is recognized for its ability to perform various tasks, including paper writing, translation, coding, and more, all through question-and-answer-based interactions. The AI system relies on deep learning, which requires extensive training to minimize errors, resulting in frequent data transfers between memory and processors. However, traditional digital computer systems’ von Neumann architecture separates the storage and computation of information, resulting in increased power consumption and significant delays in AI computations. Researchers have developed semiconductor technologies suitable for AI applications to address this challenge.

A March 24, 2023 Pohang University of Science & Technology (POSTECH) press release (also on EurekAlert), which originated the news item, provides more detail,

A research team at POSTECH, led by Professor Yoonyoung Chung (Department of Electrical Engineering, Department of Semiconductor Engineering), Professor Seyoung Kim (Department of Materials Science and Engineering, Department of Semiconductor Engineering), and Ph.D. candidate Seongmin Park (Department of Electrical Engineering), has developed a high-performance AI semiconductor device [emphasis mine] using indium gallium zinc oxide (IGZO), an oxide semiconductor widely used in OLED [organic light-emitting diode] displays. The new device has proven to be excellent in terms of performance and power efficiency.

Efficient AI operations, such as those of ChatGPT, require computations to occur within the memory responsible for storing information. Unfortunately, previous AI semiconductor technologies have been limited in their ability to meet all of the requirements needed to improve AI accuracy, such as linear and symmetric programming and uniformity.

The research team settled on IGZO as a key material for AI computations because it can be mass-produced and provides uniformity, durability, and computing accuracy. The compound comprises indium, gallium, zinc, and oxygen in a fixed ratio, and its excellent electron mobility and leakage current properties have made it a standard backplane material for OLED displays.

Using this material, the researchers developed a novel synapse device [emphasis mine] composed of two transistors interconnected through a storage node. Precise control of this node's charging and discharging speed has enabled the AI semiconductor to meet the diverse metrics required for high-level performance. Furthermore, applying synaptic devices to a large-scale AI system requires their output current to be minimized, and the researchers confirmed that the ultra-thin film insulators inside the transistors can be used to control this current, making the devices suitable for large-scale AI.
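"Linear and symmetric programming," mentioned a couple of paragraphs up, has a concrete meaning: every identical programming pulse should move the stored weight by the same amount, whether the weight is being increased (potentiation) or decreased (depression). Here is a toy sketch of my own (a deliberate simplification of the two-transistor device, with made-up numbers) showing the ideal behavior:

```python
# Toy model of a synaptic weight stored as charge on a node and updated
# by fixed programming pulses. "Linear and symmetric" means each pulse
# shifts the conductance by the same step in either direction.
# Parameters are illustrative, not measurements of the POSTECH device.

STEP = 0.01          # conductance change per pulse (arbitrary units)
G_MIN, G_MAX = 0.0, 1.0

def program(g, pulses):
    """Positive pulses potentiate, negative pulses depress; each pulse
    moves the conductance by the same fixed step (linear and symmetric)."""
    return min(G_MAX, max(G_MIN, g + STEP * pulses))

g = 0.5
g = program(g, +10)  # potentiation: 0.5 -> 0.6
g = program(g, -10)  # depression:  0.6 -> back to ~0.5
print(round(g, 12))  # 0.5 -- the updates retrace each other
```

In many real filamentary memristors, by contrast, the step shrinks as the device saturates (nonlinearity) and differs between potentiation and depression (asymmetry), which degrades training accuracy; the claim for this IGZO device is that it avoids both problems.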

The researchers used the newly developed synaptic device to train and classify handwritten data, achieving a high accuracy of over 98%, [emphasis mine] which verifies its potential application in high-accuracy AI systems in the future.

Professor Chung explained, "The significance of my research team's achievement is that we overcame the limitations of conventional AI semiconductor technologies that focused solely on material development. To do this, we utilized materials already in mass production. Furthermore, linear and symmetrical programming characteristics were obtained through a new structure using two transistors as one synaptic device. Thus, our successful development and application of this new AI semiconductor technology show great potential to improve the efficiency and accuracy of AI."

This study was published last week [March 2023] on the inside back cover of Advanced Electronic Materials [paper edition] and was supported by the Next-Generation Intelligent Semiconductor Technology Development Program through the National Research Foundation, funded by the Ministry of Science and ICT [Information and Communication Technologies] of Korea.

Here’s a link to and a citation for the paper,

Highly Linear and Symmetric Analog Neuromorphic Synapse Based on Metal Oxide Semiconductor Transistors with Self-Assembled Monolayer for High-Precision Neural Network Computation by Seongmin Park, Suwon Seong, Gilsu Jeon, Wonjae Ji, Kyungmi Noh, Seyoung Kim, Yoonyoung Chung. Advanced Electronic Materials Volume 9, Issue 3 March 2023 2200554 DOI: https://doi.org/10.1002/aelm.202200554 First published online: 29 December 2022

This paper is open access.

Also, there is another approach besides materials such as indium gallium zinc oxide (IGZO): using biological cells, as my June 6, 2023 posting, which features work on biological neural networks (BNNs), suggests in relation to creating robots that can perform brainlike computing.

Studying quantum conductance in memristive devices

A September 27, 2022 news item on phys.org provides an introduction to the later discussion of quantum effects in memristors,

At the nanoscale, the laws of classical physics suddenly become inadequate to explain the behavior of matter. It is precisely at this juncture that quantum theory comes into play, effectively describing the physical phenomena characteristic of the atomic and subatomic world. Thanks to the different behavior of matter on these length and energy scales, it is possible to develop new materials, devices and technologies based on quantum effects, which could yield a real quantum revolution, one that promises to transform areas such as cryptography, telecommunications and computation.

The physics of very small objects, already at the basis of many technologies we use today, is intrinsically linked to the world of nanotechnology, the branch of applied science dealing with the control of matter at the nanometer scale (a nanometer is one billionth of a meter). This control of matter at the nanoscale underpins the development of new electronic devices.

A September 27, 2022 Istituto Nazionale di Ricerca Metrologica (INRIM) press release (summary, PDF, and also on EurekAlert), which originated the news item, provides more information about the research,

Among these, memristors are considered promising devices for the realization of new computational architectures emulating functions of our brain, allowing the creation of increasingly efficient computing systems suitable for the development of the entire artificial intelligence sector, as recently shown by INRiM researchers in collaboration with several international universities and research institutes [1,2].

In this context, the EMPIR MEMQuD project, coordinated by INRiM, aims to study quantum effects in such devices, in which the electronic conduction properties can be manipulated to allow the observation of quantized conductance phenomena at room temperature. In addition to analyzing the fundamentals and recent developments, the review "Quantum Conductance in Memristive Devices: Fundamentals, Developments, and Applications," recently published in the prestigious international journal Advanced Materials (https://doi.org/10.1002/adma.202201248), analyzes how these effects can be used for a wide range of applications, from metrology to the development of next-generation memories and artificial intelligence.

Here’s a link to and a citation for the paper,

Quantum Conductance in Memristive Devices: Fundamentals, Developments, and Applications by Gianluca Milano, Masakazu Aono, Luca Boarino, Umberto Celano, Tsuyoshi Hasegawa, Michael Kozicki, Sayani Majumdar, Mariela Menghini, Enrique Miranda, Carlo Ricciardi, Stefan Tappertzhofen, Kazuya Terabe, Ilia Valov. Advanced Materials Volume 34, Issue 32 August 11, 2022 2201248 DOI: https://doi.org/10.1002/adma.202201248 First published: 11 April 2022

This paper is open access.

You can find the EMPIR (European Metrology Programme for Innovation and Research) MEMQuD (quantum effects in memristive devices) project here, from the homepage,

Memristive devices are electrical resistance switches that couple ionics (i.e., the dynamics of ions) with electronics. These devices offer a promising platform to observe quantum effects in air, at room temperature, and without an applied magnetic field. For this reason, their behavior can be traced to fundamental physics constants fixed in the revised International System of Units (SI) for the realization of a quantum-based standard of resistance. However, as an emerging technology, memristive devices lack standardization and insights into the fundamental physics underlying their working principles.

The overall aim of the project is to investigate and exploit quantized conductance effects in memristive devices that operate reliably, in air and at room temperature. In particular, the project will focus on the development of memristive model systems and nanometrological characterization techniques for memristive devices, in order to better understand and control the quantized effects. Such an outcome would enable not only the development of neuromorphic systems but also the realization of a standard of resistance implementable on-chip for self-calibrating systems with zero-chain traceability in the spirit of the revised SI.
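For the metrology angle, the relevant arithmetic is the conductance quantum: an atomic-scale filament in a memristive device can show conductance plateaus at integer (or half-integer) multiples of G0 = 2e²/h. Here is a quick sketch of the numbers, using the constants fixed exactly in the revised SI (the code is mine, not the project's):

```python
# Conductance quantum from constants fixed exactly in the revised SI (2019):
# G0 = 2 * e**2 / h, the unit in which quantized conductance steps appear.

e = 1.602176634e-19  # elementary charge (C), exact in the revised SI
h = 6.62607015e-34   # Planck constant (J*s), exact in the revised SI

G0 = 2 * e**2 / h
print(f"G0   = {G0:.6e} S")      # ~7.748092e-05 siemens
print(f"1/G0 = {1/G0:.1f} ohm")  # ~12906.4 ohm, the resistance quantum

# A filament showing stable plateaus at n * G0 could, in principle,
# anchor an on-chip resistance standard with no calibration chain.
for n in (1, 2, 3):
    print(f"n={n}: {n * G0:.3e} S")
```

Because e and h are now fixed by definition, a device whose conductance sits reliably on those plateaus is, in effect, its own reference, which is what "zero-chain traceability" is getting at.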

I'm starting to see mentions of 'neuromorphic computing' in advertisements (specifically, one for a Mercedes-Benz car). I will have more about these first mentions of neuromorphic computing in consumer products in a future posting.