Tag Archives: RWTH Aachen University

Artificial synapse courtesy of nanowires

It looks like a popsicle to me,

Caption: Image captured by an electron microscope of a single nanowire memristor (highlighted in colour to distinguish it from other nanowires in the background image). Blue: silver electrode, orange: nanowire, yellow: platinum electrode. Blue bubbles are dispersed over the nanowire. They are made up of silver ions and form a bridge between the electrodes which increases the conductivity. Credit: Forschungszentrum Jülich

Not a popsicle but a representation of a device (memristor) scientists claim mimics a biological nerve cell according to a December 5, 2018 news item on ScienceDaily,

Scientists from Jülich [Germany] together with colleagues from Aachen [Germany] and Turin [Italy] have produced a memristive element made from nanowires that functions in much the same way as a biological nerve cell. The component is able to both save and process information, as well as receive numerous signals in parallel. The resistive switching cell made from oxide crystal nanowires is thus proving to be the ideal candidate for use in building bioinspired “neuromorphic” processors, able to take over the diverse functions of biological synapses and neurons.

A Dec. 5, 2018 Forschungszentrum Jülich press release (also on EurekAlert), which originated the news item, provides more details,

Computers have learned a lot in recent years. Thanks to rapid progress in artificial intelligence they are now able to drive cars, translate texts, defeat world champions at chess, and much more besides. In doing so, one of the greatest challenges lies in the attempt to artificially reproduce the signal processing in the human brain. In neural networks, data are stored and processed to a high degree in parallel. Traditional computers, on the other hand, rapidly work through tasks in succession and clearly distinguish between the storing and processing of information. As a rule, neural networks can only be simulated in a very cumbersome and inefficient way using conventional hardware.

Systems with neuromorphic chips that imitate the way the human brain works offer significant advantages. Experts in the field describe this type of bioinspired computer as being able to work in a decentralised way, having at its disposal a multitude of processors, which, like neurons in the brain, are connected to each other by networks. If a processor breaks down, another can take over its function. What is more, just like in the brain, where practice leads to improved signal transfer, a bioinspired processor should have the capacity to learn.

“With today’s semiconductor technology, these functions are to some extent already achievable. These systems are however suitable for particular applications and require a lot of space and energy,” says Dr. Ilia Valov from Forschungszentrum Jülich. “Our nanowire devices made from zinc oxide crystals can inherently process and even store information, as well as being extremely small and energy efficient,” explains the researcher from Jülich’s Peter Grünberg Institute.

For years memristive cells have been ascribed the best chances of being capable of taking over the function of neurons and synapses in bioinspired computers. They alter their electrical resistance depending on the intensity and direction of the electric current flowing through them. In contrast to conventional transistors, their last resistance value remains intact even when the electric current is switched off. Memristors are thus fundamentally capable of learning.
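The behaviour described above — a resistance set by the history of the current, and retained when the power is off — is easy to sketch in a few lines of code. Here's a toy Python model (all constants invented for illustration; this is not the Jülich team's device model),

```python
# Toy memristor: resistance drifts with the charge driven through it
# and is retained when the current is switched off. All numbers are
# hypothetical, chosen only to make the effect visible.
R_ON, R_OFF = 100.0, 16_000.0   # bounding resistances in ohms
K = 5e7                          # mobility-like constant

class ToyMemristor:
    def __init__(self):
        self.x = 0.5  # internal state in [0, 1]

    def resistance(self):
        # Interpolate between the two bounding resistances.
        return self.x * R_ON + (1 - self.x) * R_OFF

    def step(self, current, dt):
        # The state moves with the sign and magnitude of the current,
        # so reversing the current reverses the resistance change.
        self.x = min(1.0, max(0.0, self.x + K * current * dt / (R_OFF - R_ON)))

mem = ToyMemristor()
r_before = mem.resistance()
for _ in range(100):              # drive a positive current: resistance falls
    mem.step(1e-3, 1e-3)
r_driven = mem.resistance()
for _ in range(100):              # current off: the last value is retained
    mem.step(0.0, 1e-3)
print(r_before > r_driven, mem.resistance() == r_driven)  # → True True
```

Unlike a transistor, nothing here needs power to hold the state: the "memory" lives in the internal variable x, which only moves while current flows — which is why memristors are said to be fundamentally capable of learning.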

In order to create these properties, scientists at Forschungszentrum Jülich and RWTH Aachen University used a single zinc oxide nanowire, produced by their colleagues from the polytechnic university in Turin. Measuring approximately one ten-thousandth of a millimeter in size, this type of nanowire is over a thousand times thinner than a human hair. The resulting memristive component not only takes up a tiny amount of space, but also is able to switch much faster than flash memory.

Nanowires offer promising novel physical properties compared to other solids and are used among other things in the development of new types of solar cells, sensors, batteries and computer chips. Their manufacture is comparatively simple. Nanowires result from the evaporation deposition of specified materials onto a suitable substrate, where they practically grow of their own accord.

In order to create a functioning cell, both ends of the nanowire must be attached to suitable metals, in this case platinum and silver. The metals function as electrodes, and in addition, release ions triggered by an appropriate electric current. The metal ions are able to spread over the surface of the wire and build a bridge to alter its conductivity.
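A crude way to picture that silver-bridge mechanism in code (a hedged sketch — the gap, growth rate, and conductance values below are invented for illustration, not taken from the paper),

```python
# Sketch of the electrochemical bridge mechanism: a positive voltage at
# the silver electrode releases Ag+ ions that build a filament toward
# the platinum electrode; reversing the voltage dissolves it again.
GAP = 1.0           # normalised electrode gap
GROWTH_RATE = 0.05  # filament growth per volt per step (hypothetical)

def apply_voltage(filament, voltage, steps=1):
    """Grow (voltage > 0) or dissolve (voltage < 0) the silver filament."""
    for _ in range(steps):
        filament = min(GAP, max(0.0, filament + GROWTH_RATE * voltage))
    return filament

def conductance(filament):
    # Conductance jumps sharply once the bridge spans the whole gap.
    return 1.0 if filament >= GAP else 0.01 * filament

f = 0.0
f = apply_voltage(f, +1.0, steps=30)   # SET: bridge forms, cell conducts
print(conductance(f))                   # → 1.0
f = apply_voltage(f, -1.0, steps=30)   # RESET: bridge dissolves
print(conductance(f))                   # → 0.0
```

The asymmetric electrodes matter: only the silver side readily dissolves into ions, so the polarity of the applied voltage decides whether the bridge grows or shrinks.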

Components made from single nanowires are, however, still too isolated to be of practical use in chips. Consequently, the next step being planned by the Jülich and Turin researchers is to produce and study a memristive element, composed of a larger, relatively easy to generate group of several hundred nanowires offering more exciting functionalities.

The Italians have also written about the work in a December 4, 2018 news item for the Politecnico di Torino’s inhouse magazine, PoliFlash. I like the image they’ve used better as it offers a bit more detail and looks less like a popsicle. First, the image,

Courtesy: Politecnico di Torino

Now, the news item, which includes some historical information about the memristor (Note: There is some repetition and links have been removed),

Emulating and understanding the human brain is one of the most important challenges for modern technology: on the one hand, the ability to artificially reproduce the processing of brain signals is one of the cornerstones for the development of artificial intelligence, while on the other the understanding of the cognitive processes at the base of the human mind is still far away.

And the research published in the prestigious journal Nature Communications by Gianluca Milano and Carlo Ricciardi, PhD student and professor, respectively, of the Applied Science and Technology Department of the Politecnico di Torino, represents a step forward in these directions. In fact, the study entitled “Self-limited single nanowire systems combining all-in-one memristive and neuromorphic functionalities” shows how it is possible to artificially emulate the activity of synapses, i.e. the connections between neurons that regulate the learning processes in our brain, in a single “nanowire” with a diameter thousands of times smaller than that of a hair.

It is a crystalline nanowire that takes the “memristor”, the electronic device able to artificially reproduce the functions of biological synapses, to a more performing level. Thanks to the use of nanotechnologies, which allow the manipulation of matter at the atomic level, it was for the first time possible to combine into one single device the synaptic functions that were individually emulated through specific devices. For this reason, the nanowire allows an extreme miniaturisation of the “memristor”, significantly reducing the complexity and energy consumption of the electronic circuits necessary for the implementation of learning algorithms.

Starting from the theorisation of the “memristor” in 1971 by Prof. Leon Chua – now visiting professor at the Politecnico di Torino, who was conferred an honorary degree by the University in 2015 – this new technology will not only allow smaller and more performing devices to be created for the implementation of increasingly “intelligent” computers, but is also a significant step forward for the emulation and understanding of the functioning of the brain.

“The nanowire memristor – said Carlo Ricciardi – represents a model system for the study of physical and electrochemical phenomena that govern biological synapses at the nanoscale. The work is the result of the collaboration between our research team and the RWTH University of Aachen in Germany, supported by INRiM, the National Institute of Metrological Research, and IIT, the Italian Institute of Technology.”

h/t for the Italian info to Nanowerk’s Dec. 10, 2018 news item.

Here’s a link to and a citation for the paper,

Self-limited single nanowire systems combining all-in-one memristive and neuromorphic functionalities by Gianluca Milano, Michael Luebben, Zheng Ma, Rafal Dunin-Borkowski, Luca Boarino, Candido F. Pirri, Rainer Waser, Carlo Ricciardi, & Ilia Valov. Nature Communications, volume 9, Article number: 5151 (2018) DOI: https://doi.org/10.1038/s41467-018-07330-7 Published: 04 December 2018

This paper is open access.

Just use the search term “memristor” in the blog search engine if you’re curious about the many postings on the topic here.

Extending memristive theory

This is kind of fascinating. A German research team based at JARA (Jülich Aachen Research Alliance) is suggesting that memristive theory be extended beyond passive components in their paper about resistive memory cells (ReRAM), which was recently published in Nature Communications. From the Apr. 26, 2013 news item on Azonano,

Resistive memory cells (ReRAM) are regarded as a promising solution for future generations of computer memories. They will dramatically reduce the energy consumption of modern IT systems while significantly increasing their performance.

Unlike the building blocks of conventional hard disk drives and memories, these novel memory cells are not purely passive components but must be regarded as tiny batteries. This has been demonstrated by researchers of Jülich Aachen Research Alliance (JARA), whose findings have now been published in the prestigious journal Nature Communications. The new finding radically revises the current theory and opens up possibilities for further applications. The research group has already filed a patent application for their first idea on how to improve data readout with the aid of battery voltage.

The Apr. 23, 2013 JARA news release, which originated the news item, provides some background information about data memory before going on to discuss the ReRAMs,

Conventional data memory works on the basis of electrons that are moved around and stored. However, even by atomic standards, electrons are extremely small. It is very difficult to control them, for example by means of relatively thick insulator walls, so that information will not be lost over time. This does not only limit storage density, it also costs a great deal of energy. For this reason, researchers are working feverishly all over the world on nanoelectronic components that make use of ions, i.e. charged atoms, for storing data. Ions are some thousands of times heavier than electrons and are therefore much easier to ‘hold down’. In this way, the individual storage elements can almost be reduced to atomic dimensions, which enormously improves the storage density.

Here’s how the ions behave in ReRAMs (from the news release),

In resistive switching memory cells (ReRAMs), ions behave on the nanometre scale in a similar manner to a battery. The cells have two electrodes, for example made of silver and platinum, at which the ions dissolve and then precipitate again. This changes the electrical resistance, which can be exploited for data storage. Furthermore, the reduction and oxidation processes also have another effect. They generate electric voltage. ReRAM cells are therefore not purely passive systems – they are also active electrochemical components. Consequently, they can be regarded as tiny batteries whose properties provide the key to the correct modelling and development of future data storage.

In complex experiments, the scientists from Forschungszentrum Jülich and RWTH Aachen University determined the battery voltage of typical representatives of ReRAM cells and compared them with theoretical values. This comparison revealed other properties (such as ionic resistance) that were previously neither known nor accessible. “Looking back, the presence of a battery voltage in ReRAMs is self-evident. But during the nine-month review process of the paper now published we had to do a lot of persuading, since the battery voltage in ReRAM cells can have three different basic causes, and the assignment of the correct cause is anything but trivial,” says Dr. Ilia Valov, the electrochemist in Prof. Rainer Waser’s research group.

This discovery could lead to optimizing ReRAMs and exploiting them in new applications (from the news release),

“The new findings will help to solve a central puzzle of international ReRAM research,” says Prof. Rainer Waser, deputy spokesman of the collaborative research centre SFB 917 ‘Nanoswitches’ established in 2011. In recent years, these puzzling aspects include unexplained long-term drift phenomena or systematic parameter deviations, which had been attributed to fabrication methods. “In the light of this new knowledge, it is possible to specifically optimize the design of the ReRAM cells, and it may be possible to discover new ways of exploiting the cells’ battery voltage for completely new applications, which were previously beyond the reach of technical possibilities,” adds Waser, whose group has been collaborating for years with companies such as Intel and Samsung Electronics in the field of ReRAM elements.

The part I found most interesting, given my interest in memristors, is this bit about extending the memristor theory, from the news release,

The new finding is of central significance, in particular, for the theoretical description of the memory components. To date, ReRAM cells have been described with the aid of the concept of memristors – a portmanteau word composed of “memory” and “resistor”. The theoretical concept of memristors can be traced back to Leon Chua in the 1970s. It was first applied to ReRAM cells by the IT company Hewlett-Packard in 2008. It aims at the permanent storage of information by changing the electrical resistance. The memristor theory leads to an important restriction. It is limited to passive components. “The demonstrated internal battery voltage of ReRAM elements clearly violates the mathematical construct of the memristor theory. This theory must be expanded to a whole new theory – to properly describe the ReRAM elements,” says Dr. Eike Linn, the specialist for circuit concepts in the group of authors. [emphases mine] This also places the development of all micro- and nanoelectronic chips on a completely new footing.
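The restriction Linn describes can be shown in a couple of lines. An ideal (passive) memristor obeys v = M·i, so at zero current the voltage is always zero — the famous pinched hysteresis loop passes through the origin. A ReRAM cell with an internal battery voltage adds an EMF term, and its open-circuit voltage is no longer zero. The numbers below are illustrative only, not measured values,

```python
# Passive memristor vs. ReRAM cell with an internal battery voltage.
# Both constants are hypothetical, chosen only to show the difference.
M = 1_000.0      # instantaneous memristance in ohms
V_EMF = 0.05     # internal battery voltage in volts

def v_ideal_memristor(i):
    return M * i            # passive: v(0) == 0, loop pinched at the origin

def v_reram_cell(i):
    return M * i + V_EMF    # active: nonzero voltage even at zero current

print(v_ideal_memristor(0.0))  # → 0.0, as memristor theory requires
print(v_reram_cell(0.0))       # → 0.05, a nonzero open-circuit voltage
```

That nonzero open-circuit voltage is exactly what the JARA team measured, and it is why the purely passive memristor framework has to be extended to describe real ReRAM elements.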

Here’s a link to and a citation for the paper,

Nanobatteries in redox-based resistive switches require extension of memristor theory by I. Valov, E. Linn, S. Tappertzhofen, S. Schmelzer, J. van den Hurk, F. Lentz, & R. Waser. Nature Communications 4, Article number: 1771 (2013) doi:10.1038/ncomms2784 Published 23 April 2013

This paper is open access (as of this writing).

Here’s a list of my 2013 postings on memristors and memristive devices,

2.5M Euros for Ireland’s John Boland and his memristive nanowires (Apr. 4, 2013 posting)

How to use a memristor to create an artificial brain (Feb. 26, 2013 posting)

CeNSE (Central Nervous System of the Earth) and billions of tiny sensors from HP plus a memristor update (Feb. 7, 2013 posting)

For anyone who cares to search the blog, there are several more.

Is a philosophy of the Higgs and other physics particles a good idea?

Michael Krämer of RWTH Aachen University (Germany) muses about philosophy, the Higgs Boson, and more in a Mar. 24, 2013 posting on Jon Butterworth’s Life and Physics blog (Guardian science blogs; Note: A link has been removed),

Many of the great physicists of the 20th century have appreciated the importance of philosophy for science. Einstein, for example, wrote in a letter in 1944:

    I fully agree with you about the significance and educational value of methodology as well as history and philosophy of science. So many people today—and even professional scientists—seem to me like somebody who has seen thousands of trees but has never seen a forest.

At the same time, physics has always played a vital role in shaping ideas in modern philosophy. It appears, however, that we are now faced with the ruins of this beautiful marriage between physics and philosophy. Stephen Hawking has claimed recently that philosophy is “dead” because philosophers have not kept up with science …

Krämer is part of an interdisciplinary (physics and philosophy) project at the LHC (Large Hadron Collider at CERN [European Particle Physics Laboratory]), The Epistemology of the Large Hadron Collider. From the project home page (Note: A link has been removed),

This research collaboration works at the crossroads of physics, philosophy of science, and contemporary history of science. It aims at an epistemological analysis of the recently launched new accelerator experiment at CERN, the Large Hadron Collider (LHC). Central themes are (i) the mechanisms of generating the masses of the particles of the standard model, especially the Higgs-mechanism and the Higgs-particle the LHC has set out to detect; (ii) the ongoing research process with special emphasis on the interaction between a large experiment and a community of theoreticians; and (iii) the implications of an experiment that is characterized by its enormous complexity and the need to be highly selective in data gathering. With the heading “Epistemology of the LHC” the research group intends both a philosophical analysis of the theoretical structures and of the conditions of knowledge production, among them the criteria of acceptance, and a real-time monitoring of the ongoing physical development from the perspective of the history of science. The research group has emerged from a collaboration between a High Energy Working group and the Interdisciplinary Centre for Science and Technology Studies and is based in Wuppertal but also involves external members and collaborators.

Krämer shares some of his ideas and the type of thinking generated when physicists and philosophers collide (I plead guilty to the word play; from Butterworth’s Guardian science blog),

… The relationship between experiment and theory (what impact does theoretical prejudice have on empirical findings?) or the role of models (how can we assess the uncertainty of a simplified representation of reality?) are scientific issues, but also issues from the foundation of philosophy of science. In that sense they are equally important for both fields, and philosophy may add a wider and critical perspective to the scientific discussion. And while not every particle physicist may be concerned with the ontological question of whether particles or fields are the more fundamental objects, our research practice is shaped by philosophical concepts. We do, for example, demand that a physical theory can be tested experimentally and thereby falsified, a criterion that has been emphasized by the philosopher Karl Popper already in 1934. The Higgs mechanism can be falsified, because it predicts how Higgs particles are produced and how they can be detected at the Large Hadron Collider.

On the other hand, some philosophers tell us that falsification is strictly speaking not possible: What if a Higgs property does not agree with the standard theory of particle physics? How do we know it is not influenced by some unknown and thus unaccounted factor, like a mysterious blonde walking past the LHC experiments and triggering the Higgs to decay? (This was an actual argument given in the meeting!)

The meeting Krämer is referring to is this one (from the meeting/conference website),

The first international conference and kick-off meeting of the German Society for Philosophy of Science/Gesellschaft für Wissenschaftsphilosophie (GWP) will take place from 11-14 March 2013 at the University of Hannover under the title:

How Much Philosophy in the Philosophy of Science?

Krämer then highlights some of the discussion that most interested him (Note: A link has been removed),

… It is very hard for a philosopher to keep up with scientific progress, and how could one integrate various fields without having fully appreciated the essential features of the individual sciences? As Margaret Morrison from the University of Toronto pointed out in her talk, if philosophy steps back too far from the individual sciences, the account becomes too general and isolated from scientific practice. On the other hand, if philosophy is too close to an individual science, it may not be philosophy any longer.

I think philosophy of science should not consider itself primarily as a service to science, but rather identify and answer questions within its own domain. I certainly would not be concerned if my own research went unnoticed by biologists, chemists, or philosophers, as long as it advances particle physics. On the other hand, as Morrison pointed out, science does generate its own philosophical problems, and philosophy may provide some kind of broader perspective for understanding those problems.

It’s well worth reading Krämer’s full post for anyone who’s interested in how physicists (or Krämer) think about the role that philosophy could play (or not) in the field of physics.

The reference to Margaret Morrison from the University of Toronto (U of T) reminded me of the Bubble Chamber blog which is written by U of T historians and philosophers of science. Here’s a July 10, 2012 posting by Mike Thicke about the Higgs Boson and his response to philosopher Wayne Myrvold’s (University of Western Ontario) explanation of the statistics claims being made about the particle at that time,

We can all agree that reasoning and decision making in science is complicated. Scientists reason in many different contexts: in the lab, in their published papers, as career-minded professionals, as interested consumers of science, and as people going about their lives. It’s plausible to think that they reason in different ways in all of these contexts. When we’re discussing their reasoning as scientists, I believe distinguishing between the first three contexts is especially important. While Wayne’s explanation of the statistics behind the Higgs Boson discovery is very interesting, informative, and as far as I can tell correct, I think there are some confusions arising from his failure to make these distinctions.

Thicke does advise reading Myrvold’s July 4, 2012 posting before tackling his riposte.