Tag Archives: Moscow Institute of Physics and Technology (MIPT)

Scientometrics and science typologies

Caption: As of 2013, there were 7.8 million researchers globally, according to UNESCO. This means that 0.1 percent of the people in the world professionally do science. Their work is largely financed by governments, yet public officials are not themselves researchers. To help governments make sense of the scientific community, Russian mathematicians have devised a researcher typology. The authors initially identified three clusters, which they tentatively labeled as “leaders,” “successors,” and “toilers.” Credit: Lion_on_helium/MIPT Press Office

A June 28, 2018 Moscow Institute of Physics and Technology (MIPT; Russia) press release (also on EurekAlert) announces some intriguing research,

Researchers in various fields, from psychology to economics, build models of human behavior and reasoning to categorize people. It is far less common, however, for scientists to undertake that kind of analysis to classify their own kind.

However, research evaluation, and therefore the stratification of scientists, remains highly relevant. Six years ago, the government set the objective that Russian scientists should have 50 percent more publications in Web of Science- and Scopus-indexed journals. As of 2011, papers by researchers from Russia accounted for 1.66 percent of publications globally. By 2015, this figure was supposed to reach 2.44 percent. It did grow, but the push has also sparked a discussion in the scientific community about the criteria used to evaluate research work.

The most common way of gauging the impact of a researcher is through his or her publications: whether they appear in prestigious journals and how many times they have been cited. As with any good idea, however, one runs the risk of overdoing it. In 2005, U.S. physicist Jorge Hirsch proposed the h-index, which takes into account both the number of publications by a given researcher and the number of times they have been cited. Scientists are now increasingly doubting the adequacy of bibliometric data as the sole criterion for evaluating research work. One obvious flaw of such metrics is that a paper can be frequently cited simply to point out a mistake in it.
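The h-index itself is easy to compute from a list of per-paper citation counts: it is the largest h such that at least h papers have each been cited at least h times. Here is a minimal Python sketch, my own illustration rather than anyone's official implementation:

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    at least h papers cited at least h times each."""
    cited = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cited, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers cited 10, 8, 5, 4 and 3 times give an h-index of 4.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```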

Scientists are increasingly under pressure to publish more often. Research that might reasonably have been published in one paper is being split up into stages for separate publication. This calls for new approaches to the evaluation of work done by research groups and individual authors. Attempts to systematize the existing methods in scientometrics and to stratify scientists are becoming more relevant, too. This is arguably even more important for Russia, where the reform of research has been dragging on for years.

One of the challenges in scientometrics is identifying the prominent types of researchers in different fields. A typology of scientists has been proposed by Moscow Institute of Physics and Technology Professor Pavel Chebotarev, who also heads the Laboratory of Mathematical Methods for Multiagent Systems Analysis at the Institute of Control Sciences of the Russian Academy of Sciences, and Ilya Vasilyev, a master’s student at MIPT.

In their paper, the two authors determined distinct types of scientists based on an indirect analysis of the style of research work, how papers are received by colleagues, and what impact they make. A further question addressed by the authors is to what degree researcher typology is affected by the scientific discipline.

“Each science has its own style of work. Publication strategies and citation practices vary, and leaders are distinguished in different ways,” says Chebotarev. “Even within a given discipline, things may be very different. This means that it is, unfortunately, not possible to have a universal system that would apply to anyone from a biologist to a philologist.”

“All of the reasonable systems that already exist are adjusted to particular disciplines,” he goes on. “They take into account the criteria used by the researchers themselves to judge who is who in their field. For example, scientists at the Institute for Nuclear Research of the Russian Academy of Sciences are divided into five groups based on what research they do, and they see a direct comparison of members of different groups as inadequate.”

The study was based on the citation data from the Google Scholar bibliographic database. To identify researcher types, the authors analyzed citation statistics for a large number of scientists, isolating and interpreting clusters of similar researchers.

Chebotarev and Vasilyev looked at the citation statistics for four groups of researchers returned by a Google Scholar search using the tags “Mathematics,” “Physics,” and “Psychology.” The first 515 and 556 search hits were considered in the case of physicists and psychologists, respectively. The authors studied two sets of mathematicians: the top 500 hits and hit Nos. 199-742. The four sets thus included frequently cited scientists from three disciplines indicating their general field of research in their profiles. Citation dynamics over each scientist’s career were examined using a range of indexes.
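The press release does not spell out which clustering algorithm was used, but the general workflow (reduce each researcher's citation record to a few numerical features, then group similar profiles) can be sketched in a few lines. The features, the use of k-means, and the choice of three clusters below are my assumptions for illustration, not the authors' method:

```python
# Sketch of clustering researchers by citation statistics (illustrative only).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical per-researcher features: career length (years),
# total citations, and citations gained over the last three years.
features = np.column_stack([
    rng.integers(3, 40, size=500),     # career length
    rng.lognormal(7, 1, size=500),     # total citations
    rng.lognormal(5, 1, size=500),     # recent citations
])

# Standardize the features, then look for three groups of similar profiles.
X = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Cluster sizes, analogous to the leader/successor/toiler split.
print(np.bincount(labels))
```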

The authors initially identified three clusters, which they tentatively labeled “leaders,” “successors,” and “toilers.” The leaders are experienced scientists who are widely recognized in their fields and whose research keeps their annual citation counts growing. The successors are young scientists who already have more citations than the toilers. The toilers earn their high citation metrics through years of steady work but lack illustrious scientific achievements.

Among the top 500 researchers indicating mathematics as their field of interest, toilers accounted for 52 percent, with successors and leaders making up 25.8 and 22.2 percent, respectively.

For physicists, the distribution was slightly different, with 48.5 percent of the set classified as toilers, 31.7 percent as successors, and 19.8 percent as leaders. That is, there were more successful young scientists, at the expense of leaders and toilers. This may be seen as a confirmation of the solitary nature of mathematical research, as compared with physics.

Finally, in the case of psychologists, toilers made up 47.7 percent of the set, with successors and leaders accounting for 18.3 and 34 percent, respectively. Comparing the distributions for the three disciplines investigated in the study, the authors conclude that there are more young achievers among those doing mathematical research.

A closer look enabled the authors to determine a more fine-grained cluster structure, which turned out to be remarkably similar for mathematicians and physicists. In particular, they identified a cluster of the youngest and most successful researchers, dubbed “precocious,” making up 4 percent of the mathematicians and 4.3 percent of the physicists in the set, along with the “youth” — successful researchers whose debuts were somewhat less dramatic: 29 and 31.7 percent of scientists doing math and physics research, respectively. Two further clusters were interpreted as recognized scientific authorities, or “luminaries,” and experienced researchers who have not seen an appreciable growth in the number of citations recently. Luminaries and the so-called inertia accounted for 52 and 15 percent of mathematicians and 50 and 14 percent of physicists, respectively.

There is an alternative way of clustering physicists, which recognizes a segment of researchers who “caught the wave.” The authors suggest this might happen after joining major international research groups.

Among psychologists, 18.3 percent have been classified as precocious, though not as young as the physicists and mathematicians in the corresponding group. The most experienced and respected psychology researchers account for 22.5 percent, but there is no subdivision into luminaries and inertia, because those actively cited generally continue to be. Relatively young psychologists make up 59.2 percent of the set. The borders between clusters are relatively blurred in the case of psychology, which might be a feature of the humanities, according to the authors.

“Our pilot study showed even more similarity than we’d expected in how mathematicians and physicists are clustered,” says Chebotarev. “Whereas with psychology, things are noticeably different, yet the breakdown is slightly closer to math than physics. Perhaps, there is a certain connection between psychology and math after all, as some people say.”

“The next stage of this research features more disciplines. Hopefully, we will be ready to present the new results soon,” he concludes.

I think they are attempting to create a new way of measuring scientific progress (scientometrics): a more representative means of measuring individual contributions, based on their analysis of how these ‘typologies’ are expressed across various disciplines.

For anyone who wants to investigate further, you will need to be able to read Russian. You can download the paper from MathNet.ru.

Here’s my best attempt at a citation for the paper,

Making a typology of scientists on the basis of bibliometric data by I. Vasilyev, P. Yu. Chebotarev. Large-scale System Control (UBS), 2018, Issue 72, Pages 138–195 (Mi ubs948)

I’m glad to see this as there is a fair degree of dissatisfaction about the current measures for scientific progress used in any number of reports on the topic. As far as I can tell, this dissatisfaction is felt internationally.

A computer that intuitively predicts a molecule’s chemical properties

First, we had emotional artificial intelligence from MIT (Massachusetts Institute of Technology) with their Kismet [emotive AI] project, and now we have intuitive computers, according to an Oct. 14, 2016 news item on Nanowerk,

Scientists from Moscow Institute of Physics and Technology (MIPT)’s Research Center for Molecular Mechanisms of Aging and Age-Related Diseases together with Inria research center, Grenoble, France have developed a software package called Knodle to determine an atom’s hybridization, bond orders and functional groups’ annotation in molecules. The program streamlines one of the stages of developing new drugs.

An Oct. 14, 2016 Moscow Institute of Physics and Technology press release (also on EurekAlert), which originated the news item, expands on the theme,

Imagine that you were to develop a new drug. Designing a drug with predetermined properties is called drug-design. Once a drug has entered the human body, it needs to take effect on the cause of a disease. On a molecular level this is a malfunction of some proteins and their encoding genes. In drug-design these are called targets. If a drug is antiviral, it must somehow prevent the incorporation of viral DNA into human DNA. In this case the target is viral protein. The structure of the incorporating protein is known, and we also even know which area is the most important – the active site. If we insert a molecular “plug” then the viral protein will not be able to incorporate itself into the human genome and the virus will die. It boils down to this: you find the “plug” – you have your drug.

But how can we find the molecules required? Researchers use an enormous database of substances for this. There are special programs capable of finding a needle in a haystack; they use quantum chemistry approximations to predict the place and force of attraction between a molecular “plug” and a protein. However, databases only store the shape of a substance; information about atom and bond states is also needed for an accurate prediction. Determining these states is what Knodle does. With the help of the new technology, the search area can be reduced from hundreds of thousands of candidates to just a hundred. These one hundred can then be tested to find drugs such as Raltegravir, which has been in active use against HIV since 2011.

From science lessons at school everyone is used to seeing organic substances as letters with sticks (substance structure), knowing that in actual fact there are no sticks. Every stick is a bond between electrons which obeys the laws of quantum chemistry. In the case of one simple molecule, like the one in the diagram [diagram follows], the experienced chemist intuitively knows the hybridizations of every atom (the number of neighboring atoms which it is connected to) and after a few hours looking at reference books, he or she can reestablish all the bonds. They can do this because they have seen hundreds and hundreds of similar substances and know that if oxygen is “sticking out like this”, it almost certainly has a double bond. In their research, Maria Kadukova, a MIPT student, and Sergei Grudinin, a researcher from Inria research center located in Grenoble, France, decided to pass on this intuition to a computer by using machine learning.

Compare “A solid hollow object with a handle, opening at the top and an elongation at the side, at the end of which there is another opening” and “A vessel for the preparation of tea”. Both of them describe a teapot rather well, but the latter is simpler and more believable. The same is true for machine learning: the best algorithm for learning is the simplest. This is why the researchers chose to use a nonlinear support vector machine (SVM), a method which has proven itself in recognizing handwritten text and images. It was given the positions of neighboring atoms as input and produced hybridization states as output.

Good learning needs a lot of examples, and the scientists trained the program on 7,605 substances with known structures and atom states. “This is the key advantage of the program we have developed, learning from a larger database gives better predictions. Knodle is now one step ahead of similar programs: it has a margin of error of 3.9%, while for the closest competitor this figure is 4.7%”, explains Maria Kadukova. And that is not the only benefit. The software package can easily be modified for a specific problem. For example, Knodle does not currently work with substances containing metals, because those kinds of substances are rather rare. But if it turns out that a drug for Alzheimer’s is much more effective if it contains a metal, the only thing needed to adapt the program is a database of metallic substances. We are now left to wonder what new drug will be found to treat a previously incurable disease.
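Knodle itself is a purpose-built package, but the basic idea described above, feeding a nonlinear SVM a fixed-length descriptor of an atom's surroundings and letting it predict a hybridization class, can be sketched as follows. The descriptors, labels, and data in this example are synthetic placeholders of my own, not Knodle's actual features:

```python
# Sketch of hybridization prediction with a nonlinear SVM (illustrative only).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Hypothetical per-atom descriptors, e.g. number of neighbours plus mean and
# minimum neighbour distance. Labels 1, 2, 3 stand for sp, sp2, sp3.
X = rng.normal(size=(2000, 3))
y = rng.integers(1, 4, size=2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An RBF-kernel SVM, the kind of nonlinear classifier the release mentions.
clf = SVC(kernel="rbf", C=10, gamma="scale").fit(X_train, y_train)

# With real structural data the reported error was about 3.9 percent; on the
# random placeholders above the score will naturally sit near chance.
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```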

Scientists from MIPT’s Research Center for Molecular Mechanisms of Aging and Age-Related Diseases together with Inria research center, Grenoble, France have developed a software package called Knodle to determine an atom’s hybridization, bond orders and functional groups’ annotation in molecules. The program streamlines one of the stages of developing new drugs. Credit: MIPT Press Office

Here’s a link to and a citation for the paper,

Knodle: A Support Vector Machines-Based Automatic Perception of Organic Molecules from 3D Coordinates by Maria Kadukova and Sergei Grudinin. J. Chem. Inf. Model., 2016, 56 (8), pp 1410–1419 DOI: 10.1021/acs.jcim.5b00512 Publication Date (Web): July 13, 2016

Copyright © 2016 American Chemical Society

This paper is behind a paywall.

Deriving graphene-like films from salt

This research comes from Russia (mostly). A July 29, 2016 news item on ScienceDaily describes a graphene-like structure derived from salt,

Researchers from Moscow Institute of Physics and Technology (MIPT), Skolkovo Institute of Science and Technology (Skoltech), the Technological Institute for Superhard and Novel Carbon Materials (TISNCM), the National University of Science and Technology MISiS (Russia), and Rice University (USA) used computer simulations to find how thin a slab of salt has to be in order for it to break up into graphene-like layers. Based on the computer simulation, they derived the equation for the number of layers in a crystal that will produce ultrathin films with applications in nanoelectronics. …

Caption: Transition from a cubic arrangement into several hexagonal layers. Credit: authors of the study

A July 29, 2016 Moscow Institute of Physics and Technology press release on EurekAlert, which originated the news item,  provides more technical detail,

From 3D to 2D

The unique monoatomic thickness of graphene makes it an attractive and useful material. Its crystal lattice resembles a honeycomb, as the bonds between the constituent atoms form regular hexagons. Graphene is a single layer of a three-dimensional graphite crystal, and its properties (as well as the properties of any 2D crystal) are radically different from those of its 3D counterpart. Since the discovery of graphene, a large amount of research has been directed at new two-dimensional materials with intriguing properties. Ultrathin films have unusual properties that might be useful for applications such as nano- and microelectronics.

Previous theoretical studies suggested that films with a cubic structure and ionic bonding could spontaneously convert to a layered hexagonal graphitic structure in what is known as graphitisation. For some substances, this conversion has been experimentally observed. It was predicted that rock salt NaCl can be one of the compounds with graphitisation tendencies. Graphitisation of cubic compounds could produce new and promising structures for applications in nanoelectronics. However, no theory has been developed that would account for this process in the case of an arbitrary cubic compound and make predictions about its conversion into graphene-like salt layers.

For graphitisation to occur, the crystal layers need to be reduced along the main diagonal of the cubic structure. This will result in one crystal surface being made of sodium ions Na+ and the other of chloride ions Cl-. It is important to note that positive and negative ions (i.e. Na+ and Cl-), and not neutral atoms, occupy the lattice points of the structure. This generates charges of opposite signs on the two surfaces. As long as the surfaces are remote from each other, all charges cancel out, and the salt slab shows a preference for a cubic structure. However, if the film is made sufficiently thin, this gives rise to a large dipole moment due to the opposite charges of the two crystal surfaces. The dipole moment increases the energy of the system, so the structure seeks to get rid of it. To make the surfaces charge-neutral, the crystal undergoes a rearrangement of atoms.

Experiment vs model

To study how graphitisation tendencies vary depending on the compound, the researchers examined 16 binary compounds with the general formula AB, where A stands for one of the four alkali metals lithium Li, sodium Na, potassium K, and rubidium Rb. These are highly reactive elements found in Group 1 of the periodic table. The B in the formula stands for any of the four halogens fluorine F, chlorine Cl, bromine Br, and iodine I. These elements are in Group 17 of the periodic table and readily react with alkali metals.

All compounds in this study come in a number of different structures, also known as crystal lattices or phases. If atmospheric pressure is increased to 300,000 times its normal value, another phase (B2) of NaCl (represented by the yellow portion of the diagram) becomes more stable, effecting a change in the crystal lattice. To test their choice of methods and parameters, the researchers simulated two crystal lattices and calculated the pressure that corresponds to the phase transition between them. Their predictions agree with experimental data.

Just how thin should it be?

The compounds within the scope of this study can all have a hexagonal, “graphitic”, G phase (the red in the diagram) that is unstable in 3D bulk but becomes the most stable structure for ultrathin (2D or quasi-2D) films. The researchers identified the relationship between the surface energy of a film and the number of layers in it for both cubic and hexagonal structures. They graphed this relationship by plotting two lines with different slopes for each of the compounds studied. Each pair of lines associated with one compound has a common point that corresponds to the critical slab thickness that makes conversion from a cubic to a hexagonal structure energetically favourable. For example, the critical number of layers was found to be close to 11 for all sodium salts and between 19 and 27 for lithium salts.

Based on this data, the researchers established a relationship between the critical number of layers and two parameters that determine the strength of the ionic bonds in various compounds. The first parameter indicates the size of an ion of a given metal: its ionic radius. The second parameter is electronegativity, a measure of the halogen atom B's ability to attract the electrons of the metal A. Higher electronegativity means more powerful attraction of electrons by the atom, a more pronounced ionic nature of the bond, a larger surface dipole, and a lower critical slab thickness.
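The geometry of the argument is simple: if the surface energy of a slab grows roughly linearly with the number of layers n for both phases, say E_cubic(n) = a1·n + b1 and E_hex(n) = a2·n + b2, the critical thickness is just where the two lines cross. The coefficients in the toy calculation below are invented, chosen only so the crossing lands near the 11 layers quoted for the sodium salts:

```python
# Sketch of the critical-thickness estimate from two linear energy relations.
def critical_layers(a_cubic, b_cubic, a_hex, b_hex):
    """Number of layers at which E_cubic(n) = E_hex(n)."""
    return (b_hex - b_cubic) / (a_cubic - a_hex)

# Illustrative coefficients (arbitrary units), not values from the paper.
print(critical_layers(a_cubic=1.00, b_cubic=2.0, a_hex=0.80, b_hex=4.2))  # ~11
```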

And there’s more

Pavel Sorokin, Dr. habil., [sic] is head of the Laboratory of New Materials Simulation at TISNCM. He explains the importance of the study, ‘This work has already attracted our colleagues from Israel and Japan. If they confirm our findings experimentally, this phenomenon [of graphitisation] will provide a viable route to the synthesis of ultrathin films with potential applications in nanoelectronics.’

The scientists intend to broaden the scope of their studies by examining other compounds. They believe that ultrathin films of different composition might also undergo spontaneous graphitisation, yielding new layered structures with properties that are even more intriguing.

Here’s a link to and a citation for the paper,

Ionic Graphitization of Ultrathin Films of Ionic Compounds by A. G. Kvashnin, E. Y. Pashkin, B. I. Yakobson, and P. B. Sorokin. J. Phys. Chem. Lett., 2016, 7 (14), pp 2659–2663 DOI: 10.1021/acs.jpclett.6b01214 Publication Date (Web): June 23, 2016

Copyright © 2016 American Chemical Society

This paper is behind a paywall.

Memristor-based electronic synapses for neural networks

Caption: Neuron connections in biological neural networks. Credit: MIPT press office

Russian scientists have recently published a paper about neural networks and electronic synapses based on ‘thin film’ memristors according to an April 19, 2016 news item on Nanowerk,

A team of scientists from the Moscow Institute of Physics and Technology (MIPT) have created prototypes of “electronic synapses” based on ultra-thin films of hafnium oxide (HfO2). These prototypes could potentially be used in fundamentally new computing systems.

An April 20, 2016 MIPT press release (also on EurekAlert), which originated the news item (the date inconsistency likely due to timezone differences) explains the connection between thin films and memristors,

The group of researchers from MIPT have made HfO2-based memristors measuring just 40×40 nm². The nanostructures they built exhibit properties similar to biological synapses. Using newly developed technology, the memristors were integrated in matrices: in the future this technology may be used to design computers that function similarly to biological neural networks.

Memristors (resistors with memory) are devices that are able to change their state (conductivity) depending on the charge passing through them, and they therefore have a memory of their “history”. In this study, the scientists used devices based on thin-film hafnium oxide, a material that is already used in the production of modern processors. This means that this new lab technology could, if required, easily be used in industrial processes.

“In a simpler version, memristors are promising binary non-volatile memory cells, in which information is written by switching the electric resistance – from high to low and back again. What we are trying to demonstrate are much more complex functions of memristors – that they behave similar to biological synapses,” said Yury Matveyev, the corresponding author of the paper, and senior researcher of MIPT’s Laboratory of Functional Materials and Devices for Nanoelectronics, commenting on the study.

The press release offers a description of biological synapses and their relationship to learning and memory,

A synapse is a point of connection between neurons, the main function of which is to transmit a signal (a spike – a particular type of signal, see fig. 2) from one neuron to another. Each neuron may have thousands of synapses, i.e. connect with a large number of other neurons. This means that information can be processed in parallel, rather than sequentially (as in modern computers). This is the reason why “living” neural networks are so immensely effective both in terms of speed and energy consumption in solving a large range of tasks, such as image / voice recognition, etc.

Over time, synapses may change their “weight”, i.e. their ability to transmit a signal. This property is believed to be the key to understanding the learning and memory functions of the brain.

From the physical point of view, synaptic “memory” and “learning” in the brain can be interpreted as follows: the neural connection possesses a certain “conductivity”, which is determined by the previous “history” of signals that have passed through the connection. If a synapse transmits a signal from one neuron to another, we can say that it has high “conductivity”, and if it does not, we say it has low “conductivity”. However, synapses do not simply function in on/off mode; they can have any intermediate “weight” (intermediate conductivity value). Accordingly, if we want to simulate them using certain devices, these devices will also have to have analogous characteristics.

The researchers have provided an illustration of a biological synapse,

Fig.2 The type of electrical signal transmitted by neurons (a “spike”). The red lines are various other biological signals, the black line is the averaged signal. Source: MIPT press office

Now, the press release ties the memristor information together with the biological synapse information to describe the new work at the MIPT,

As in a biological synapse, the value of the electrical conductivity of a memristor is the result of its previous “life” – from the moment it was made.

There are a number of physical effects that can be exploited to design memristors. In this study, the authors used devices based on ultrathin-film hafnium oxide, which exhibit the effect of soft (reversible) electrical breakdown under an applied external electric field. Most often, these devices use only two different states encoding logic zero and one. However, in order to simulate biological synapses, a continuous spectrum of conductivities had to be used in the devices.

“The detailed physical mechanism behind the function of the memristors in question is still debated. However, the qualitative model is as follows: in the metal–ultrathin oxide–metal structure, charged point defects, such as vacancies of oxygen atoms, are formed and move around in the oxide layer when exposed to an electric field. It is these defects that are responsible for the reversible change in the conductivity of the oxide layer,” says the co-author of the paper and researcher of MIPT’s Laboratory of Functional Materials and Devices for Nanoelectronics, Sergey Zakharchenko.

The authors used the newly developed “analogue” memristors to model various learning mechanisms (“plasticity”) of biological synapses. In particular, this involved functions such as long-term potentiation (LTP) or long-term depression (LTD) of a connection between two neurons. It is generally accepted that these functions are the underlying mechanisms of memory in the brain.

The authors also succeeded in demonstrating a more complex mechanism – spike-timing-dependent plasticity, i.e. the dependence of the value of the connection between neurons on the relative time taken for them to be “triggered”. It had previously been shown that this mechanism is responsible for associative learning – the ability of the brain to find connections between different events.

To demonstrate this function in their memristor devices, the authors purposefully used an electric signal which reproduced, as far as possible, the signals in living neurons, and they obtained a dependency very similar to those observed in living synapses (see fig. 3).
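For readers unfamiliar with spike-timing-dependent plasticity, the rule can be summarized in a few lines: the change in synaptic weight depends on the sign and size of the delay between the presynaptic and postsynaptic spikes. The exponential form and constants below are the textbook pairwise model, offered as an illustration rather than the device data from the paper:

```python
# Sketch of a pairwise STDP rule (textbook form, illustrative constants).
import math

def stdp_delta_w(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Weight change for a spike-time difference dt = t_post - t_pre (ms)."""
    if dt_ms > 0:       # pre fires before post: potentiation (LTP)
        return a_plus * math.exp(-dt_ms / tau_ms)
    if dt_ms < 0:       # post fires before pre: depression (LTD)
        return -a_minus * math.exp(dt_ms / tau_ms)
    return 0.0

# Small delays produce large weight changes; large delays barely matter.
for dt in (-40, -10, 10, 40):
    print(dt, round(stdp_delta_w(dt), 5))
```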

Fig.3. The change in conductivity of memristors depending on the temporal separation between “spikes” (right) and the change in potential of the neuron connections in biological neural networks. Source: MIPT press office

These results allowed the authors to confirm that the elements that they had developed could be considered a prototype of the “electronic synapse”, which could be used as a basis for the hardware implementation of artificial neural networks.

“We have created a baseline matrix of nanoscale memristors demonstrating the properties of biological synapses. Thanks to this research, we are now one step closer to building an artificial neural network. It may only be the very simplest of networks, but it is nevertheless a hardware prototype,” said the head of MIPT’s Laboratory of Functional Materials and Devices for Nanoelectronics, Andrey Zenkevich.

Here’s a link to and a citation for the paper,

Crossbar Nanoscale HfO2-Based Electronic Synapses by Yury Matveyev, Roman Kirtaev, Alena Fetisova, Sergey Zakharchenko, Dmitry Negrov and Andrey Zenkevich. Nanoscale Research Letters 2016, 11:147. DOI: 10.1186/s11671-016-1360-6

Published: 15 March 2016

This is an open access paper.

Plastic memristors for neural networks

There is a very nice explanation of memristors and computing systems from the Moscow Institute of Physics and Technology (MIPT). First their announcement, from a Jan. 27, 2016 news item on ScienceDaily,

A group of scientists has created a neural network based on polymeric memristors — devices that can potentially be used to build fundamentally new computers. These developments will primarily help in creating technologies for machine vision, hearing, and other machine sensory systems, and also for intelligent control systems in various fields of applications, including autonomous robots.

The authors of the new study focused on a promising area in the field of memristive neural networks – polymer-based memristors – and discovered that creating even the simplest perceptron is not that easy. In fact, it is so difficult that up until the publication of their paper in the journal Organic Electronics, there were no reports of any successful experiments (using organic materials). The experiments conducted at the Nano-, Bio-, Information and Cognitive Sciences and Technologies (NBIC) centre at the Kurchatov Institute by a joint team of Russian and Italian scientists demonstrated that it is possible to create very simple polyaniline-based neural networks. Furthermore, these networks are able to learn and perform specified logical operations.

A Jan. 27, 2016 MIPT press release on EurekAlert, which originated the news item, offers an explanation of memristors and a description of the research,

A memristor is an electric element similar to a conventional resistor. The difference between a memristor and a traditional element is that the electric resistance in a memristor is dependent on the charge passing through it, therefore it constantly changes its properties under the influence of an external signal: a memristor has a memory and at the same time is also able to change data encoded by its resistance state! In this sense, a memristor is similar to a synapse – a connection between two neurons in the brain that is able, with a high level of plasticity, to modify the efficiency of signal transmission between neurons under the influence of the transmission itself. A memristor enables scientists to build a “true” neural network, and the physical properties of memristors mean that at the very minimum they can be made as small as conventional chips.

Some estimates indicate that the size of a memristor can be reduced to as little as ten nanometers, and the technologies used in the manufacture of the experimental prototypes could, in theory, be scaled up to the level of mass production. However, as this is “in theory”, it does not mean that chips of a fundamentally new structure with neural networks will be available on the market any time soon, even in the next five years.

The plastic polyaniline was not chosen by chance. Previous studies demonstrated that it can be used to create individual memristors, so the scientists did not have to go through many different materials. Using a polyaniline solution, a glass substrate, and chromium electrodes, they created a prototype with dimensions that, at present, are much larger than those typically used in conventional microelectronics: the strip of the structure was approximately one millimeter wide (they decided to avoid miniaturization for the moment). All of the memristors were tested for their electrical characteristics: it was found that the current-voltage characteristic of the devices is in fact non-linear, which is in line with expectations. The memristors were then connected to a single neuromorphic network.

A current-voltage characteristic (or IV curve) is a graph where the horizontal axis represents voltage and the vertical axis the current. In conventional resistance, the IV curve is a straight line; in strict accordance with Ohm’s Law, current is proportional to voltage. For a memristor, however, it is not just the voltage that is important, but the change in voltage: if you begin to gradually increase the voltage supplied to the memristor, it will increase the current passing through it not in a linear fashion, but with a sharp bend in the graph and at a certain point its resistance will fall sharply.

Then if you begin to reduce the voltage, the memristor will remain in its conducting state for some time, after which it will change its properties rather sharply again to decrease its conductivity. Experimental samples with a voltage increase of 0.5V hardly allowed any current to pass through (around a few tenths of a microamp), but when the voltage was reduced by the same amount, the ammeter registered a figure of 5 microamps. Microamps are of course very small units, but in this case it is the contrast that is most significant: 0.1 μA to 5 μA is a difference of fifty times! This is more than enough to make a clear distinction between the two signals.
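A crude way to picture this hysteresis in code is a conductance that jumps to a high state once the sweep crosses a “set” threshold and drops back only after the voltage passes a “reset” threshold, so the same voltage reads very different currents on the way up and on the way down. The thresholds and conductances in this sketch are invented to echo the numbers above, not measured device values:

```python
# Sketch of threshold-switching hysteresis in a memristor I-V sweep
# (illustrative parameters only, not the polyaniline device physics).
def sweep_current(voltages, g_off=2e-7, g_on=1e-5, v_set=0.5, v_reset=-0.5):
    """Return currents (A) for a voltage sweep with threshold switching."""
    g = g_off
    currents = []
    for v in voltages:
        if v >= v_set:        # switch to the conducting state
            g = g_on
        elif v <= v_reset:    # switch back to the resistive state
            g = g_off
        currents.append(g * v)
    return currents

# Rising then falling sweep: ~0.06 uA at +0.3 V on the way up,
# but ~3 uA at the same voltage on the way down.
up_down = [0.0, 0.3, 0.6, 0.3, 0.0, -0.3, -0.6, -0.3, 0.0]
print([f"{i * 1e6:.2f} uA" for i in sweep_current(up_down)])
```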

After checking the basic properties of individual memristors, the physicists conducted experiments to train the neural network. The training (it is a generally accepted term and is therefore written without inverted commas) involves applying electric pulses at random to the inputs of a perceptron. If a certain combination of electric pulses is applied to the inputs of a perceptron (e.g. a logic one and a logic zero at two inputs) and the perceptron gives the wrong answer, a special correcting pulse is applied to it, and after a certain number of repetitions all the internal parameters of the device (namely memristive resistance) reconfigure themselves, i.e. they are “trained” to give the correct answer.

The scientists demonstrated that after about a dozen attempts their new memristive network is capable of performing NAND logical operations, and then it is also able to learn to perform NOR operations. Since it is an operator or a conventional computer that is used to check for the correct answer, this method is called the supervised learning method.
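In software, the supervised procedure described above is essentially the classic perceptron learning rule. The sketch below trains a two-input perceptron on the NAND truth table with corrective updates standing in for the correcting pulses applied to the memristive hardware; it illustrates the principle, not the authors' experimental setup:

```python
# Sketch of supervised training of a single perceptron on NAND
# (software analogue of the corrective-pulse procedure, illustrative only).
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_nand = np.array([1, 1, 1, 0])

w = np.zeros(2)   # synaptic weights (memristive conductances in the device)
b = 0.0           # bias
lr = 0.1          # size of the corrective adjustment

for epoch in range(20):                       # "about a dozen attempts"
    for xi, target in zip(X, y_nand):
        out = 1 if xi @ w + b > 0 else 0
        error = target - out
        w = w + lr * error * xi               # apply the correction
        b = b + lr * error

print([1 if xi @ w + b > 0 else 0 for xi in X])  # -> [1, 1, 1, 0]
```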

Needless to say, an elementary perceptron of macroscopic dimensions with a characteristic reaction time of tenths or hundredths of a second is not an element that is ready for commercial production. However, as the researchers themselves note, their creation was made using inexpensive materials, and the reaction time will decrease as the size decreases: the first prototype was intentionally enlarged to make the work easier; it is physically possible to manufacture more compact chips. In addition, polyaniline can be used in attempts to make a three-dimensional structure by placing the memristors on top of one another in a multi-tiered structure (e.g. in the form of random intersections of thin polymer fibers), whereas modern silicon microelectronic systems, due to a number of technological limitations, are two-dimensional. The transition to the third dimension would potentially offer many new opportunities.

The press release goes on to explain what the researchers mean when they mention a fundamentally different computer,

The common classification of computers is based either on their casing (desktop/laptop/tablet), or on the type of operating system used (Windows/MacOS/Linux). However, this is only a very simple classification from a user perspective, whereas specialists normally use an entirely different approach – an approach that is based on the principle of organizing computer operations. The computers that we are used to, whether they be tablets, desktop computers, or even on-board computers on spacecraft, are all devices with von Neumann architecture; without going into too much detail, they are devices based on independent processors, random access memory (RAM), and read only memory (ROM).

The memory stores the code of a program that is to be executed. A program is a set of instructions that command certain operations to be performed with data. Data are also stored in the memory* and are retrieved from it (and also written to it) in accordance with the program; the program’s instructions are performed by the processor. There may be several processors, they can work in parallel, data can be stored in a variety of ways – but there is always a fundamental division between the processor and the memory. Even if the computer is integrated into one single chip, it will still have separate elements for processing information and separate units for storing data. At present, all modern microelectronic systems are based on this particular principle and this is partly the reason why most people are not even aware that there may be other types of computer systems – without processors and memory.

*) if physically different elements are used to store data and store a program, the computer is said to be built using Harvard architecture. This method is used in certain microcontrollers, and in small specialized computing devices. The chip that controls the function of a refrigerator, lift, or car engine (in all these cases a “conventional” computer would be redundant) is a microcontroller. However, neither Harvard, nor von Neumann architectures allow the processing and storage of information to be combined into a single element of a computer system.

However, such systems do exist. Furthermore, if you look at the brain itself as a computer system (this is purely hypothetical at the moment: it is not yet known whether the function of the brain is reducible to computations), then you will see that it is not at all built like a computer with von Neumann architecture. Neural networks do not have a specialized computer or separate memory cells. Information is stored and processed in each and every neuron, one element of the computer system, and the human brain has approximately 100 billion of these elements. In addition, almost all of them are able to work in parallel (simultaneously), which is why the brain is able to process information with great efficiency and at such high speed. Artificial neural networks that are currently implemented on von Neumann computers only emulate these processes: emulation, i.e. step by step imitation of functions inevitably leads to a decrease in speed and an increase in energy consumption. In many cases this is not so critical, but in certain cases it can be.

Devices that do not simply imitate the function of neural networks, but are fundamentally the same could be used for a variety of tasks. Most importantly, neural networks are capable of pattern recognition; they are used as a basis for recognising handwritten text for example, or signature verification. When a certain pattern needs to be recognised and classified, such as a sound, an image, or characteristic changes on a graph, neural networks are actively used and it is in these fields where gaining an advantage in terms of speed and energy consumption is critical. In a control system for an autonomous flying robot every milliwatt-hour and every millisecond counts, just in the same way that a real-time system to process data from a collider detector cannot take too long to “think” about highlighting particle tracks that may be of interest to scientists from among a large number of other recorded events.

Bravo to the writer!

Here’s a link to and a citation for the paper,

Hardware elementary perceptron based on polyaniline memristive devices by V.A. Demin, V.V. Erokhin, A.V. Emelyanov, S. Battistoni, G. Baldi, S. Iannotta, P.K. Kashkarov, M.V. Kovalchuk. Organic Electronics, Volume 25, October 2015, Pages 16–20. doi:10.1016/j.orgel.2015.06.015

This paper is behind a paywall.