Tag Archives: memristors

Mimicking the brain with an evolvable organic electrochemical transistor

Simone Fabiano and Jennifer Gerasimov have developed a learning transistor that mimics the way synapses function. Credit: Thor Balkhed

At a guess, this was originally a photograph which has been passed through some sort of programme to give it a painting-like quality.

Moving on to the research, I don’t see any reference to memristors (another of the ‘devices’ that mimics the human brain), so perhaps this is an entirely different way to mimic human brains? A February 5, 2019 news item on ScienceDaily announces the work from Linkoping University (Sweden),

A new transistor based on organic materials has been developed by scientists at Linköping University. It has the ability to learn, and is equipped with both short-term and long-term memory. The work is a major step on the way to creating technology that mimics the human brain.

A February 5, 2019 Linkoping University press release (also on EurekAlert), which originated the news item, describes this ‘non-memristor’ research into brainlike computing in more detail,

Until now, brains have been unique in being able to create connections where there were none before. In a scientific article in Advanced Science, researchers from Linköping University describe a transistor that can create a new connection between an input and an output. They have incorporated the transistor into an electronic circuit that learns how to link a certain stimulus with an output signal, in the same way that a dog learns that the sound of a food bowl being prepared means that dinner is on the way.

A normal transistor acts as a valve that amplifies or dampens the output signal, depending on the characteristics of the input signal. In the organic electrochemical transistor that the researchers have developed, the channel in the transistor consists of an electropolymerised conducting polymer. The channel can be formed, grown or shrunk, or completely eliminated during operation. It can also be trained to react to a certain stimulus, a certain input signal, such that the transistor channel becomes more conductive and the output signal larger.

“It is the first time that real time formation of new electronic components is shown in neuromorphic devices”, says Simone Fabiano, principal investigator in organic nanoelectronics at the Laboratory of Organic Electronics, Campus Norrköping.

The channel is grown by increasing the degree of polymerisation of the material in the transistor channel, thereby increasing the number of polymer chains that conduct the signal. Alternatively, the material may be overoxidised (by applying a high voltage) and the channel becomes inactive. Temporary changes of the conductivity can also be achieved by doping or dedoping the material.

“We have shown that we can induce both short-term and permanent changes to how the transistor processes information, which is vital if one wants to mimic the ways that brain cells communicate with each other”, says Jennifer Gerasimov, postdoc in organic nanoelectronics and one of the authors of the article.

By changing the input signal, the strength of the transistor response can be modulated across a wide range, and connections can be created where none previously existed. This gives the transistor a behaviour that is comparable with that of the synapse, or the communication interface between two brain cells.
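To make the synapse analogy concrete, here is a minimal toy model of the device described above. Everything here is my own illustrative assumption (the gains, the decay constant and the exponential-decay form are not from the paper): each gate pulse adds a volatile short-term component to the synaptic weight, standing in for doping/dedoping, plus a small permanent long-term component, standing in for electropolymerization of the channel.

```python
import math

# Toy two-component synaptic weight: a volatile part that decays and a
# permanent part that accumulates. Hypothetical parameters, not the
# paper's device physics.
TAU = 5.0  # short-term decay time constant (arbitrary units)

def stimulate(state, n_pulses, dt=1.0, st_gain=0.1, lt_gain=0.02):
    """Apply n_pulses gate pulses; return updated (short, long) components."""
    short, long_term = state
    for _ in range(n_pulses):
        short = (short + st_gain) * math.exp(-dt / TAU)  # volatile doping decays
        long_term += lt_gain                             # permanent channel growth
    return short, long_term

def rest(state, t):
    """Idle for time t: the short-term part decays, the long-term part stays."""
    short, long_term = state
    return short * math.exp(-t / TAU), long_term

state = stimulate((0.0, 0.0), 20)        # training burst of 20 gate pulses
weight_after_training = sum(state)
state = rest(state, 100.0)               # long idle period
weight_after_rest = sum(state)           # only the permanent component survives
print(round(weight_after_training, 3), round(weight_after_rest, 3))
```

After the idle period the weight settles at the accumulated long-term component, which is the qualitative behaviour the researchers describe: a trained connection that persists where none existed before.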

It is also a major step towards machine learning using organic electronics. Software-based artificial neural networks are currently used in machine learning to achieve what is known as “deep learning”. Software requires that the signals are transmitted between a huge number of nodes to simulate a single synapse, which takes considerable computing power and thus consumes considerable energy.

“We have developed hardware that does the same thing, using a single electronic component”, says Jennifer Gerasimov.

“Our organic electrochemical transistor can therefore carry out the work of thousands of normal transistors with an energy consumption that approaches the energy consumed when a human brain transmits signals between two cells”, confirms Simone Fabiano.

The transistor channel has not been constructed using the most common polymer used in organic electronics, PEDOT, but instead using a polymer of a newly-developed monomer, ETE-S, produced by Roger Gabrielsson, who also works at the Laboratory of Organic Electronics and is one of the authors of the article. ETE-S has several unique properties that make it perfectly suited for this application – it forms sufficiently long polymer chains, is water-soluble while the polymer form is not, and it produces polymers with an intermediate level of doping. The polymer PETE-S is produced in its doped form with an intrinsic negative charge to balance the positive charge carriers (it is p-doped).

Here’s a link to and a citation for the paper,

An Evolvable Organic Electrochemical Transistor for Neuromorphic Applications by Jennifer Y. Gerasimov, Roger Gabrielsson, Robert Forchheimer, Eleni Stavrinidou, Daniel T. Simon, Magnus Berggren, Simone Fabiano. Advanced Science DOI: https://doi.org/10.1002/advs.201801339 First published: 04 February 2019

This paper is open access.

There’s one other image associated with this work that I want to include here,

Synaptic transistor. Sketch of the organic electrochemical transistor, formed by electropolymerization of ETE‐S in the transistor channel. The electrolyte solution is confined by a PDMS well (not shown). In this work, we define the input at the gate as the presynaptic signal and the response at the drain as the postsynaptic signal. During operation, the drain voltage is kept constant while the gate is pulsed. Synaptic weight is defined as the amplitude of the current response to a standard gate voltage characterization pulse of −0.1 V. Different memory functionalities are accessible by applying gate voltage. Courtesy: Linkoping University Researchers

An artificial synapse tuned by light, a ferromagnetic memristor, and a transparent, flexible artificial synapse

Down the memristor rabbit hole one more time.* I started out with news about two new papers and inadvertently found two more. In a bid to keep this posting to a manageable size, I’m stopping at four.

UK

In a June 19, 2019 Nanowerk Spotlight article, Dr. Neil Kemp discusses memristors and some of his latest work (Note: A link has been removed),

Memristor (or memory resistor) devices are non-volatile electronic memory devices that were first theorized by Leon Chua in the 1970s. However, it was some thirty years later that the first practical device was fabricated. This was in 2008, when a group led by Stanley Williams at HP Research Labs realized that the switching of resistance between a conducting and less conducting state in their metal-oxide thin-film devices exhibited Leon Chua’s memristor behaviour.

The high interest in memristor devices also stems from the fact that these devices emulate the memory and learning properties of biological synapses, i.e., the electrical resistance value of the device depends on the history of the current flowing through it.

There is a huge effort underway to use memristor devices in neuromorphic computing applications and it is now reasonable to imagine the development of a new generation of artificial intelligent devices with very low power consumption (non-volatile), ultra-fast performance and high-density integration.

These discoveries come at an important juncture in microelectronics, since there is increasing disparity between computational needs of Big Data, Artificial Intelligence (A.I.) and the Internet of Things (IoT), and the capabilities of existing computers. The increases in speed, efficiency and performance of computer technology cannot continue in the same manner as it has done since the 1960s.

To date, most memristor research has focussed on the electronic switching properties of the device. However, for many applications it is useful to have an additional handle (or degree of freedom) on the device to control its resistive state. For example memory and processing in the brain also involves numerous chemical and bio-chemical reactions that control the brain structure and its evolution through development.

To emulate this in a simple solid-state system composed of switches alone is not possible. In our research, we are interested in using light to mediate this essential control.

We have demonstrated that light can be used to produce short- and long-term memory, and we have shown how light can modulate a special type of learning, called spike timing dependent plasticity (STDP). STDP involves two neuronal spikes incident across a synapse at the same time. Depending on the relative timing of the spikes and their overlap across the synaptic cleft, the connection strength is either strengthened or weakened.
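The pair-based STDP rule Kemp describes can be sketched with the standard textbook exponential form. The specific light-modulated rule and the parameters in the paper will differ; the numbers below are purely illustrative:

```python
import math

# Standard pair-based STDP: dt = t_post - t_pre. Pre-before-post (dt > 0)
# strengthens the synapse (potentiation); post-before-pre weakens it
# (depression), with the effect decaying exponentially in |dt|.
def stdp_dw(dt, a_plus=0.05, a_minus=0.055, tau=20.0):
    if dt > 0:
        return a_plus * math.exp(-dt / tau)    # potentiation
    return -a_minus * math.exp(dt / tau)       # depression

print(stdp_dw(5.0) > 0, stdp_dw(-5.0) < 0)  # True True
```

In the optical memristor, light would act as an extra handle on the effective a_plus/a_minus amplitudes, which is the "additional degree of freedom" the article refers to.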

In our earlier work, we were only able to achieve small switching effects in memristors using light. In our latest work (Advanced Electronic Materials, “Percolation Threshold Enables Optical Resistive-Memory Switching and Light-Tuneable Synaptic Learning in Segregated Nanocomposites”), we take advantage of a percolating-like nanoparticle morphology to vastly increase the magnitude of the switching between electronic resistance states when light is incident on the device.

We have used an inhomogeneous percolating network consisting of metallic nanoparticles distributed in filamentary-like conduction paths. Electronic conduction and the resistance of the device is very sensitive to any disruption of the conduction path(s).

By embedding the nanoparticles in a polymer that can expand or contract with light the conduction pathways are broken or re-connected causing very large changes in the electrical resistance and memristance of the device.

Our devices could lead to the development of new memristor-based artificial intelligence systems that are adaptive and reconfigurable using a combination of optical and electronic signalling. Furthermore, they have the potential for the development of very fast optical cameras for artificial intelligence recognition systems.

Our work provides a nice proof-of-concept but the materials used means the optical switching is slow. The materials are also not well suited to industry fabrication. In our on-going work we are addressing these switching speed issues whilst also focussing on industry compatible materials.

Currently we are working on a new type of optical memristor device that should give us orders of magnitude improvement in the optical switching speeds whilst also retaining a large difference between the resistance on and off states. We hope to be able to achieve nanosecond switching speeds. The materials used are also compatible with industry standard methods of fabrication.

The new devices should also have applications in optical communications, interfacing and photonic computing. We are currently looking for commercial investors to help fund the research on these devices so that we can bring the device specifications to a level of commercial interest.

If you’re interested in memristors, Kemp’s article is well written and quite informative for nonexperts, assuming of course you can tolerate not understanding everything perfectly.

Here are links and citations for two papers. The first, a May 2019 paper, is the latest work referred to in the article; the second is a paper appearing in July 2019.

Percolation Threshold Enables Optical Resistive‐Memory Switching and Light‐Tuneable Synaptic Learning in Segregated Nanocomposites by Ayoub H. Jaafar, Mary O’Neill, Stephen M. Kelly, Emanuele Verrelli, Neil T. Kemp. Advanced Electronic Materials DOI: https://doi.org/10.1002/aelm.201900197 First published: 28 May 2019

Wavelength dependent light tunable resistive switching graphene oxide nonvolatile memory devices by Ayoub H. Jaafar, N. T. Kemp. Carbon, Available online 3 July 2019. DOI: https://doi.org/10.1016/j.carbon.2019.07.007

The first paper (May 2019) is definitely behind a paywall and the second paper (July 2019) appears to be behind a paywall.

Dr. Kemp’s work has been featured here previously in a January 3, 2018 posting in the subsection titled, Shining a light on the memristor.

China

This work from China was announced in a June 20, 2019 news item on Nanowerk,

Memristors, demonstrated by solid-state devices with continuously tunable resistance, have emerged as a new paradigm for self-adaptive networks that require synapse-like functions. Spin-based memristors offer advantages over other types of memristors because of their significant endurance and high energy efficiency.

However, it remains a challenge to build dense and functional spintronic memristors with structures and materials that are compatible with existing ferromagnetic devices. Ta/CoFeB/MgO heterostructures are commonly used in interfacial PMA-based [perpendicular magnetic anisotropy] magnetic tunnel junctions, which exhibit large tunnel magnetoresistance and are implemented in commercial MRAM [magnetic random access memory] products.

“To achieve the memristive function, DW is driven back and forth in a continuous manner in the CoFeB layer by applying in-plane positive or negative current pulses along the Ta layer, utilizing SOT that the current exerts on the CoFeB magnetization,” said Shuai Zhang, a coauthor in the paper. “Slowly propagating domain wall generates a creep in the detection area of the device, which yields a broad range of intermediate resistive states in the AHE [anomalous Hall effect] measurements. Consequently, AHE resistance is modulated in an analog manner, being controlled by the pulsed current characteristics including amplitude, duration, and repetition number.”

“For a follow-up study, we are working on more neuromorphic operations, such as spike-timing-dependent plasticity and paired pulsed facilitation,” concludes You. …
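The pulse-programming scheme Zhang describes, in which the resistance is tuned in an analog fashion by the amplitude, duration and number of current pulses, can be caricatured with a toy domain-wall model. All numbers below are illustrative, not from the paper:

```python
# Hypothetical analog model of a pulse-programmed spintronic memristor:
# each in-plane current pulse moves the domain wall by an amount
# proportional to its amplitude and duration, and the Hall resistance
# interpolates continuously between the two saturated limits.
def apply_pulses(position, amplitude, duration, n_pulses, mobility=0.01):
    """Domain-wall position in [0, 1]; the sign of amplitude sets direction."""
    for _ in range(n_pulses):
        position += mobility * amplitude * duration
        position = min(1.0, max(0.0, position))  # wall confined to the channel
    return position

def hall_resistance(position, r_low=-1.0, r_high=1.0):
    """AHE resistance varies continuously with the wall position."""
    return r_low + (r_high - r_low) * position

pos = apply_pulses(0.5, amplitude=2.0, duration=5.0, n_pulses=3)
print(round(hall_resistance(pos), 3))
```

The key point the model captures is the continuum of intermediate resistive states: reversing the pulse polarity walks the resistance back down, giving the analog, bidirectional weight update a synapse needs.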

Here are links to and citations for the paper (Note: It’s a little confusing, but I believe that one of the links will take you to the online version; as for the ‘open access’ link, keep reading),

A Spin–Orbit‐Torque Memristive Device by Shuai Zhang, Shijiang Luo, Nuo Xu, Qiming Zou, Min Song, Jijun Yun, Qiang Luo, Zhe Guo, Ruofan Li, Weicheng Tian, Xin Li, Hengan Zhou, Huiming Chen, Yue Zhang, Xiaofei Yang, Wanjun Jiang, Ka Shen, Jeongmin Hong, Zhe Yuan, Li Xi, Ke Xia, Sayeef Salahuddin, Bernard Dieny, Long You. Advanced Electronic Materials Volume 5, Issue 4 April 2019 (print version) 1800782 DOI: https://doi.org/10.1002/aelm.201800782 First published [online]: 30 January 2019 Note: there is another DOI, https://doi.org/10.1002/aelm.201970022 where you can have open access to Memristors: A Spin–Orbit‐Torque Memristive Device (Adv. Electron. Mater. 4/2019)

The paper published online in January 2019 is behind a paywall, and the paper (almost the same title) published in April 2019 has a new DOI and is open access. Final note: I tried accessing the ‘free’ paper and opened a free file for the artwork featuring the work from China on the back cover of the April 2019 issue of Advanced Electronic Materials.

Korea

Usually when I see the words transparency and flexibility, I expect to see that graphene is one of the materials. That’s not the case for this paper (link to and citation for it follow),

Transparent and flexible photonic artificial synapse with piezo-phototronic modulator: Versatile memory capability and higher order learning algorithm by Mohit Kumar, Joondong Kim, Ching-Ping Wong. Nano Energy Volume 63, September 2019, 103843 DOI: https://doi.org/10.1016/j.nanoen.2019.06.039 Available online 22 June 2019

Here’s the abstract for the paper, where you’ll see that the material is made up of zinc oxide and silver nanowires,

An artificial photonic synapse having a tunable manifold synaptic response can be an essential step forward for the advancement of novel neuromorphic computing. In this work, we reported the development of a highly transparent and flexible two-terminal ZnO/Ag-nanowires/PET photonic artificial synapse [emphasis mine]. The device shows, purely photo-triggered, all essential synaptic functions such as the transition from short- to long-term plasticity, paired-pulse facilitation, and spike-timing-dependent plasticity, as well as versatile memory capability. Importantly, the strain-induced piezo-phototronic effect within ZnO provides an additional degree of regulation to modulate all of the synaptic functions at multiple levels. The observed effect is quantitatively explained as a dynamic of photo-induced electron-hole trapping/detrapping via defect states such as oxygen vacancies. We revealed that the synaptic functions can be consolidated and converted by applied strain, which has not previously been applied in any of the reported synaptic devices. This study will open a new avenue for the scientific community to control and design highly transparent wearable neuromorphic computing.

This paper is behind a paywall.

Defending nanoelectronics from cyber attacks

There’s a new program at the University of Stuttgart (Germany) and their call for projects was recently announced. First, here’s a description of the program in a May 30, 2019 news item on Nanowerk,

Today’s societies critically depend on electronic systems. Past spectacular cyber-attacks have clearly demonstrated the vulnerability of existing systems and the need to prevent such attacks in the future. The majority of available cyber-defenses concentrate on protecting the software part of electronic systems or their communication interfaces.

However, manufacturing technology advancements and increasing hardware complexity present a large number of challenges, and so the focus of attackers has shifted towards the hardware level. We have already seen evidence of powerful and successful hardware-level attacks, including Rowhammer, Meltdown and Spectre.

These attacks happened on products built using state-of-the-art microelectronic technology. However, we are facing completely new security challenges due to the ongoing transition to radically new types of nanoelectronic devices, such as memristors, spintronics, or carbon nanotube- and graphene-based transistors.

The use of such emerging nanotechnologies is inevitable to address the key challenges related to energy efficiency, computing power and performance. Therefore, the entire industry is switching to emerging nanoelectronics alongside scaled CMOS technologies in heterogeneous integrated systems.

These technologies come with new properties and also facilitate the development of radically different computer architectures. The new technologies and architectures provide new opportunities for achieving security targets, but also raise questions about their vulnerabilities to new types of hardware attacks.

A May 28, 2019 University of Stuttgart press release provides more information about the program and the call for projects,

Whether it’s cars, industrial plants or the government network, spectacular cyber attacks over the past few months have shown how vulnerable modern electronic systems are. The aim of the new Priority Program “Nano Security”, which is coordinated by the University of Stuttgart, is to protect such systems and prevent the cyber attacks of the future. The program, which is funded by the German Research Foundation (DFG), emphasizes making the hardware into a reliable foundation of a system or a layer of security.

The challenges of nanoelectronics

Completely new challenges also emerge as a result of the switch to radically new nanoelectronic components, which are used, for example, to master the challenges of the future in terms of energy efficiency, computing power and secure data transmission. Examples include memristors (components which are not just used to store information but also function as logic modules), spintronics, which exploits quantum-mechanical effects, and carbon nanotubes.

The new technologies, as well as the fundamentally different computer architecture associated with them, offer new opportunities for cryptographic primitives in order to achieve an even more secure data transmission. However, they also raise questions about their vulnerability to new types of hardware attacks.

The problem is part of the solution

In this context, a better understanding should be developed of what consequences the new nanoelectronic technologies have for the security of circuits and systems as part of the new Priority Program. Here, the hardware is not just thought of as part of the problem but also as an important and necessary part of the solution to security problems. The starting points here for example are the hardware-based generation of cryptographic keys, the secure storage and processing of sensitive data, and the isolation of system components which is guaranteed by the hardware. Lastly, it should be ensured that an attack cannot be spread further by the system.

In this process, the scientists want to assess the possible security risks and weaknesses which stem from the new type of nanoelectronics. Furthermore, they want to develop innovative approaches for system security which are based on nanoelectronics as a security anchor.

The Priority Program promotes cooperation between scientists, who develop innovative security solutions for the computer systems of the future on different levels of abstraction. Likewise, it makes methods available to system designers to keep ahead in the race between attackers and security measures over the next few decades.

The call has started

The DFG Priority Program “Nano Security. From Nano-Electronics to Secure Systems“ (SPP 2253) is scheduled to last for a period of six years. The call for projects for the first three-year funding period was advertised a few days ago, and the first projects are set to start at the beginning of 2020.

For more information go to the Nano Security: From Nano-Electronics to Secure Systems webpage on the University of Stuttgart website.

Two approaches to memristors

Within one day of each other in October 2018, two different teams working on memristors with applications to neuroprosthetics and neuromorphic computing (brainlike computing) announced their results.

Russian team

An October 15, 2018 (?) Lobachevsky University press release (also published on October 15, 2018 on EurekAlert) describes a new approach to memristors,

Biological neurons are coupled unidirectionally through a special junction called a synapse. An electrical signal is transmitted along a neuron after some biochemical reactions initiate a chemical release to activate an adjacent neuron. These junctions are crucial for cognitive functions, such as perception, learning and memory.

A group of researchers from Lobachevsky University in Nizhny Novgorod investigates the dynamics of an individual memristive device when it receives a neuron-like signal as well as the dynamics of a network of analog electronic neurons connected by means of a memristive device. According to Svetlana Gerasimova, junior researcher at the Physics and Technology Research Institute and at the Neurotechnology Department of Lobachevsky University, this system simulates the interaction between synaptically coupled brain neurons while the memristive device imitates a neuron axon.

A memristive device is a physical model of Chua’s [Dr. Leon Chua, University of California at Berkeley; see my May 9, 2008 posting for a brief description Dr. Chua’s theory] memristor, which is an electric circuit element capable of changing its resistance depending on the electric signal received at the input. The device based on a Au/ZrO2(Y)/TiN/Ti structure demonstrates reproducible bipolar switching between the low and high resistance states. Resistive switching is determined by the oxidation and reduction of segments of conducting channels (filaments) in the oxide film when voltage with different polarity is applied to it. In the context of the present work, the ability of a memristive device to change conductivity under the action of pulsed signals makes it an almost ideal electronic analog of a synapse.

Lobachevsky University scientists and engineers supported by the Russian Science Foundation (project No.16-19-00144) have experimentally implemented and theoretically described the synaptic connection of neuron-like generators using the memristive interface and investigated the characteristics of this connection.

“Each neuron is implemented in the form of a pulse signal generator based on the FitzHugh-Nagumo model. This model provides a qualitative description of the main neurons’ characteristics: the presence of the excitation threshold, the presence of excitable and self-oscillatory regimes with the possibility of a changeover. At the initial time moment, the master generator is in the self-oscillatory mode, the slave generator is in the excitable mode, and the memristive device is used as a synapse. The signal from the master generator is conveyed to the input of the memristive device, the signal from the output of the memristive device is transmitted to the input of the slave generator via the loading resistance. When the memristive device switches from a high resistance to a low resistance state, the connection between the two neuron-like generators is established. The master generator goes into the oscillatory mode and the signals of the generators are synchronized. Different signal modulation mode synchronizations were demonstrated for the Au/ZrO2(Y)/TiN/Ti memristive device,” – says Svetlana Gerasimova.
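For readers curious about the FitzHugh-Nagumo model mentioned above, here is a minimal Euler-integrated sketch showing the two regimes Gerasimova describes: self-oscillatory at a sufficient bias current and quiescent (excitable) at a low one. The parameter values are standard textbook choices, not those of the UNN generators:

```python
# FitzHugh-Nagumo unit integrated with the Euler method (textbook form):
#   dv/dt = v - v^3/3 - w + I
#   dw/dt = eps * (v + a - b*w)
# With enough bias current I the unit self-oscillates; with low I it sits
# quietly near a stable fixed point in the excitable regime.
def fhn_spike_count(I, a=0.7, b=0.8, eps=0.08, dt=0.05, steps=20000):
    v, w, spikes, above = -1.2, -0.6, 0, False
    for _ in range(steps):
        v += dt * (v - v**3 / 3 - w + I)
        w += dt * eps * (v + a - b * w)
        if v > 1.0 and not above:   # count upward threshold crossings
            spikes += 1
        above = v > 1.0
    return spikes

print(fhn_spike_count(0.5), fhn_spike_count(0.0))  # oscillatory vs quiescent
```

In the Lobachevsky setup, the memristive device sits between two such generators: once it switches to its low-resistance state, the master's spikes reach the slave and the two synchronize.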

UNN researchers believe that the next important stage in the development of neuromorphic systems based on memristive devices is to apply such systems in neuroprosthetics. Memristive systems will provide a highly efficient imitation of synaptic connection due to the stochastic nature of the memristive phenomenon and can be used to increase the flexibility of the connections for neuroprosthetic purposes. Lobachevsky University scientists have vast experience in the development of neurohybrid systems. In particular, a series of experiments was performed with the aim of connecting the FitzHugh-Nagumo oscillator with a biological object, a rat brain hippocampal slice. The signal from the electronic neuron generator was transmitted through the optic fiber communication channel to the bipolar electrode which stimulated Schaffer collaterals (axons of pyramidal neurons in the CA3 field) in the hippocampal slices. “We are going to combine our efforts in the design of artificial neuromorphic systems and our experience of working with living cells to improve flexibility of prosthetics,” concludes S. Gerasimova.

The results of this research were presented at the 38th International Conference on Nonlinear Dynamics (Dynamics Days Europe) at Loughborough University (Great Britain).

This diagram illustrates an aspect of the work,

Caption: Schematic of electronic neurons coupling via a memristive device. Credit: Lobachevsky University

US team

The American Institute of Physics (AIP) announced the publication of a ‘memristor paper’ by a team from the University of Southern California (USC) in an October 16, 2018 news item on phys.org,

Just like their biological counterparts, hardware that mimics the neural circuitry of the brain requires building blocks that can adjust how they synapse, with some connections strengthening at the expense of others. One such approach, called memristors, uses current resistance to store this information. New work looks to overcome reliability issues in these devices by scaling memristors to the atomic level.

An October 16, 2018 AIP news release (also on EurekAlert), which originated the news item, delves further into the particulars of this particular piece of memristor research,

A group of researchers demonstrated a new type of compound synapse that can achieve synaptic weight programming and conduct vector-matrix multiplication with significant advances over the current state of the art. Publishing its work in the Journal of Applied Physics, from AIP Publishing, the group’s compound synapse is constructed with atomically thin boron nitride memristors running in parallel to ensure efficiency and accuracy.

The article appears in a special topic section of the journal devoted to “New Physics and Materials for Neuromorphic Computation,” which highlights new developments in physical and materials science research that hold promise for developing the very large-scale, integrated “neuromorphic” systems of tomorrow that will carry computation beyond the limitations of current semiconductors today.

“There’s a lot of interest in using new types of materials for memristors,” said Ivan Sanchez Esqueda, an author on the paper. “What we’re showing is that filamentary devices can work well for neuromorphic computing applications, when constructed in new clever ways.”

Current memristor technology suffers from a wide variation in how signals are stored and read across devices, both for different types of memristors as well as different runs of the same memristor. To overcome this, the researchers ran several memristors in parallel. The combined output can achieve accuracies up to five times those of conventional devices, an advantage that compounds as devices become more complex.

The choice to go to the subnanometer level, Sanchez said, was born out of an interest in keeping all of these parallel memristors energy-efficient. An array of the group’s memristors was found to be 10,000 times more energy-efficient than memristors currently available.

“It turns out if you start to increase the number of devices in parallel, you can see large benefits in accuracy while still conserving power,” Sanchez said. The team next looks to further showcase the potential of the compound synapses by demonstrating their use in completing increasingly complex tasks, such as image and pattern recognition.
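The statistical intuition behind running memristors in parallel can be sketched in a few lines. The noise model and numbers here are my own illustration, not the paper's measurements: conductances add in parallel, so independent device-to-device errors partially cancel, and the relative error shrinks roughly as one over the square root of the number of devices.

```python
import random

# Monte Carlo sketch: each memristor stores its share of a target
# conductance with Gaussian device-to-device noise; the effective synapse
# is the parallel combination (conductances simply add).
random.seed(42)

def synapse_error(target, n_parallel, sigma=0.2, trials=2000):
    total_err = 0.0
    for _ in range(trials):
        g = sum(random.gauss(target / n_parallel, sigma * target / n_parallel)
                for _ in range(n_parallel))
        total_err += abs(g - target) / target   # relative programming error
    return total_err / trials

single = synapse_error(1.0, 1)
five = synapse_error(1.0, 5)
print(round(single / five, 1))  # roughly sqrt(5), about 2.2
```

That square-root scaling is consistent with the article's observation that accuracy gains compound as more devices are run in parallel, provided the per-device energy stays low.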

Here’s an image illustrating the parallel artificial synapses,

Caption: Hardware that mimics the neural circuitry of the brain requires building blocks that can adjust how they synapse. One such approach, called memristors, uses current resistance to store this information. New work looks to overcome reliability issues in these devices by scaling memristors to the atomic level. Researchers demonstrated a new type of compound synapse that can achieve synaptic weight programming and conduct vector-matrix multiplication with significant advances over the current state of the art. They discuss their work in this week’s Journal of Applied Physics. This image shows a conceptual schematic of the 3D implementation of compound synapses constructed with boron nitride oxide (BNOx) binary memristors, and the crossbar array with compound BNOx synapses for neuromorphic computing applications. Credit: Ivan Sanchez Esqueda

Here’s a link to and a citation for the paper,

Efficient learning and crossbar operations with atomically-thin 2-D material compound synapses by Ivan Sanchez Esqueda, Huan Zhao and Han Wang. The article will appear in the Journal of Applied Physics Oct. 16, 2018 (DOI: 10.1063/1.5042468).

This paper is behind a paywall.

*Title corrected from ‘Two approaches to memristors featuring’ to ‘Two approaches to memristors’ on May 31, 2019 at 1455 hours PDT.

Bringing memristors to the masses and cutting down on energy use

One of my earliest posts featuring memristors (May 9, 2008) focused on their potential for energy savings but since then most of my postings feature research into their application in the field of neuromorphic (brainlike) computing. (For a description and abbreviated history of the memristor go to this page on my Nanotech Mysteries Wiki.)

In a sense this July 30, 2018 news item on Nanowerk is a return to the beginning,

A new way of arranging advanced computer components called memristors on a chip could enable them to be used for general computing, which could cut energy consumption by a factor of 100.

This would improve performance in low power environments such as smartphones or make for more efficient supercomputers, says a University of Michigan researcher.

“Historically, the semiconductor industry has improved performance by making devices faster. But although the processors and memories are very fast, they can’t be efficient because they have to wait for data to come in and out,” said Wei Lu, U-M professor of electrical and computer engineering and co-founder of memristor startup Crossbar Inc.

Memristors might be the answer. Named as a portmanteau of memory and resistor, they can be programmed to have different resistance states–meaning they store information as resistance levels. These circuit elements enable memory and processing in the same device, cutting out the data transfer bottleneck experienced by conventional computers in which the memory is separate from the processor.

A July 30, 2018 University of Michigan news release (also on EurekAlert), which originated the news item, expands on the theme,

… unlike ordinary bits, which are 1 or 0, memristors can have resistances that are on a continuum. Some applications, such as computing that mimics the brain (neuromorphic), take advantage of the analog nature of memristors. But for ordinary computing, trying to differentiate among small variations in the current passing through a memristor device is not precise enough for numerical calculations.

Lu and his colleagues got around this problem by digitizing the current outputs—defining current ranges as specific bit values (i.e., 0 or 1). The team was also able to map large mathematical problems into smaller blocks within the array, improving the efficiency and flexibility of the system.

Computers with these new blocks, which the researchers call “memory-processing units,” could be particularly useful for implementing machine learning and artificial intelligence algorithms. They are also well suited to tasks that are based on matrix operations, such as simulations used for weather prediction. The simplest mathematical matrices, akin to tables with rows and columns of numbers, can map directly onto the grid of memristors.

The memristor array situated on a circuit board.

The memristor array situated on a circuit board. Credit: Mohammed Zidan, Nanoelectronics group, University of Michigan.

Once the memristors are set to represent the numbers, operations that multiply and sum the rows and columns can be taken care of simultaneously, with a set of voltage pulses along the rows. The current measured at the end of each column contains the answers. A typical processor, in contrast, would have to read the value from each cell of the matrix, perform multiplication, and then sum up each column in series.

“We get the multiplication and addition in one step. It’s taken care of through physical laws. We don’t need to manually multiply and sum in a processor,” Lu said.
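To make the "multiplication and addition in one step" idea concrete, here's a minimal NumPy sketch of a crossbar computing a matrix-vector product through physical laws: Ohm's law gives each device's current, and Kirchhoff's current law sums them down each column. The conductance values, voltages, and the simple mean-based threshold are invented for illustration, not taken from Lu's paper.

```python
import numpy as np

# Each memristor's conductance G[i, j] (in siemens) encodes one matrix entry.
# Applying a voltage vector V along the rows yields column currents
# I = G.T @ V -- the whole matrix-vector product in a single step.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 4))   # hypothetical conductance range
V = np.array([0.1, 0.2, 0.0, 0.3])         # input voltages on the rows

I = G.T @ V   # currents read out at the bottom of each column

# Digitizing the analog outputs, in the spirit of Lu's approach: map current
# ranges to discrete bit values instead of trusting tiny analog differences.
threshold = I.mean()
bits = (I > threshold).astype(int)
print(I, bits)
```

In hardware, the `G.T @ V` line is what happens for free once the voltages are applied; the digitization step is what turns the noisy analog readout into values precise enough for ordinary numerical computing.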

His team chose to solve partial differential equations as a test for a 32×32 memristor array—which Lu imagines as just one block of a future system. These equations, including those behind weather forecasting, underpin many problems in science and engineering but are very challenging to solve. The difficulty comes from the complicated forms and multiple variables needed to model physical phenomena.

When solving partial differential equations exactly is impossible, solving them approximately can require supercomputers. These problems often involve very large matrices of data, so the memory-processor communication bottleneck is neatly solved with a memristor array. The equations Lu’s team used in their demonstration simulated a plasma reactor, such as those used for integrated circuit fabrication.
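Here's a toy version of that idea, assuming nothing about the team's actual method beyond "map the matrix onto the array": a one-dimensional Poisson equation solved by Jacobi iteration, where the matrix-vector product inside each iteration is exactly the operation a 32×32 crossbar would perform in one step (NumPy stands in for the hardware).

```python
import numpy as np

# Solve -u'' = f on (0, 1) with u(0) = u(1) = 0 by Jacobi iteration.
n = 32                                # grid size, matching the 32x32 array
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
f = np.ones(n)                        # constant source term

# Off-diagonal part of the standard finite-difference Laplacian; this is
# the matrix that would be programmed into the memristor crossbar.
B = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)

u = np.zeros(n)
for _ in range(5000):
    u = (h**2 * f + B @ u) / 2.0      # the crossbar would compute B @ u

exact = 0.5 * x * (1 - x)             # analytic solution for f = 1
print(np.max(np.abs(u - exact)))
```

The point of the demonstration is that the expensive part of each iteration, the matrix-vector product, collapses into a single analog read, so the memory-processor bottleneck disappears.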

This work is described in a study, “A general memristor-based partial differential equation solver,” published in the journal Nature Electronics.

It was supported by the Defense Advanced Research Projects Agency (DARPA) (grant no. HR0011-17-2-0018) and by the National Science Foundation (NSF) (grant no. CCF-1617315).

Here’s a link and a citation for the paper,

A general memristor-based partial differential equation solver by Mohammed A. Zidan, YeonJoo Jeong, Jihang Lee, Bing Chen, Shuo Huang, Mark J. Kushner & Wei D. Lu. Nature Electronics volume 1, pages 411–420 (2018) DOI: https://doi.org/10.1038/s41928-018-0100-6 Published: 13 July 2018

This paper is behind a paywall.

For the curious, Dr. Lu’s startup company, Crossbar, can be found here.

Brainy and brainy: a novel synaptic architecture and a neuromorphic computing platform called SpiNNaker

I have two items about brainlike computing. The first item hearkens back to memristors, a topic I have been following since 2008. (If you’re curious about the various twists and turns, just enter the term ‘memristor’ in this blog’s search engine.) The latest on memristors is from a team that includes IBM (US), École Polytechnique Fédérale de Lausanne (EPFL; Switzerland), and the New Jersey Institute of Technology (NJIT; US). The second bit comes from a Jülich Research Centre team in Germany and concerns an approach to brain-like computing that does not include memristors.

Multi-memristive synapses

In the inexorable march to make computers function more like human brains (neuromorphic engineering/computing), an international team has announced its latest results in a July 10, 2018 news item on Nanowerk,

Two New Jersey Institute of Technology (NJIT) researchers, working with collaborators from the IBM Research Zurich Laboratory and the École Polytechnique Fédérale de Lausanne, have demonstrated a novel synaptic architecture that could lead to a new class of information processing systems inspired by the brain.

The findings are an important step toward building more energy-efficient computing systems that also are capable of learning and adaptation in the real world. …

A July 10, 2018 NJIT news release (also on EurekAlert) by Tracey Regan, which originated the news item, adds more details,

The researchers, Bipin Rajendran, an associate professor of electrical and computer engineering, and S. R. Nandakumar, a graduate student in electrical engineering, have been developing brain-inspired computing systems that could be used for a wide range of big data applications.

Over the past few years, deep learning algorithms have proven to be highly successful in solving complex cognitive tasks such as controlling self-driving cars and language understanding. At the heart of these algorithms are artificial neural networks – mathematical models of the neurons and synapses of the brain – that are fed huge amounts of data so that the synaptic strengths are autonomously adjusted to learn the intrinsic features and hidden correlations in these data streams.

However, the implementation of these brain-inspired algorithms on conventional computers is highly inefficient, consuming huge amounts of power and time. This has prompted engineers to search for new materials and devices to build special-purpose computers that can incorporate the algorithms. Nanoscale memristive devices, electrical components whose conductivity depends approximately on prior signaling activity, can be used to represent the synaptic strength between the neurons in artificial neural networks.

While memristive devices could potentially lead to faster and more power-efficient computing systems, they are also plagued by several reliability issues that are common to nanoscale devices. Their efficiency stems from their ability to be programmed in an analog manner to store multiple bits of information; however, their electrical conductivities vary in a non-deterministic and non-linear fashion.

In the experiment, the team showed how multiple nanoscale memristive devices exhibiting these characteristics could nonetheless be configured to efficiently implement artificial intelligence algorithms such as deep learning. Prototype chips from IBM containing more than one million nanoscale phase-change memristive devices were used to implement a neural network for the detection of hidden patterns and correlations in time-varying signals.

“In this work, we proposed and experimentally demonstrated a scheme to obtain high learning efficiencies with nanoscale memristive devices for implementing learning algorithms,” Nandakumar says. “The central idea in our demonstration was to use several memristive devices in parallel to represent the strength of a synapse of a neural network, but only choose one of them to be updated at each step based on the neuronal activity.”
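Here's a quick sketch of what that central idea looks like in code. To be clear, this is my own toy rendering of the scheme Nandakumar describes, not the team's implementation: several noisy devices jointly represent one synaptic weight, but only one device (chosen round-robin here; the paper's arbitration scheme may differ) is updated per step. The device noise and granularity values are invented.

```python
import numpy as np

class MultiMemristiveSynapse:
    """Several imperfect devices represent one synaptic weight; each
    update touches only a single device, averaging out device-level
    non-determinism across the group."""

    def __init__(self, n_devices=4, rng=None):
        self.g = np.zeros(n_devices)          # individual device conductances
        self.counter = 0                      # round-robin arbitration
        self.rng = rng or np.random.default_rng(0)

    @property
    def weight(self):
        return self.g.sum()                   # effective synaptic weight

    def update(self, delta):
        i = self.counter % len(self.g)
        # Granular, noisy conductance change, as in real nanoscale devices.
        self.g[i] = np.clip(self.g[i] + delta + self.rng.normal(0, 0.01), 0, 1)
        self.counter += 1

syn = MultiMemristiveSynapse()
for _ in range(8):
    syn.update(+0.1)                          # repeated potentiation
print(syn.weight)
```

Because each potentiation lands on a different device, the summed weight grows smoothly even though any single device's response is noisy and coarse, which is the efficiency trick the quote is getting at.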

Here’s a link to and a citation for the paper,

Neuromorphic computing with multi-memristive synapses by Irem Boybat, Manuel Le Gallo, S. R. Nandakumar, Timoleon Moraitis, Thomas Parnell, Tomas Tuma, Bipin Rajendran, Yusuf Leblebici, Abu Sebastian, & Evangelos Eleftheriou. Nature Communications volume 9, Article number: 2514 (2018) DOI: https://doi.org/10.1038/s41467-018-04933-y Published 28 June 2018

This is an open access paper.

They’ve also got a couple of very nice introductory paragraphs, which I’m including here (from the June 28, 2018 paper in Nature Communications; Note: Links have been removed),

The human brain with less than 20 W of power consumption offers a processing capability that exceeds the petaflops mark, and thus outperforms state-of-the-art supercomputers by several orders of magnitude in terms of energy efficiency and volume. Building ultra-low-power cognitive computing systems inspired by the operating principles of the brain is a promising avenue towards achieving such efficiency. Recently, deep learning has revolutionized the field of machine learning by providing human-like performance in areas, such as computer vision, speech recognition, and complex strategic games1. However, current hardware implementations of deep neural networks are still far from competing with biological neural systems in terms of real-time information-processing capabilities with comparable energy consumption.

One of the reasons for this inefficiency is that most neural networks are implemented on computing systems based on the conventional von Neumann architecture with separate memory and processing units. There are a few attempts to build custom neuromorphic hardware that is optimized to implement neural algorithms2,3,4,5. However, as these custom systems are typically based on conventional silicon complementary metal oxide semiconductor (CMOS) circuitry, the area efficiency of such hardware implementations will remain relatively low, especially if in situ learning and non-volatile synaptic behavior have to be incorporated. Recently, a new class of nanoscale devices has shown promise for realizing the synaptic dynamics in a compact and power-efficient manner. These memristive devices store information in their resistance/conductance states and exhibit conductivity modulation based on the programming history6,7,8,9. The central idea in building cognitive hardware based on memristive devices is to store the synaptic weights as their conductance states and to perform the associated computational tasks in place.

The two essential synaptic attributes that need to be emulated by memristive devices are the synaptic efficacy and plasticity. …

It gets more complicated from there.

Now onto the next bit.

SpiNNaker

At a guess, those capitalized N’s are meant to indicate ‘neural networks’. As best I can determine, SpiNNaker is not based on the memristor. Moving on, a July 11, 2018 news item on phys.org announces work from a team examining how neuromorphic hardware and neuromorphic software work together,

A computer built to mimic the brain’s neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. Tested for accuracy, speed and energy efficiency, this custom-built computer named SpiNNaker, has the potential to overcome the speed and power consumption problems of conventional supercomputers. The aim is to advance our knowledge of neural processing in the brain, to include learning and disorders such as epilepsy and Alzheimer’s disease.

A July 11, 2018 Frontiers Publishing news release on EurekAlert, which originated the news item, expands on the latest work,

“SpiNNaker can support detailed biological models of the cortex–the outer layer of the brain that receives and processes information from the senses–delivering results very similar to those from an equivalent supercomputer software simulation,” says Dr. Sacha van Albada, lead author of this study and leader of the Theoretical Neuroanatomy group at the Jülich Research Centre, Germany. “The ability to run large-scale detailed neural networks quickly and at low power consumption will advance robotics research and facilitate studies on learning and brain disorders.”

The human brain is extremely complex, comprising 100 billion interconnected brain cells. We understand how individual neurons and their components behave and communicate with each other and on the larger scale, which areas of the brain are used for sensory perception, action and cognition. However, we know less about the translation of neural activity into behavior, such as turning thought into muscle movement.

Supercomputer software has helped by simulating the exchange of signals between neurons, but even the best software run on the fastest supercomputers to date can only simulate 1% of the human brain.

“It is presently unclear which computer architecture is best suited to study whole-brain networks efficiently. The European Human Brain Project and Jülich Research Centre have performed extensive research to identify the best strategy for this highly complex problem. Today’s supercomputers require several minutes to simulate one second of real time, so studies on processes like learning, which take hours and days in real time are currently out of reach.” explains Professor Markus Diesmann, co-author, head of the Computational and Systems Neuroscience department at the Jülich Research Centre.

He continues, “There is a huge gap between the energy consumption of the brain and today’s supercomputers. Neuromorphic (brain-inspired) computing allows us to investigate how close we can get to the energy efficiency of the brain using electronics.”

Developed over the past 15 years and based on the structure and function of the human brain, SpiNNaker — part of the Neuromorphic Computing Platform of the Human Brain Project — is a custom-built computer composed of half a million simple computing elements controlled by its own software. The researchers compared the accuracy, speed and energy efficiency of SpiNNaker with that of NEST–a specialist supercomputer software currently in use for brain neuron-signaling research.

“The simulations run on NEST and SpiNNaker showed very similar results,” reports Steve Furber, co-author and Professor of Computer Engineering at the University of Manchester, UK. “This is the first time such a detailed simulation of the cortex has been run on SpiNNaker, or on any neuromorphic platform. SpiNNaker comprises 600 circuit boards incorporating over 500,000 small processors in total. The simulation described in this study used just six boards–1% of the total capability of the machine. The findings from our research will improve the software to reduce this to a single board.”

Van Albada shares her future aspirations for SpiNNaker, “We hope for increasingly large real-time simulations with these neuromorphic computing systems. In the Human Brain Project, we already work with neuroroboticists who hope to use them for robotic control.”

Before getting to the link and citation for the paper, here’s a description of SpiNNaker’s hardware from the ‘Spiking neural network’ Wikipedia entry, Note: Links have been removed,

Neurogrid, built at Stanford University, is a board that can simulate spiking neural networks directly in hardware. SpiNNaker (Spiking Neural Network Architecture) [emphasis mine], designed at the University of Manchester, uses ARM processors as the building blocks of a massively parallel computing platform based on a six-layer thalamocortical model.[5]

Now for the link and citation,

Performance Comparison of the Digital Neuromorphic Hardware SpiNNaker and the Neural Network Simulation Software NEST for a Full-Scale Cortical Microcircuit Model by Sacha J. van Albada, Andrew G. Rowley, Johanna Senk, Michael Hopkins, Maximilian Schmidt, Alan B. Stokes, David R. Lester, Markus Diesmann, and Steve B. Furber. Front. Neurosci. 12:291. doi: 10.3389/fnins.2018.00291 Published: 23 May 2018

As noted earlier, this is an open access paper.

New path to viable memristor/neuristor?

I first stumbled onto memristors and the possibility of brain-like computing sometime in 2008 (around the time that R. Stanley Williams and his team at HP Labs first published the results of their research linking Dr. Leon Chua’s memristor theory to their attempts to shrink computer chips). In the almost 10 years since, scientists have worked hard to utilize memristors in the field of neuromorphic (brain-like) engineering/computing.

A January 22, 2018 news item on phys.org describes the latest work,

When it comes to processing power, the human brain just can’t be beat.

Packed within the squishy, football-sized organ are somewhere around 100 billion neurons. At any given moment, a single neuron can relay instructions to thousands of other neurons via synapses—the spaces between neurons, across which neurotransmitters are exchanged. There are more than 100 trillion synapses that mediate neuron signaling in the brain, strengthening some connections while pruning others, in a process that enables the brain to recognize patterns, remember facts, and carry out other learning tasks, at lightning speeds.

Researchers in the emerging field of “neuromorphic computing” have attempted to design computer chips that work like the human brain. Instead of carrying out computations based on binary, on/off signaling, like digital chips do today, the elements of a “brain on a chip” would work in an analog fashion, exchanging a gradient of signals, or “weights,” much like neurons that activate in various ways depending on the type and number of ions that flow across a synapse.

In this way, small neuromorphic chips could, like the brain, efficiently process millions of streams of parallel computations that are currently only possible with large banks of supercomputers. But one significant hangup on the way to such portable artificial intelligence has been the neural synapse, which has been particularly tricky to reproduce in hardware.

Now engineers at MIT [Massachusetts Institute of Technology] have designed an artificial synapse in such a way that they can precisely control the strength of an electric current flowing across it, similar to the way ions flow between neurons. The team has built a small chip with artificial synapses, made from silicon germanium. In simulations, the researchers found that the chip and its synapses could be used to recognize samples of handwriting, with 95 percent accuracy.

A January 22, 2018 MIT news release by Jennifer Chua (also on EurekAlert), which originated the news item, provides more detail about the research,

The design, published today [January 22, 2018] in the journal Nature Materials, is a major step toward building portable, low-power neuromorphic chips for use in pattern recognition and other learning tasks.

The research was led by Jeehwan Kim, the Class of 1947 Career Development Assistant Professor in the departments of Mechanical Engineering and Materials Science and Engineering, and a principal investigator in MIT’s Research Laboratory of Electronics and Microsystems Technology Laboratories. His co-authors are Shinhyun Choi (first author), Scott Tan (co-first author), Zefan Li, Yunjo Kim, Chanyeol Choi, and Hanwool Yeon of MIT, along with Pai-Yu Chen and Shimeng Yu of Arizona State University.

Too many paths

Most neuromorphic chip designs attempt to emulate the synaptic connection between neurons using two conductive layers separated by a “switching medium,” or synapse-like space. When a voltage is applied, ions should move in the switching medium to create conductive filaments, similarly to how the “weight” of a synapse changes.

But it’s been difficult to control the flow of ions in existing designs. Kim says that’s because most switching mediums, made of amorphous materials, have unlimited possible paths through which ions can travel — a bit like Pachinko, a mechanical arcade game that funnels small steel balls down through a series of pins and levers, which act to either divert or direct the balls out of the machine.

Like Pachinko, existing switching mediums contain multiple paths that make it difficult to predict where ions will make it through. Kim says that can create unwanted nonuniformity in a synapse’s performance.

“Once you apply some voltage to represent some data with your artificial neuron, you have to erase and be able to write it again in the exact same way,” Kim says. “But in an amorphous solid, when you write again, the ions go in different directions because there are lots of defects. This stream is changing, and it’s hard to control. That’s the biggest problem — nonuniformity of the artificial synapse.”

A perfect mismatch

Instead of using amorphous materials as an artificial synapse, Kim and his colleagues looked to single-crystalline silicon, a defect-free conducting material made from atoms arranged in a continuously ordered alignment. The team sought to create a precise, one-dimensional line defect, or dislocation, through the silicon, through which ions could predictably flow.

To do so, the researchers started with a wafer of silicon, resembling, at microscopic resolution, a chicken-wire pattern. They then grew a similar pattern of silicon germanium — a material also used commonly in transistors — on top of the silicon wafer. Silicon germanium’s lattice is slightly larger than that of silicon, and Kim found that together, the two perfectly mismatched materials can form a funnel-like dislocation, creating a single path through which ions can flow.

The researchers fabricated a neuromorphic chip consisting of artificial synapses made from silicon germanium, each synapse measuring about 25 nanometers across. They applied voltage to each synapse and found that all synapses exhibited more or less the same current, or flow of ions, with about a 4 percent variation between synapses — a much more uniform performance compared with synapses made from amorphous material.

They also tested a single synapse over multiple trials, applying the same voltage over 700 cycles, and found the synapse exhibited the same current, with just 1 percent variation from cycle to cycle.

“This is the most uniform device we could achieve, which is the key to demonstrating artificial neural networks,” Kim says.

Writing, recognized

As a final test, Kim’s team explored how its device would perform if it were to carry out actual learning tasks — specifically, recognizing samples of handwriting, which researchers consider to be a first practical test for neuromorphic chips. Such chips would consist of “input/hidden/output neurons,” each connected to other “neurons” via filament-based artificial synapses.

Scientists believe such stacks of neural nets can be made to “learn.” For instance, when fed an input that is a handwritten ‘1,’ with an output that labels it as ‘1,’ certain output neurons will be activated by input neurons and weights from an artificial synapse. When more examples of handwritten ‘1s’ are fed into the same chip, the same output neurons may be activated when they sense similar features between different samples of the same letter, thus “learning” in a fashion similar to what the brain does.

Kim and his colleagues ran a computer simulation of an artificial neural network consisting of three sheets of neural layers connected via two layers of artificial synapses, the properties of which they based on measurements from their actual neuromorphic chip. They fed into their simulation tens of thousands of samples from a handwritten recognition dataset commonly used by neuromorphic designers, and found that their neural network hardware recognized handwritten samples 95 percent of the time, compared to the 97 percent accuracy of existing software algorithms.
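A back-of-the-envelope way to see why that 4 percent device-to-device figure matters: perturb a classifier's weights by 4 percent and check how often its decisions change. The sketch below uses a synthetic linear classifier as a stand-in; the data, weights, and classifier are invented, and only the 4 percent variation figure comes from the article.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 20))          # synthetic inputs
w = rng.normal(size=20)                  # stand-in for trained weights
labels = (X @ w > 0)                     # decisions of the ideal network

# Apply ~4% multiplicative device-to-device variation to the weights,
# mimicking synapses whose programmed conductances are slightly off.
noisy_w = w * (1 + 0.04 * rng.normal(size=w.shape))
agreement = np.mean((X @ noisy_w > 0) == labels)
print(agreement)
```

With only a few percent of weight variation, the overwhelming majority of decisions survive, which is consistent with the simulated chip losing only a couple of accuracy points relative to ideal software weights.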

The team is in the process of fabricating a working neuromorphic chip that can carry out handwriting-recognition tasks, not in simulation but in reality. Looking beyond handwriting, Kim says the team’s artificial synapse design will enable much smaller, portable neural network devices that can perform complex computations that currently are only possible with large supercomputers.

“Ultimately we want a chip as big as a fingernail to replace one big supercomputer,” Kim says. “This opens a stepping stone to produce real artificial hardware.”

This research was supported in part by the National Science Foundation.

Here’s a link to and a citation for the paper,

SiGe epitaxial memory for neuromorphic computing with reproducible high performance based on engineered dislocations by Shinhyun Choi, Scott H. Tan, Zefan Li, Yunjo Kim, Chanyeol Choi, Pai-Yu Chen, Hanwool Yeon, Shimeng Yu, & Jeehwan Kim. Nature Materials (2018) doi:10.1038/s41563-017-0001-5 Published online: 22 January 2018

This paper is behind a paywall.

For the curious I have included a number of links to recent ‘memristor’ postings here,

January 22, 2018: Memristors at Masdar

January 3, 2018: Mott memristor

August 24, 2017: Neuristors and brainlike computing

June 28, 2017: Dr. Wei Lu and bio-inspired ‘memristor’ chips

May 2, 2017: Predicting how a memristor functions

December 30, 2016: Changing synaptic connectivity with a memristor

December 5, 2016: The memristor as computing device

November 1, 2016: The memristor as the ‘missing link’ in bioelectronic medicine?

You can find more by using ‘memristor’ as the search term in the blog search function or on the search engine of your choice.

Thanks for the memory: the US National Institute of Standards and Technology (NIST) and memristors

In January 2018 it seemed like I was tripping across a lot of memristor stories. This came from a January 19, 2018 news item on Nanowerk,

In the race to build a computer that mimics the massive computational power of the human brain, researchers are increasingly turning to memristors, which can vary their electrical resistance based on the memory of past activity. Scientists at the National Institute of Standards and Technology (NIST) have now unveiled the long-mysterious inner workings of these semiconductor elements, which can act like the short-term memory of nerve cells.

A January 18, 2018 NIST news release (also on EurekAlert), which originated the news item, fills in the details,

Just as the ability of one nerve cell to signal another depends on how often the cells have communicated in the recent past, the resistance of a memristor depends on the amount of current that recently flowed through it. Moreover, a memristor retains that memory even when electrical power is switched off.
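That behaviour, resistance set by the history of current flow and retained when the power is off, is captured by the classic linear ion-drift model that HP Labs published for titanium dioxide memristors. Here's a minimal simulation of that model; the parameter values are illustrative textbook-style numbers, not fitted to the NIST devices.

```python
import numpy as np

R_on, R_off = 100.0, 16e3     # fully-doped / undoped resistances (ohms)
D = 10e-9                     # device thickness (m)
mu = 1e-14                    # ion mobility (m^2 s^-1 V^-1)

w = 0.1 * D                   # state: width of the doped region
dt = 1e-3                     # time step (s)
history = []
for t in range(2000):
    v = 1.0 if t < 1000 else 0.0          # drive for 1 s, then power off
    R = R_on * (w / D) + R_off * (1 - w / D)
    i = v / R
    # Linear drift: the doped boundary moves with the current through it.
    w = np.clip(w + mu * (R_on / D) * i * dt, 0, D)
    history.append(R)

# Resistance falls while current flows, then holds its final value with
# the power off -- the non-volatile "memory" of a memristor.
print(history[0], history[999], history[-1])
```

The second half of the trace is the part that matters for memory applications: once the drive is removed, the state simply stops evolving instead of resetting.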

But despite the keen interest in memristors, scientists have lacked a detailed understanding of how these devices work and have yet to develop a standard toolset to study them.

Now, NIST scientists have identified such a toolset and used it to more deeply probe how memristors operate. Their findings could lead to more efficient operation of the devices and suggest ways to minimize the leakage of current.

Brian Hoskins of NIST and the University of California, Santa Barbara, along with NIST scientists Nikolai Zhitenev, Andrei Kolmakov, Jabez McClelland and their colleagues from the University of Maryland’s NanoCenter in College Park and the Institute for Research and Development in Microtechnologies in Bucharest, reported the findings in a recent issue of Nature Communications.

To explore the electrical function of memristors, the team aimed a tightly focused beam of electrons at different locations on a titanium dioxide memristor. The beam knocked free some of the device’s electrons, which formed ultrasharp images of those locations. The beam also induced four distinct currents to flow within the device. The team determined that the currents are associated with the multiple interfaces between materials in the memristor, which consists of two metal (conducting) layers separated by an insulator.

“We know exactly where each of the currents are coming from because we are controlling the location of the beam that is inducing those currents,” said Hoskins.

In imaging the device, the team found several dark spots—regions of enhanced conductivity—which indicated places where current might leak out of the memristor during its normal operation. These leakage pathways resided outside the memristor’s core—where it switches between the low and high resistance levels that are useful in an electronic device. The finding suggests that reducing the size of a memristor could minimize or even eliminate some of the unwanted current pathways. Although researchers had suspected that might be the case, they had lacked experimental guidance about just how much to reduce the size of the device.

Because the leakage pathways are tiny, involving distances of only 100 to 300 nanometers, “you’re probably not going to start seeing some really big improvements until you reduce dimensions of the memristor on that scale,” Hoskins said.

To their surprise, the team also found that the current that correlated with the memristor’s switch in resistance didn’t come from the active switching material at all, but the metal layer above it. The most important lesson of the memristor study, Hoskins noted, “is that you can’t just worry about the resistive switch, the switching spot itself, you have to worry about everything around it.” The team’s study, he added, “is a way of generating much stronger intuition about what might be a good way to engineer memristors.”

Here’s a link to and a citation for the paper,

Stateful characterization of resistive switching TiO2 with electron beam induced currents by Brian D. Hoskins, Gina C. Adam, Evgheni Strelcov, Nikolai Zhitenev, Andrei Kolmakov, Dmitri B. Strukov, & Jabez J. McClelland. Nature Communications 8, Article number: 1972 (2017) doi:10.1038/s41467-017-02116-9 Published online: 07 December 2017

This is an open access paper.

It might be my imagination but it seemed like a lot of papers from 2017 were being publicized in early 2018.

Finally, I borrowed much of my headline from the NIST’s headline for its news release, specifically, “Thanks for the memory,” which is a rather old song,

Bob Hope and Shirley Ross in “The Big Broadcast of 1938.”

New breed of memristors?

This new ‘breed’ of memristor (a component in brain-like/neuromorphic computing) is a kind of thin film. First, here’s an explanation of neuromorphic computing from the Finnish researchers looking into a new kind of memristor, from a January 10, 2018 news item on Nanowerk,

The internet of things [IoT] is coming, that much we know. But it won’t arrive until we have components and chips that can handle the explosion of data that comes with IoT. In 2020, there will already be 50 billion industrial internet sensors in place all around us. A single autonomous device – a smart watch, a cleaning robot, or a driverless car – can produce gigabytes of data each day, whereas an Airbus may have over 10,000 sensors in one wing alone.

Two hurdles need to be overcome. First, current transistors in computer chips must be miniaturized to the size of only a few nanometres; the problem is that they won’t work anymore at that scale. Second, analysing and storing unprecedented amounts of data will require equally huge amounts of energy. Sayani Majumdar, Academy Fellow at Aalto University, along with her colleagues, is designing technology to tackle both issues.

Majumdar and her colleagues have designed and fabricated the basic building blocks of future components for what are called “neuromorphic” computers, inspired by the human brain. It’s a field of research in which the largest ICT companies in the world, and also the EU, are investing heavily. Still, no one has yet come up with a nano-scale hardware architecture that could be scaled up for industrial manufacture and use.

An Aalto University January 10, 2018 press release, which originated the news item, provides more detail about the work,

“The technology and design of neuromorphic computing is advancing more rapidly than its rival revolution, quantum computing. There is already wide speculation both in academia and company R&D about ways to inscribe heavy computing capabilities in the hardware of smart phones, tablets and laptops. The key is to achieve the extreme energy-efficiency of a biological brain and mimic the way neural networks process information through electric impulses,” explains Majumdar.

Basic components for computers that work like the brain

In their recent article in Advanced Functional Materials, Majumdar and her team show how they have fabricated a new breed of “ferroelectric tunnel junctions”, that is, few-nanometre-thick ferroelectric thin films sandwiched between two electrodes. They have abilities beyond existing technologies and bode well for energy-efficient and stable neuromorphic computing.

The junctions work at low voltages of less than five volts and with a variety of electrode materials – including the silicon used in chips in most of our electronics. They can also retain data for more than 10 years without power and be manufactured under normal conditions.

Tunnel junctions have up to this point mostly been made of metal oxides and require temperatures of 700 degrees Celsius and high vacuum to manufacture. Ferroelectric materials also contain lead, which makes them – and all our computers – a serious environmental hazard.

“Our junctions are made out of organic hydrocarbon materials and they would reduce the amount of toxic heavy metal waste in electronics. We can also make thousands of junctions a day at room temperature without them suffering from the water or oxygen in the air,” explains Majumdar.

What makes ferroelectric thin film components great for neuromorphic computers is their ability to switch between not only binary states – 0 and 1 – but a large number of intermediate states as well. This allows them to ‘memorise’ information not unlike the brain: to store it for a long time with minute amounts of energy and to retain the information they have once received – even after being switched off and on again.
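As an illustration of that multi-state behaviour, here is a minimal Python sketch of a memory cell that can hold many intermediate levels between 0 and 1 and keeps its state until it is reprogrammed. This is a toy model, not the Aalto device: the class name, the number of levels, and the programming interface are all assumptions made for the example.

```python
# Toy illustration of a multi-level, non-volatile memory cell:
# many addressable states between 0 and 1, not just binary values.

class MultiLevelCell:
    """Hypothetical cell with N addressable conductance levels."""

    def __init__(self, levels=16):
        self.levels = levels
        self.state = 0  # index of the currently stored level

    def program(self, level):
        # Writing selects one of the intermediate states; in a
        # ferroelectric junction this would be done with voltage pulses.
        if not 0 <= level < self.levels:
            raise ValueError("level out of range")
        self.state = level

    def read(self):
        # The stored value persists with no refresh (non-volatile),
        # analogous to retaining data without power.
        return self.state / (self.levels - 1)

cell = MultiLevelCell(levels=16)
cell.program(5)
print(cell.read())  # ≈ 0.333
```

In a neuromorphic network, such intermediate levels are what let a single device store an analogue synaptic weight rather than a single bit.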

We are no longer talking about transistors, but ‘memristors’. They are ideal for computation similar to that in biological brains. Take for example the Mars 2020 Rover, about to go chart the composition of another planet. For the Rover to work and process data on its own using only a single solar panel as an energy source, the unsupervised algorithms in it will need to use an artificial brain in the hardware.

“What we are striving for now, is to integrate millions of our tunnel junction memristors into a network on a one square centimetre area. We can expect to pack so many in such a small space because we have now achieved a record-high difference in the current between on and off-states in the junctions and that provides functional stability. The memristors could then perform complex tasks like image and pattern recognition and make decisions autonomously,” says Majumdar.

The probe-station device (the full instrument, left, and a closer view of the device connection, right) which measures the electrical responses of the basic components for computers mimicking the human brain. The tunnel junctions are on a thin film on the substrate plate. Photo: Tapio Reinekoski

Here’s a link to and a citation for the paper,

Electrode Dependence of Tunneling Electroresistance and Switching Stability in Organic Ferroelectric P(VDF-TrFE)-Based Tunnel Junctions by Sayani Majumdar, Binbin Chen, Qi Hang Qin, Himadri S. Majumdar, and Sebastiaan van Dijken. Advanced Functional Materials Vol. 28 Issue 2 DOI: 10.1002/adfm.201703273 Version of Record online: 27 NOV 2017

© 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Memristors at Masdar

The Masdar Institute of Science and Technology (Abu Dhabi, United Arab Emirates; Masdar Institute Wikipedia entry) featured its work with memristors in an Oct. 1, 2017 Masdar Institute press release by Erica Solomon (for anyone who’s interested, I have a simple description of memristors and links to more posts about them after the press release),

Researchers Develop New Memristor Prototype Capable of Performing Complex Operations at High-Speed and Low Power, Could Lead to Advancements in Internet of Things, Portable Healthcare Sensing and other Embedded Technologies

Computer circuits in development at the Khalifa University of Science and Technology could make future computers much more compact, efficient and powerful thanks to advancements being made in memory technologies that combine processing and memory storage functions into one densely packed “memristor.”

Enabling faster, smaller and ultra-low-power computers with memristors could have a big impact on embedded technologies, which enable Internet of Things (IoT), artificial intelligence, and portable healthcare sensing systems, says Dr. Baker Mohammad, Associate Professor of Electrical and Computer Engineering. Dr. Mohammad co-authored a book on memristor technologies, which has just been released by Springer, a leading global scientific publisher of books and journals, with Class of 2017 PhD graduate Heba Abunahla. The book, titled Memristor Technology: Synthesis and Modeling for Sensing and Security Applications, provides readers with a single-source guide to fabricate, characterize and model memristor devices for sensing applications.

The pair also contributed to a paper on memristor research that was published in IEEE Transactions on Circuits and Systems I: Regular Papers earlier this month with Class of 2017 MSc graduate Muath Abu Lebdeh and Dr. Mahmoud Al-Qutayri, Professor of Electrical and Computer Engineering. PhD student Yasmin Halawani is also an active member of Dr. Mohammad’s research team.

Conventional computers rely on energy- and time-consuming processes to move information back and forth between the computer’s central processing unit (CPU) and the memory, which are separately located. A memristor, which is an electrical resistor that remembers how much current has flowed through it, can bridge the gap between computation and storage. Instead of fetching data from the memory and sending it to the CPU to be processed, memristors have the potential to store and process data simultaneously.
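One way to picture “storing and processing at the same time” is the memristor crossbar: each device’s conductance stores a matrix weight, and Ohm’s law plus Kirchhoff’s current law compute a matrix-vector product in place, without moving the matrix to a CPU. The sketch below illustrates that general principle only; it is not the Khalifa University design, and the function name and example values are assumptions.

```python
# Illustrative sketch: a memristor crossbar computes a matrix-vector
# product in place. Each memristor's conductance G stores a weight;
# applying voltages V to the rows yields column currents I = G^T V
# (each device multiplies by Ohm's law, the column wires sum the
# currents by Kirchhoff's current law).

def crossbar_mvm(conductances, voltages):
    """conductances[i][j]: conductance (siemens) of the device at
    row i, column j; voltages[i]: volts applied to row i.
    Returns the current (amperes) collected at each column."""
    n_cols = len(conductances[0])
    currents = [0.0] * n_cols
    for row, v in zip(conductances, voltages):
        for j, g in enumerate(row):
            currents[j] += g * v  # devices multiply; wires sum
    return currents

# The stored "matrix" never moves: computation happens where the data lives.
G = [[1e-3, 2e-3],
     [3e-3, 4e-3]]
V = [1.0, 0.5]
print(crossbar_mvm(G, V))  # ≈ [0.0025, 0.004]
```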

“Memristors allow computers to perform many operations at the same time without having to move data around, thereby reducing latency, energy requirements, costs and chip size,” Dr. Mohammad explained. “We are focused on extending the logic gate design of the current memristor architecture with one that leads to even greater reduction of latency, energy dissipation and size.”

Logic gates perform a logical operation on one or more binary inputs and typically produce a single binary output. That is why they are at the heart of what makes a computer work, allowing a CPU to carry out a given set of instructions, which are received as electrical signals, using one or a combination of the seven basic logical operations: AND, OR, NOT, XOR, XNOR, NAND and NOR.
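For reference, those seven operations can be written out as ordinary Boolean functions in a few lines of Python; note that XNOR is simply the negation of XOR, outputting 1 exactly when its two inputs match. This is conventional voltage-style logic, not the resistance-based memristive implementation the team works on.

```python
# The seven basic logic operations, as ordinary Boolean functions
# on the integer values 0 and 1.

AND  = lambda a, b: a & b
OR   = lambda a, b: a | b
NOT  = lambda a: 1 - a
XOR  = lambda a, b: a ^ b
NAND = lambda a, b: NOT(AND(a, b))
NOR  = lambda a, b: NOT(OR(a, b))
XNOR = lambda a, b: NOT(XOR(a, b))  # 1 exactly when the inputs match

for a in (0, 1):
    for b in (0, 1):
        print(a, b, XNOR(a, b))
# prints: 0 0 1 / 0 1 0 / 1 0 0 / 1 1 1
```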

The team’s latest work is aimed at advancing a memristor’s ability to perform a complex logic operation, known as the XNOR (Exclusive NOR) logic gate function, the most complex operation among the seven basic logic gate types.

Designing memristive logic gates is difficult, as they require that each electrical input and output be in the form of electrical resistance rather than electrical voltage.

“However, we were able to successfully design an XNOR logic gate prototype with a novel structure, layering bipolar and unipolar memristor types in a heterogeneous stack, which reduced the latency and energy consumption of a memristive XNOR logic gate circuit by 50% compared to the state-of-the-art stateful logic proposed by leading research institutes,” Dr. Mohammad revealed.

The team’s current work builds on five years of research in the field of memristors, which is expected to reach a market value of US$384 million by 2025, according to a recent report from Research and Markets. Up to now, the team has fabricated and characterized several memristor prototypes, assessing how different design structures influence efficiency and inform potential applications. Some innovative memristor technology applications the team discovered include machine vision, radiation sensing and diabetes detection. Two patents have already been issued by the US Patents and Trademark Office (USPTO) for novel memristor designs invented by the team, with two additional patents pending.

Their robust research efforts have also led to the publication of several papers on the technology in high impact journals, including The Journal of Physical Chemistry, Materials Chemistry and Physics, and IEEE TCAS. This strong technology base paved the way for undergraduate senior students Reem Aldahmani, Amani Alshkeili, and Reem Jassem Jaffar to build novel and efficient memristive sensing prototypes.

The memristor research is also set to get an additional boost thanks to the new University merger, which Dr. Mohammad believes could help expedite the team’s research and development efforts through convenient and continuous access to the wider range of specialized facilities and tools the new university has on offer.

The team’s prototype memristors are now in the laboratory prototype stage, and Dr. Mohammad plans to initiate discussions for internal partnership opportunities with the Khalifa University Robotics Institute, followed by external collaboration with leading semiconductor companies such as Abu Dhabi-owned GlobalFoundries, to accelerate the transfer of his team’s technology to the market.

With initial positive findings and the promise of further development through the University’s enhanced portfolio of research facilities, this project is a perfect demonstration of how the Khalifa University of Science and Technology is pushing the envelope of electronics and semiconductor technologies to help transform Abu Dhabi into a high-tech hub for research and entrepreneurship.

h/t Oct. 4, 2017 Nanowerk news item

Slightly restating it from the press release: a memristor is a nanoscale electrical component which mimics neural plasticity. ‘Memristor’ combines the words ‘memory’ and ‘resistor’.

For those who’d like a little more: an electrical circuit is made up of three basic components: capacitors, inductors, and resistors. The resistor is the circuit element that resists the flow of electric current. As for how this relates to the memristor (from the Memristor Wikipedia entry; Note: Links have been removed),

The memristor’s electrical resistance is not constant but depends on the history of current that had previously flowed through the device, i.e., its present resistance depends on how much electric charge has flowed in what direction through it in the past; the device remembers its history — the so-called non-volatility property.[2] When the electric power supply is turned off, the memristor remembers its most recent resistance until it is turned on again.

The memristor could lead to more energy-saving devices, but much of the current (pun noted) interest lies in its similarity to neural plasticity and its potential application in neuromorphic engineering (brainlike computing).
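The history-dependent resistance the Wikipedia excerpt describes can be mimicked in a few lines of code. The following is a deliberately simplified toy, not a physical device model: resistance interpolates linearly between an “on” and an “off” value according to the net charge that has passed through, and the state persists because nothing resets it when power is removed. All names and parameter values are invented for the illustration.

```python
# Toy model of the non-volatility property quoted above: resistance
# depends on the history of charge that has flowed through the device,
# and persists when power is removed. (Illustration only.)

class ToyMemristor:
    def __init__(self, r_on=100.0, r_off=16000.0, q_max=1.0):
        self.r_on, self.r_off, self.q_max = r_on, r_off, q_max
        self.q = 0.0  # net charge that has passed through (normalized)

    def apply_current(self, current, dt):
        # Positive current drives the device toward r_on, negative
        # current toward r_off; the accumulated charge q is the memory.
        self.q = min(max(self.q + current * dt, 0.0), self.q_max)

    @property
    def resistance(self):
        # Linear interpolation between the two limiting resistances.
        frac = self.q / self.q_max
        return self.r_off + (self.r_on - self.r_off) * frac

m = ToyMemristor()
print(m.resistance)        # 16000.0 (fully "off")
m.apply_current(1.0, 0.5)  # push charge through for a while
print(m.resistance)        # 8050.0 (halfway)
# "Power off" changes nothing: the state q, hence the resistance, remains.
```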

Here’s a sampling of some of the more recent memristor postings on this blog:

August 24, 2017: Neuristors and brainlike computing

June 28, 2017: Dr. Wei Lu and bio-inspired ‘memristor’ chips

May 2, 2017: Predicting how a memristor functions

December 30, 2016: Changing synaptic connectivity with a memristor

December 5, 2016: The memristor as computing device

November 1, 2016: The memristor as the ‘missing link’ in bioelectronic medicine?

You can find more by using ‘memristor’ as the search term in the blog search function or on the search engine of your choice.