Tag Archives: Wei Lu

A nonvolatile photo-memristor

Credit: Xiao Fu, Tangxin Li, Bin Caid, Jinshui Miao, Gennady N. Panin, Xinyu Ma, Jinjin Wang, Xiaoyong Jiang, Qing Lia, Yi Dong, Chunhui Hao, Juyi Sun, Hangyu Xu, Qixiao Zhao, Mengjia Xia, Bo Song, Fansheng Chen, Xiaoshuang Chen, Wei Lu, Weida Hu

It took a while to get there, but the February 13, 2023 news item on phys.org announced research into extending memristors from tunable conductance to reconfigurable photoresponse,

In traditional vision systems, the optical information is captured by a frame-based digital camera, and the digital signal is then processed using machine-learning algorithms. In this scenario, a large amount of (mostly redundant) data has to be transferred from standalone sensing elements to the processing units, which leads to high latency and power consumption.

To address this problem, much effort has been devoted to developing an efficient approach, where some of the memory and computational tasks are offloaded to sensor elements that can perceive and process the optical signal simultaneously.

In a new paper published in Light: Science & Applications, a team of scientists led by Professor Weida Hu of the School of Physics and Optoelectronic Engineering, Hangzhou Institute for Advanced Study, University of Chinese Academy of Sciences (Hangzhou, China) and the State Key Laboratory of Infrared Physics, Shanghai Institute of Technical Physics, Chinese Academy of Sciences (Shanghai, China), together with co-workers, has developed a non-volatile photo-memristor in which the reconfigurable responsivity can be modulated by the charge and/or photon flux through it and further stored in the device.

A February 13, 2023 Chinese Academy of Sciences press release, which originated the news item, provided more technical detail about the work,

The non-volatile photo-memristor has a simple two-terminal architecture, in which photoexcited carriers and oxygen-related ions are coupled, leading to a displaced and pinched hysteresis in the current-voltage characteristics. For the first time, non-volatile photo-memristors implement computationally complete logic with photoresponse-stateful operations, for which the same photo-memristor serves as both a logic gate and memory, using photoresponse as a physical state variable instead of light, voltage and memristance. Polarity reversal of photo-memristors shows great potential for in-memory sensing and computing with feature extraction and image recognition for neuromorphic vision.
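The idea of "stateful" logic, where the same device is both gate and memory, is often illustrated with the well-known memristive material-implication (IMPLY) scheme. The sketch below uses abstract 0/1 device states rather than the paper's photoresponse states, purely to show why such operations are computationally complete: the result of q ← (NOT p) OR q is written back into device q itself.

```python
# Toy illustration of stateful logic (the classic memristive IMPLY gate,
# with abstract 0/1 states standing in for device states): the result of
# each operation is stored in one of the participating devices, so the
# devices serve as both the logic gate and the memory.
def imply(p, q):
    """Material implication: returns (NOT p) OR q."""
    return 1 if (p == 0 or q == 1) else 0

def nand(a, b):
    # IMPLY plus a device reset to 0 is computationally complete:
    # NAND(a, b) = b IMPLY (a IMPLY 0)
    work = 0                 # working device initialized to 0
    t = imply(a, work)       # t = NOT a, written into the working device
    return imply(b, t)       # (NOT b) OR (NOT a) = NAND(a, b)

# NAND is universal, so any Boolean function can be built this way.
for a in (0, 1):
    for b in (0, 1):
        assert nand(a, b) == (0 if (a and b) else 1)
```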

The photo-memristor demonstrates tunable short-circuit current in a non-volatile mode under illumination. By mimicking the biological functionalities of the human retina and designing specific device structures, the devices can act as a neural network for neuromorphic visual processing and implement completely photoresponse-stateful logic operations triggered by electrical and light stimuli together. They can support various kinds of sensing tasks with all-in-one sensing-memory-computing approaches. These scientists summarize the operational principle and features of their device:

“We design[ed] a two-terminal device with MoS2-xOx and specific graphene for three purposes in one: (1) to provide low barrier energy for the migration of oxygen ions; (2) to perform as geometry-asymmetric metal–semiconductor–metal van der Waals heterostructures with multi-photoresponse states; and (3) as an extension of a memristor, this device not only provides tunable conductance, but also demonstrates reconfigurable photoresponse for reading at zero bias voltage.”

“Moreover, the tunable short-circuit photocurrent and photoresponse can be increased to 889.8 nA and 98.8 mA/W, respectively, which are much higher than those of other reconfigurable phototransistors based on 2D materials. To reverse the channel polarity and obtain a gate-tunable short-circuit photocurrent, the channel semiconductor must be thin enough. Thus, it is difficult to use the thick film needed to absorb enough light to get a large signal. In our case, the mechanism of the two-terminal device rearrangement is based on ion migration, which is not limited by the thickness. We can increase the thickness of the film to absorb more photons and get a large short-circuit photocurrent,” they added.

“This new concept of a two-terminal photo-memristor not only enables all-in-one sensing-memory-computing approaches for neuromorphic vision hardware, but also brings great convenience for high-density integration,” the scientists forecast.

Here’s a link to and a citation for the paper,

Graphene/MoS2−xOx/graphene photomemristor with tunable non-volatile responsivities for neuromorphic vision processing by Xiao Fu, Tangxin Li, Bin Caid, Jinshui Miao, Gennady N. Panin, Xinyu Ma, Jinjin Wang, Xiaoyong Jiang, Qing Lia, Yi Dong, Chunhui Hao, Juyi Sun, Hangyu Xu, Qixiao Zhao, Mengjia Xia, Bo Song, Fansheng Chen, Xiaoshuang Chen, Wei Lu, Weida Hu. Light: Science & Applications volume 12, Article number: 39 (2023) DOI: https://doi.org/10.1038/s41377-023-01079-5 Published: 07 February 2023

This paper is open access.

Will you be my friend? Yes, after we activate our ultraminiature, wireless, battery-free, fully implantable devices

Perhaps I’m the only one who’s disconcerted?

Here’s the research (in text form) as to why we’re watching these scampering, momentary mouse friends, from a May 10, 2021 Northwestern University news release (also on EurekAlert) by Amanda Morris,

Northwestern University researchers are building social bonds with beams of light.

For the first time ever, Northwestern engineers and neurobiologists have wirelessly programmed — and then deprogrammed — mice to socially interact with one another in real time. The advancement is thanks to a first-of-its-kind ultraminiature, wireless, battery-free and fully implantable device that uses light to activate neurons.

This is the first optogenetics (a method for controlling neurons with light) study to explore social interactions within groups of animals, something that was impossible with earlier technologies.

The research was published May 10 [2021] in the journal Nature Neuroscience.

The thin, flexible, wireless nature of the implant allows the mice to look normal and behave normally in realistic environments, enabling researchers to observe them under natural conditions. Previous research using optogenetics required fiberoptic wires, which restrained mouse movements and caused them to become entangled during social interactions or in complex environments.

“With previous technologies, we were unable to observe multiple animals socially interacting in complex environments because they were tethered,” said Northwestern neurobiologist Yevgenia Kozorovitskiy, who designed the experiment. “The fibers would break or the animals would become entangled. In order to ask more complex questions about animal behavior in realistic environments, we needed this innovative wireless technology. It’s tremendous to get away from the tethers.”

“This paper represents the first time we’ve been able to achieve wireless, battery-free implants for optogenetics with full, independent digital control over multiple devices simultaneously in a given environment,” said Northwestern bioelectronics pioneer John A. Rogers, who led the technology development. “Brain activity in an isolated animal is interesting, but going beyond research on individuals to studies of complex, socially interacting groups is one of the most important and exciting frontiers in neuroscience. We now have the technology to investigate how bonds form and break between individuals in these groups and to examine how social hierarchies arise from these interactions.”

Kozorovitskiy is the Soretta and Henry Shapiro Research Professor of Molecular Biology and associate professor of neurobiology in Northwestern’s Weinberg College of Arts and Sciences. She also is a member of the Chemistry of Life Processes Institute. Rogers is the Louis Simpson and Kimberly Querrey Professor of Materials Science and Engineering, Biomedical Engineering and Neurological Surgery in the McCormick School of Engineering and Northwestern University Feinberg School of Medicine and the director of the Querrey Simpson Institute for Bioelectronics.

Kozorovitskiy and Rogers led the work with Yonggang Huang, the Jan and Marcia Achenbach Professor in Mechanical Engineering at McCormick, and Zhaoqian Xie, a professor of engineering mechanics at Dalian University of Technology in China. The paper’s co-first authors are Yiyuan Yang, Mingzheng Wu and Abraham Vázquez-Guardado — all at Northwestern.

Promise and problems of optogenetics

Because the human brain is a system of nearly 100 billion intertwined neurons, it’s extremely difficult to probe single — or even groups of — neurons. Introduced in animal models around 2005, optogenetics offers control of specific, genetically targeted neurons in order to probe them in unprecedented detail to study their connectivity or neurotransmitter release. Researchers first modify neurons in living mice to express a modified gene from light-sensitive algae. Then they can use external light to specifically control and monitor brain activity. Because of the genetic engineering involved, the method is not yet approved in humans.

“It sounds like sci-fi, but it’s an incredibly useful technique,” Kozorovitskiy said. “Optogenetics could someday soon be used to fix blindness or reverse paralysis.”

Previous optogenetics studies, however, were limited by the available technology to deliver light. Although researchers could easily probe one animal in isolation, it was challenging to simultaneously control neural activity in flexible patterns within groups of animals interacting socially. Fiberoptic wires typically emerged from an animal’s head, connecting to an external light source. Then a software program could be used to turn the light off and on, while monitoring the animal’s behavior.

“As the animals moved around, the fibers tugged in different ways,” Rogers said. “As expected, these effects changed the animal’s patterns of motion. One, therefore, has to wonder: What behavior are you actually studying? Are you studying natural behaviors or behaviors associated with a physical constraint?”

Wireless control in real time

A world-renowned leader in wireless, wearable technology, Rogers and his team developed a tiny, wireless device that gently rests on the skull’s outer surface but beneath the skin and fur of a small animal. The half-millimeter-thick device connects to a fine, flexible filamentary probe with LEDs on the tip, which extends down into the brain through a tiny cranial defect.

The miniature device leverages near-field communication protocols, the same technology used in smartphones for electronic payments. Researchers wirelessly operate the light in real time with a user interface on a computer. An antenna surrounding the animals’ enclosure delivers power to the wireless device, thereby eliminating the need for a bulky, heavy battery.

Activating social connections

To establish proof of principle for Rogers’ technology, Kozorovitskiy and colleagues designed an experiment to explore an optogenetics approach to remote-control social interactions among pairs or groups of mice.

When mice were physically near one another in an enclosed environment, Kozorovitskiy’s team wirelessly and synchronously activated a set of neurons in a brain region related to higher order executive function, causing the mice to increase the frequency and duration of social interactions. Desynchronizing the stimulation promptly decreased social interactions in the same pair of mice. In a group setting, researchers could bias an arbitrarily chosen pair to interact more than others.

“We didn’t actually think this would work,” Kozorovitskiy said. “To our knowledge, this is the first direct evaluation of a major long-standing hypothesis about neural synchrony in social behavior.”

Here’s a citation and a link to the paper,

Wireless multilateral devices for optogenetic studies of individual and social behaviors by Yiyuan Yang, Mingzheng Wu, Amy J. Wegener, Jose G. Grajales-Reyes, Yujun Deng, Taoyi Wang, Raudel Avila, Justin A. Moreno, Samuel Minkowicz, Vasin Dumrongprechachan, Jungyup Lee, Shuangyang Zhang, Alex A. Legaria, Yuhang Ma, Sunita Mehta, Daniel Franklin, Layne Hartman, Wubin Bai, Mengdi Han, Hangbo Zhao, Wei Lu, Yongjoon Yu, Xing Sheng, Anthony Banks, Xinge Yu, Zoe R. Donaldson, Robert W. Gereau IV, Cameron H. Good, Zhaoqian Xie, Yonggang Huang, Yevgenia Kozorovitskiy and John A. Rogers. Nature Neuroscience (2021)
DOI: https://doi.org/10.1038/s41593-021-00849-x Published 10 May 2021

This paper is behind a paywall.

This latest research seems to be the continuation of research featured here in a July 16, 2019 posting: “Controlling neurons with light: no batteries or wires needed.”

Memristors with better mimicry of synapses

It seems to me it’s been quite a while since I’ve stumbled across a memristor story from the University of Michigan, but this one was worth waiting for. (Much of the research around memristors has to do with their potential application in neuromorphic (brainlike) computers.) From a December 17, 2018 news item on ScienceDaily,

A new electronic device developed at the University of Michigan can directly model the behaviors of a synapse, which is a connection between two neurons.

For the first time, the way that neurons share or compete for resources can be explored in hardware without the need for complicated circuits.

“Neuroscientists have argued that competition and cooperation behaviors among synapses are very important. Our new memristive devices allow us to implement a faithful model of these behaviors in a solid-state system,” said Wei Lu, U-M professor of electrical and computer engineering and senior author of the study in Nature Materials.

A December 17, 2018 University of Michigan news release (also on EurekAlert), which originated the news item, provides an explanation of memristors and their ‘similarity’ to synapses while providing more details about this latest research,

Memristors are electrical resistors with memory–advanced electronic devices that regulate current based on the history of the voltages applied to them. They can store and process data simultaneously, which makes them a lot more efficient than traditional systems. They could enable new platforms that process a vast number of signals in parallel and are capable of advanced machine learning.

The memristor is a good model for a synapse. It mimics the way that the connections between neurons strengthen or weaken when signals pass through them. But the changes in conductance typically come from changes in the shape of the channels of conductive material within the memristor. These channels–and the memristor’s ability to conduct electricity–could not be precisely controlled in previous devices.

Now, the U-M team has made a memristor in which they have better command of the conducting pathways. They developed a new material out of the semiconductor molybdenum disulfide–a “two-dimensional” material that can be peeled into layers just a few atoms thick. Lu’s team injected lithium ions into the gaps between molybdenum disulfide layers.

They found that if there are enough lithium ions present, the molybdenum disulfide transforms its lattice structure, enabling electrons to run through the film easily as if it were a metal. But in areas with too few lithium ions, the molybdenum disulfide restores its original lattice structure and becomes a semiconductor, and electrical signals have a hard time getting through.

The lithium ions are easy to rearrange within the layer by sliding them with an electric field. This changes the size of the regions that conduct electricity little by little and thereby controls the device’s conductance.

“Because we change the ‘bulk’ properties of the film, the conductance change is much more gradual and much more controllable,” Lu said.

In addition to making the devices behave better, the layered structure enabled Lu’s team to link multiple memristors together through shared lithium ions–creating a kind of connection that is also found in brains. A single neuron’s dendrite, or its signal-receiving end, may have several synapses connecting it to the signaling arms of other neurons. Lu compares the availability of lithium ions to that of a protein that enables synapses to grow.

If the growth of one synapse releases these proteins, called plasticity-related proteins, other synapses nearby can also grow–this is cooperation. Neuroscientists have argued that cooperation between synapses helps to rapidly form vivid memories that last for decades and create associative memories, like a scent that reminds you of your grandmother’s house, for example. If the protein is scarce, one synapse will grow at the expense of the other–and this competition pares down our brains’ connections and keeps them from exploding with signals.
Lu’s team was able to show these phenomena directly using their memristor devices. In the competition scenario, lithium ions were drained away from one side of the device. The side with the lithium ions increased its conductance, emulating the growth, and the conductance of the device with little lithium was stunted.

In a cooperation scenario, they made a memristor network with four devices that can exchange lithium ions, and then siphoned some lithium ions from one device out to the others. In this case, not only could the lithium donor increase its conductance–the other three devices could too, although their signals weren’t as strong.
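The cooperation-versus-competition picture can be summarized with a toy resource model (my own illustration, not the authors' simulation): synapses draw growth from a shared pool, standing in for the shared lithium ions or plasticity-related proteins. A plentiful pool lets every synapse grow; a scarce pool rations growth.

```python
# Toy model (not the authors' code) of synapses drawing on a shared,
# finite resource pool, analogous to the shared lithium ions /
# plasticity-related proteins described above.
def grow(weights, pool, demand):
    """Each synapse requests `demand` units of resource to grow.
    Plenty of resource -> all grow fully (cooperation); scarce
    resource -> growth is rationed (competition)."""
    total_demand = demand * len(weights)
    share = min(1.0, pool / total_demand)        # fraction of demand each gets
    grown = [w + demand * share for w in weights]
    remaining = pool - demand * share * len(weights)
    return grown, remaining

weights = [1.0, 1.0, 1.0, 1.0]                   # four connected devices

rich, _ = grow(weights, pool=8.0, demand=1.0)    # cooperation: pool is ample
scarce, _ = grow(weights, pool=1.0, demand=1.0)  # competition: pool is scarce

assert all(w == 2.0 for w in rich)     # every synapse grows fully
assert all(w == 1.25 for w in scarce)  # growth is stunted by scarcity
```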

Lu’s team is currently building networks of memristors like these to explore their potential for neuromorphic computing, which mimics the circuitry of the brain.

Here’s a link to and a citation for the paper,

Ionic modulation and ionic coupling effects in MoS2 devices for neuromorphic computing by Xiaojian Zhu, Da Li, Xiaogan Liang, & Wei D. Lu. Nature Materials (2018) DOI: https://doi.org/10.1038/s41563-018-0248-5 Published 17 December 2018

This paper is behind a paywall.

The researchers have made images illustrating their work available,

A schematic of the molybdenum disulfide layers with lithium ions between them. On the right, the simplified inset shows how the molybdenum disulfide changes its atom arrangements in the presence and absence of the lithium atoms, between a metal (1T’ phase) and semiconductor (2H phase), respectively. Image credit: Xiaojian Zhu, Nanoelectronics Group, University of Michigan.

A diagram of a synapse receiving a signal from one of the connecting neurons. This signal activates the generation of plasticity-related proteins (PRPs), which help a synapse to grow. They can migrate to other synapses, which enables multiple synapses to grow at once. The new device is the first to mimic this process directly, without the need for software or complicated circuits. Image credit: Xiaojian Zhu, Nanoelectronics Group, University of Michigan.
An electron microscope image showing the rectangular gold (Au) electrodes representing signalling neurons and the rounded electrode representing the receiving neuron. The material of molybdenum disulfide layered with lithium connects the electrodes, enabling the simulation of cooperative growth among synapses. Image credit: Xiaojian Zhu, Nanoelectronics Group, University of Michigan.

That’s all folks.

Bringing memristors to the masses and cutting down on energy use

One of my earliest posts featuring memristors (May 9, 2008) focused on their potential for energy savings but since then most of my postings feature research into their application in the field of neuromorphic (brainlike) computing. (For a description and abbreviated history of the memristor go to this page on my Nanotech Mysteries Wiki.)

In a sense this July 30, 2018 news item on Nanowerk is a return to the beginning,

A new way of arranging advanced computer components called memristors on a chip could enable them to be used for general computing, which could cut energy consumption by a factor of 100.

This would improve performance in low power environments such as smartphones or make for more efficient supercomputers, says a University of Michigan researcher.

“Historically, the semiconductor industry has improved performance by making devices faster. But although the processors and memories are very fast, they can’t be efficient because they have to wait for data to come in and out,” said Wei Lu, U-M professor of electrical and computer engineering and co-founder of memristor startup Crossbar Inc.

Memristors might be the answer. Named as a portmanteau of memory and resistor, they can be programmed to have different resistance states–meaning they store information as resistance levels. These circuit elements enable memory and processing in the same device, cutting out the data transfer bottleneck experienced by conventional computers in which the memory is separate from the processor.

A July 30, 2018 University of Michigan news release (also on EurekAlert), which originated the news item, expands on the theme,

… unlike ordinary bits, which are 1 or 0, memristors can have resistances that are on a continuum. Some applications, such as computing that mimics the brain (neuromorphic), take advantage of the analog nature of memristors. But for ordinary computing, trying to differentiate among small variations in the current passing through a memristor device is not precise enough for numerical calculations.

Lu and his colleagues got around this problem by digitizing the current outputs—defining current ranges as specific bit values (i.e., 0 or 1). The team was also able to map large mathematical problems into smaller blocks within the array, improving the efficiency and flexibility of the system.

Computers with these new blocks, which the researchers call “memory-processing units,” could be particularly useful for implementing machine learning and artificial intelligence algorithms. They are also well suited to tasks that are based on matrix operations, such as simulations used for weather prediction. The simplest mathematical matrices, akin to tables with rows and columns of numbers, can map directly onto the grid of memristors.

The memristor array situated on a circuit board. Credit: Mohammed Zidan, Nanoelectronics group, University of Michigan.

Once the memristors are set to represent the numbers, operations that multiply and sum the rows and columns can be taken care of simultaneously, with a set of voltage pulses along the rows. The current measured at the end of each column contains the answers. A typical processor, in contrast, would have to read the value from each cell of the matrix, perform multiplication, and then sum up each column in series.

“We get the multiplication and addition in one step. It’s taken care of through physical laws. We don’t need to manually multiply and sum in a processor,” Lu said.
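The "one step" Lu describes follows from two physical laws: Ohm's law gives the per-cell products, and Kirchhoff's current law sums them along each column. A NumPy sketch (with made-up conductance values) shows the correspondence:

```python
import numpy as np

# Sketch of analog matrix-vector multiplication in a memristor crossbar:
# conductances G encode the matrix, row voltages V encode the vector, and
# each column current is I_j = sum_i G[i, j] * V[i]
# (Ohm's law per cell, Kirchhoff's current law per column).
G = np.array([[1.0, 2.0],
              [3.0, 4.0]]) * 1e-6   # programmed conductances, in siemens
V = np.array([0.5, 0.25])           # read voltages applied along the rows

I = V @ G                           # column currents: the whole product "in one step"

# A conventional processor would instead read and multiply cell by cell:
I_serial = [sum(G[i, j] * V[i] for i in range(2)) for j in range(2)]
assert np.allclose(I, I_serial)
```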

His team chose to solve partial differential equations as a test for a 32×32 memristor array—which Lu imagines as just one block of a future system. These equations, including those behind weather forecasting, underpin many problems in science and engineering but are very challenging to solve. The difficulty comes from the complicated forms and multiple variables needed to model physical phenomena.

When solving partial differential equations exactly is impossible, solving them approximately can require supercomputers. These problems often involve very large matrices of data, so the memory-processor communication bottleneck is neatly solved with a memristor array. The equations Lu’s team used in their demonstration simulated a plasma reactor, such as those used for integrated circuit fabrication.
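To see why a crossbar helps here: discretizing a PDE by finite differences turns it into a linear system, and iterative solvers for that system spend nearly all their time on matrix-vector products, exactly the operation the array performs in one step. A minimal sketch (my own illustration, not the paper's plasma-reactor problem) with a 1-D Poisson equation and Jacobi iteration:

```python
import numpy as np

# Illustrative sketch (not the paper's solver): the 1-D Poisson equation
# -u'' = f, discretized by finite differences, becomes A u = f * h^2, and a
# Jacobi iteration is dominated by the matrix-vector product A @ u that a
# memristor crossbar would perform in a single analog step.
n = 32                               # matches the paper's 32x32 array size
h = 1.0 / (n + 1)
f = np.ones(n)                       # constant source term
A = (np.diag(2.0 * np.ones(n))       # standard tridiagonal Laplacian
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))
b = f * h**2

u = np.zeros(n)
D_inv = 1.0 / np.diag(A)
for _ in range(5000):                # Jacobi: u <- u + D^-1 (b - A u)
    u = u + D_inv * (b - A @ u)      # the A @ u step is the crossbar's job

assert np.allclose(A @ u, b, atol=1e-6)   # the iteration has converged
```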

This work is described in a study, “A general memristor-based partial differential equation solver,” published in the journal Nature Electronics.

It was supported by the Defense Advanced Research Projects Agency (DARPA) (grant no. HR0011-17-2-0018) and by the National Science Foundation (NSF) (grant no. CCF-1617315).

Here’s a link and a citation for the paper,

A general memristor-based partial differential equation solver by Mohammed A. Zidan, YeonJoo Jeong, Jihang Lee, Bing Chen, Shuo Huang, Mark J. Kushner & Wei D. Lu. Nature Electronics volume 1, pages 411–420 (2018) DOI: https://doi.org/10.1038/s41928-018-0100-6 Published: 13 July 2018

This paper is behind a paywall.

For the curious, Dr. Lu’s startup company, Crossbar, can be found here.

Leftover 2017 memristor news bits

I have two bits of news: one from October 2017 about using light to control a memristor’s learning properties and one from December 2017 about memristors and neural networks.

Shining a light on the memristor

Michael Berger wrote an October 30, 2017 Nanowerk Spotlight article about some of the latest work concerning memristors and light,

Memristors – or resistive memory – are nanoelectronic devices that are very promising components for next generation memory and computing devices. They are two-terminal electric elements similar to a conventional resistor – however, the electric resistance in a memristor is dependent on the charge passing through it, which means that its conductance can be precisely modulated by charge or flux through it. Its special property is that its resistance can be programmed (resistor function) and subsequently remains stored (memory function).

In this sense, a memristor is similar to a synapse in the human brain because it exhibits the same switching characteristics, i.e. it is able, with a high level of plasticity, to modify the efficiency of signal transfer between neurons under the influence of the transfer itself. That’s why researchers are hopeful to use memristors for the fabrication of electronic synapses for neuromorphic (i.e. brain-like) computing that mimics some of the aspects of learning and computation in human brains.
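The "programmed and then stored" behavior can be captured with a minimal charge-controlled model (a sketch loosely following the linear ion-drift picture; the parameter values are illustrative, not from any of the papers discussed here):

```python
# Minimal sketch of a charge-controlled memristor: conductance drifts in
# proportion to the charge that has passed through, within physical bounds.
# All parameter values are illustrative.
class Memristor:
    def __init__(self, g_min=1e-6, g_max=1e-3, g=1e-4, k=1e-2):
        self.g_min, self.g_max = g_min, g_max   # conductance bounds (siemens)
        self.g = g                              # current conductance state
        self.k = k                              # state change per unit charge

    def apply(self, voltage, dt):
        """Apply a voltage for dt seconds; the state drifts with the charge."""
        current = self.g * voltage              # Ohm's law at the present state
        self.g += self.k * current * dt         # charge moves the internal state
        self.g = min(self.g_max, max(self.g_min, self.g))
        return current

m = Memristor()
g0 = m.g
for _ in range(100):
    m.apply(1.0, 1e-3)    # positive pulses potentiate (raise conductance)
assert m.g > g0           # the new state persists: a resistor with memory
```

Like a synapse, repeated signaling in one direction strengthens the connection, and the strength remains after the stimulus stops.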

Human brains may be slow at pure number crunching but they are excellent at handling fast dynamic sensory information such as image and voice recognition. Walking is something that we take for granted but this is quite challenging for robots, especially over uneven terrain.

“Memristors present an opportunity to make new types of computers that are different from existing von Neumann architectures, which traditional computers are based upon,” Dr Neil T. Kemp, a Lecturer in Physics at the University of Hull [UK], tells Nanowerk. “Our team at the University of Hull is focussed on making memristor devices dynamically reconfigurable and adaptive – we believe this is the route to making a new generation of artificial intelligence systems that are smarter and can exhibit complex behavior. Such systems would also have the advantage of memristors, high density integration and lower power usage, so these systems would be more lightweight, portable and not need re-charging so often – which is something really needed for robots etc.”

In their new paper in Nanoscale (“Reversible Optical Switching Memristors with Tunable STDP Synaptic Plasticity: A Route to Hierarchical Control in Artificial Intelligent Systems”), Kemp and his team demonstrate the ability to reversibly control the learning properties of memristors via optical means.

The reversibility is achieved by changing the polarization of light. The researchers have used this effect to demonstrate tuneable learning in a memristor. One way this is achieved is through something called Spike Timing Dependent Plasticity (STDP), which is an effect known to occur in human brains and is linked with sensory perception, spatial reasoning, language and conscious thought in the neocortex.

STDP learning is based upon differences in the arrival time of signals from two adjacent neurons. The University of Hull team has shown that they can modulate the synaptic plasticity via optical means which enables the devices to have tuneable learning.
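The STDP rule itself is compact enough to state in code. In the standard form (illustrative constants below), the weight change depends on the timing difference Δt between post- and presynaptic spikes: pre-before-post strengthens the synapse, post-before-pre weakens it, with an exponentially decaying window.

```python
import math

# Classic STDP window (illustrative constants): a presynaptic spike
# arriving just before the postsynaptic one (dt > 0) strengthens the
# synapse; one arriving just after (dt < 0) weakens it. The effect
# decays exponentially with the timing gap.
def stdp_dw(dt_ms, a_plus=0.1, a_minus=0.12, tau_ms=20.0):
    """Weight change for dt_ms = t_post - t_pre (milliseconds)."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)     # potentiation
    elif dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau_ms)    # depression
    return 0.0

assert stdp_dw(5.0) > 0                  # pre just before post -> strengthen
assert stdp_dw(-5.0) < 0                 # post just before pre -> weaken
assert stdp_dw(5.0) > stdp_dw(50.0)      # closer in time -> larger change
```

In the Hull devices, light polarization modulates the parameters of this window, which is what "tuneable learning" refers to.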

“Our research findings are important because it demonstrates that light can be used to control the learning properties of a memristor,” Kemp points out. “We have shown that light can be used in a reversible manner to change the connection strength (or conductivity) of artificial memristor synapses and as well control their ability to forget i.e. we can dynamically change device to have short-term or long-term memory.”

According to the team, there are many potential applications, such as adaptive electronic circuits controllable via light, or in more complex systems, such as neuromorphic computing, the development of optically reconfigurable neural networks.

Having optically controllable memristors can also facilitate the implementation of hierarchical control in larger artificial-brain like systems, whereby some of the key processes that are carried out by biological molecules in human brains can be emulated in solid-state devices through patterning with light.

Some of these processes include synaptic pruning, conversion of short term memory to long term memory, erasing of certain memories that are no longer needed or changing the sensitivity of synapses to be more adept at learning new information.

“The ability to control this dynamically, both spatially and temporally, is particularly interesting since it would allow neural networks to be reconfigurable on the fly through either spatial patterning or by adjusting the intensity of the light source,” notes Kemp.

Currently, the devices are more suited to neuromorphic computing applications, which do not need to be as fast. Optical control of memristors opens the route to dynamically tuneable and reprogrammable synaptic circuits as well the ability (via optical patterning) to have hierarchical control in larger and more complex artificial intelligent systems.

“Artificial Intelligence is really starting to come on strong in many areas, especially in the areas of voice/image recognition and autonomous systems – we could even say that this is the next revolution, similarly to what the industrial revolution was to farming and production processes,” concludes Kemp. “There are many challenges to overcome though. …

That excerpt should give you the gist; those who need more information can read Berger’s article in full. Here’s a link to and a citation for the paper,

Reversible optical switching memristors with tunable STDP synaptic plasticity: a route to hierarchical control in artificial intelligent systems by Ayoub H. Jaafar, Robert J. Gray, Emanuele Verrelli, Mary O’Neill, Stephen M. Kelly, and Neil T. Kemp. Nanoscale, 2017, 9, 17091–17098 DOI: 10.1039/C7NR06138B First published on 24 Oct 2017

This paper is behind a paywall.

The memristor and the neural network

It would seem machine learning could experience a significant upgrade if the work in Wei Lu’s University of Michigan laboratory can be scaled for general use. From a December 22, 2017 news item on ScienceDaily,

A new type of neural network made with memristors can dramatically improve the efficiency of teaching machines to think like humans.

The network, called a reservoir computing system, could predict words before they are said during conversation, and help predict future outcomes based on the present.

The research team that created the reservoir computing system, led by Wei Lu, professor of electrical engineering and computer science at the University of Michigan, recently published their work in Nature Communications.

A December 19, 2017 University of Michigan news release (also on EurekAlert) by Dan Newman, which originated the news item, expands on the theme,

Reservoir computing systems, which improve on a typical neural network’s capacity and reduce the required training time, have been created in the past with larger optical components. However, the U-M group created their system using memristors, which require less space and can be integrated more easily into existing silicon-based electronics.

Memristors are a special type of resistive device that can both perform logic and store data. This contrasts with typical computer systems, where processors perform logic separate from memory modules. In this study, Lu’s team used a special memristor that memorizes events only in the near history.

Inspired by brains, neural networks are composed of neurons, or nodes, and synapses, the connections between nodes.

To train a neural network for a task, the network takes in a large set of questions and the answers to those questions. In this process, called supervised learning, the connections between nodes are weighted more heavily or lightly to minimize the amount of error in achieving the correct answer.

Once trained, a neural network can then be tested without knowing the answer. For example, a system can process a new photo and correctly identify a human face, because it has learned the features of human faces from other photos in its training set.

“A lot of times, it takes days or months to train a network,” says Lu. “It is very expensive.”

Image recognition is also a relatively simple problem, as it doesn’t require any information apart from a static image. More complex tasks, such as speech recognition, can depend highly on context and require neural networks to have knowledge of what has just occurred, or what has just been said.

“When transcribing speech to text or translating languages, a word’s meaning and even pronunciation will differ depending on the previous syllables,” says Lu.

This requires a recurrent neural network, which incorporates loops within the network that give the network a memory effect. However, training these recurrent neural networks is especially expensive, Lu says.

Reservoir computing systems built with memristors, however, can skip most of the expensive training process and still provide the network the capability to remember. This is because the most critical component of the system – the reservoir – does not require training.

When a set of data is inputted into the reservoir, the reservoir identifies important time-related features of the data, and hands it off in a simpler format to a second network. This second network then only needs training like simpler neural networks, changing weights of the features and outputs that the first network passed on until it achieves an acceptable level of error.
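The division of labor described above, an untrained dynamical reservoir feeding a small trained readout, can be sketched in software as an echo state network. This is a software cousin of the memristor reservoir, not the team’s actual device; all sizes and parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# The reservoir: a fixed random recurrent network that is never trained.
N_RES = 50
W_in = rng.uniform(-0.5, 0.5, (N_RES, 1))
W = rng.normal(0.0, 1.0, (N_RES, N_RES))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep the dynamics stable

def run_reservoir(u):
    """Drive the reservoir with scalar input sequence u; collect its states."""
    x = np.zeros(N_RES)
    states = []
    for u_t in u:
        x = np.tanh(W_in[:, 0] * u_t + W @ x)  # nonlinear fading memory
        states.append(x.copy())
    return np.array(states)

# Toy temporal task: predict the signal one step ahead.
u = np.sin(np.linspace(0.0, 8.0 * np.pi, 400))
X = run_reservoir(u[:-1])   # reservoir states carry recent history
y = u[1:]                   # targets: the next input value

# Only this linear readout is trained, here by ridge regression.
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N_RES), X.T @ y)
pred = X @ W_out
print("readout mean squared error:", np.mean((pred - y) ** 2))
```

Note that the reservoir’s weights are generated once and left alone: all the “expensive training” falls on a single linear solve for the readout, which is the efficiency argument Lu makes.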


IMAGE:  Schematic of a reservoir computing system, showing the reservoir with internal dynamics and the simpler output. Only the simpler output needs to be trained, allowing for quicker and lower-cost training. Courtesy Wei Lu.

“The beauty of reservoir computing is that while we design it, we don’t have to train it,” says Lu.

The team proved the reservoir computing concept using a test of handwriting recognition, a common benchmark among neural networks. Numerals were broken up into rows of pixels, and fed into the computer with voltages like Morse code, with zero volts for a dark pixel and a little over one volt for a white pixel.

Using only 88 memristors as nodes to identify handwritten versions of numerals, compared to a conventional network that would require thousands of nodes for the task, the reservoir achieved 91% accuracy.

Reservoir computing systems are especially adept at handling data that varies with time, like a stream of data or words, or a function depending on past results.

To demonstrate this, the team tested a complex function that depended on multiple past results, which is common in engineering fields. The reservoir computing system was able to model the complex function with minimal error.

Lu plans on exploring two future paths with this research: speech recognition and predictive analysis.

“We can make predictions on natural spoken language, so you don’t even have to say the full word,” explains Lu.

“We could actually predict what you plan to say next.”

In predictive analysis, Lu hopes to use the system to take in signals with noise, like static from far-off radio stations, and produce a cleaner stream of data. “It could also predict and generate an output signal even if the input stopped,” he says.


IMAGE:  Wei Lu, Professor of Electrical Engineering & Computer Science at the University of Michigan holds a memristor he created. Photo: Marcin Szczepanski.

The work was published in Nature Communications in the article, “Reservoir computing using dynamic memristors for temporal information processing”, with authors Chao Du, Fuxi Cai, Mohammed Zidan, Wen Ma, Seung Hwan Lee, and Prof. Wei Lu.

The research is part of a $6.9 million DARPA [US Defense Advanced Research Projects Agency] project, called “Sparse Adaptive Local Learning for Sensing and Analytics [also known as SALLSA],” that aims to build a computer chip based on self-organizing, adaptive neural networks. The memristor networks are fabricated at Michigan’s Lurie Nanofabrication Facility.

Lu and his team previously used memristors in implementing “sparse coding,” which used a 32-by-32 array of memristors to efficiently analyze and recreate images.

Here’s a link to and a citation for the paper,

Reservoir computing using dynamic memristors for temporal information processing by Chao Du, Fuxi Cai, Mohammed A. Zidan, Wen Ma, Seung Hwan Lee & Wei D. Lu. Nature Communications 8, Article number: 2204 (2017) doi:10.1038/s41467-017-02337-y Published online: 19 December 2017

This is an open access paper.

Dr. Wei Lu and bio-inspired ‘memristor’ chips

It’s been a while since I’ve featured Dr. Wei Lu’s work here. (This April 15, 2010 posting features Lu’s most relevant previous work.) Here’s his latest ‘memristor’ work, from a May 22, 2017 news item on Nanowerk (Note: A link has been removed),

Inspired by how mammals see, a new “memristor” computer circuit prototype at the University of Michigan has the potential to process complex data, such as images and video, orders of magnitude faster and with much less power than today’s most advanced systems.

Faster image processing could have big implications for autonomous systems such as self-driving cars, says Wei Lu, U-M professor of electrical engineering and computer science. Lu is lead author of a paper on the work published in the current issue of Nature Nanotechnology (“Sparse coding with memristor networks”).

Lu’s next-generation computer components use pattern recognition to shortcut the energy-intensive process conventional systems use to dissect images. In this new work, he and his colleagues demonstrate an algorithm that relies on a technique called “sparse coding” to coax their 32-by-32 array of memristors to efficiently analyze and recreate several photos.

A May 22, 2017 University of Michigan news release (also on EurekAlert), which originated the news item, provides more information about memristors and about the research,

Memristors are electrical resistors with memory—advanced electronic devices that regulate current based on the history of the voltages applied to them. They can store and process data simultaneously, which makes them a lot more efficient than traditional systems. In a conventional computer, logic and memory functions are located at different parts of the circuit.

“The tasks we ask of today’s computers have grown in complexity,” Lu said. “In this ‘big data’ era, computers require costly, constant and slow communications between their processor and memory to retrieve large amounts data. This makes them large, expensive and power-hungry.”

But like neural networks in a biological brain, networks of memristors can perform many operations at the same time, without having to move data around. As a result, they could enable new platforms that process a vast number of signals in parallel and are capable of advanced machine learning. Memristors are good candidates for deep neural networks, a branch of machine learning, which trains computers to execute processes without being explicitly programmed to do so.
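The “many operations at the same time” claim has a concrete circuit meaning: if memristor conductances encode a matrix, then by Ohm’s and Kirchhoff’s laws, voltages applied to the rows of a crossbar produce column currents equal to a vector–matrix product in a single analog step. A schematic sketch, with idealized made-up numbers and ignoring wire resistance and device nonlinearity:

```python
import numpy as np

# Each memristor's conductance (siemens) stores one matrix entry;
# rows are input lines, columns are output lines of the crossbar.
G = np.array([
    [1.0e-6, 2.0e-6],
    [3.0e-6, 4.0e-6],
])

v = np.array([0.2, 0.1])  # voltages applied to the rows (volts)

# Ohm's law per device plus Kirchhoff's current law per column:
# every multiply-accumulate happens simultaneously in the analog array.
i_out = v @ G
print(i_out)  # column currents: [5.0e-07, 8.0e-07] amperes
```

This is why memristor arrays suit neural networks: a layer’s weight matrix lives in the conductances where the computation happens, so inference needs no data shuttling between processor and memory.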

“We need our next-generation electronics to be able to quickly process complex data in a dynamic environment. You can’t just write a program to do that. Sometimes you don’t even have a pre-defined task,” Lu said. “To make our systems smarter, we need to find ways for them to process a lot of data more efficiently. Our approach to accomplish that is inspired by neuroscience.”

A mammal’s brain is able to generate sweeping, split-second impressions of what the eyes take in. One reason is that it can quickly recognize different arrangements of shapes. Humans do this using only a limited number of neurons that become active, Lu says. Both neuroscientists and computer scientists call the process “sparse coding.”

“When we take a look at a chair we will recognize it because its characteristics correspond to our stored mental picture of a chair,” Lu said. “Although not all chairs are the same and some may differ from a mental prototype that serves as a standard, each chair retains some of the key characteristics necessary for easy recognition. Basically, the object is correctly recognized the moment it is properly classified—when ‘stored’ in the appropriate category in our heads.”

Similarly, Lu’s electronic system is designed to detect the patterns very efficiently—and to use as few features as possible to describe the original input.

In our brains, different neurons recognize different patterns, Lu says.

“When we see an image, the neurons that recognize it will become more active,” he said. “The neurons will also compete with each other to naturally create an efficient representation. We’re implementing this approach in our electronic system.”

The researchers trained their system to learn a “dictionary” of images. Trained on a set of grayscale image patterns, their memristor network was able to reconstruct images of famous paintings and photos and other test patterns.
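The competition between neurons described above is commonly formalized in sparse-coding work as the locally competitive algorithm (LCA). The following is a generic software sketch of that idea, not the paper’s memristor implementation; the random dictionary and every parameter are made up. Neurons whose features overlap inhibit one another until only a few stay active:

```python
import numpy as np

rng = np.random.default_rng(1)

# Random normalized dictionary: 48 candidate features for 32-dim inputs.
D = rng.normal(size=(32, 48))
D /= np.linalg.norm(D, axis=0)

x = D[:, 3] + 0.5 * D[:, 7]      # input built from two dictionary features
b = D.T @ x                      # how strongly each neuron matches the input
inhibit = D.T @ D - np.eye(48)   # overlapping neurons suppress each other

u = np.zeros(48)                 # neuron "membrane potentials"
lam, dt = 0.1, 0.05
for _ in range(1000):
    a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # soft threshold
    u += dt * (b - u - inhibit @ a)                    # competition step

a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
print("active neurons:", np.nonzero(a)[0])  # a sparse set, ideally {3, 7}
```

The end state is what Lu calls an “efficient representation”: only the few neurons whose features actually compose the input remain above threshold.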

If their system can be scaled up, they expect to be able to process and analyze video in real time in a compact system that can be directly integrated with sensors or cameras.

The project is titled “Sparse Adaptive Local Learning for Sensing and Analytics.” Other collaborators are Zhengya Zhang and Michael Flynn of the U-M Department of Electrical Engineering and Computer Science, Garrett Kenyon of the Los Alamos National Lab and Christof Teuscher of Portland State University.

The work is part of a $6.9 million Unconventional Processing of Signals for Intelligent Data Exploitation project that aims to build a computer chip based on self-organizing, adaptive neural networks. It is funded by the [US] Defense Advanced Research Projects Agency [DARPA].

Here’s a link to and a citation for the paper,

Sparse coding with memristor networks by Patrick M. Sheridan, Fuxi Cai, Chao Du, Wen Ma, Zhengya Zhang, & Wei D. Lu. Nature Nanotechnology (2017) doi:10.1038/nnano.2017.83 Published online 22 May 2017

This paper is behind a paywall.

For the interested, there are a number of postings featuring memristors here (just use ‘memristor’ as your search term in the blog search engine). You might also want to check out ‘neuromorphic engineering’ and ‘neuromorphic computing’ and ‘artificial brain’.

Graphene ribbons in solution bending and twisting like DNA

An Aug. 15, 2016 news item on ScienceDaily announces research into graphene nanoribbons and their DNA (deoxyribonucleic acid)-like properties,

Graphene nanoribbons (GNRs) bend and twist easily in solution, making them adaptable for biological uses like DNA analysis, drug delivery and biomimetic applications, according to scientists at Rice University.

Knowing the details of how GNRs behave in a solution will help make them suitable for wide use in biomimetics, according to Rice physicist Ching-Hwa Kiang, whose lab employed its unique capabilities to probe nanoscale materials like cells and proteins in wet environments. Biomimetic materials are those that imitate the forms and properties of natural materials.

An Aug. 15, 2016 Rice University (Texas, US) news release (also on EurekAlert), which originated the news item, describes the ribbons and the research in more detail,

Graphene nanoribbons can be thousands of times longer than they are wide. They can be produced in bulk by chemically “unzipping” carbon nanotubes, a process invented by Rice chemist and co-author James Tour and his lab.

Their size means they can operate on the scale of biological components like proteins and DNA, Kiang said. “We study the mechanical properties of all different kinds of materials, from proteins to cells, but a little different from the way other people do,” she said. “We like to see how materials behave in solution, because that’s where biological things are.” Kiang is a pioneer in developing methods to probe the energy states of proteins as they fold and unfold.

She said Tour suggested her lab have a look at the mechanical properties of GNRs. “It’s a little extra work to study these things in solution rather than dry, but that’s our specialty,” she said.

Nanoribbons are known for adding strength but not weight to solid-state composites, like bicycle frames and tennis rackets, and forming an electrically active matrix. A recent Rice project infused them into an efficient de-icer coating for aircraft.

But in a squishier environment, their ability to conform to surfaces, carry current and strengthen composites could also be valuable.

“It turns out that graphene behaves reasonably well, somewhat similar to other biological materials. But the interesting part is that it behaves differently in a solution than it does in air,” she said. The researchers found that like DNA and proteins, nanoribbons in solution naturally form folds and loops, but can also form helicoids, wrinkles and spirals.

Kiang, Wijeratne [Sithara Wijeratne, Rice graduate now a postdoctoral researcher at Harvard University] and Jingqiang Li, a co-author and student in the Kiang lab, used atomic force microscopy to test their properties. Atomic force microscopy can not only gather high-resolution images but also take sensitive force measurements of nanomaterials by pulling on them. The researchers probed GNRs and their precursors, graphene oxide nanoribbons.

The researchers discovered that all nanoribbons become rigid under stress, but their rigidity increases as oxide molecules are removed to turn graphene oxide nanoribbons into GNRs. They suggested this ability to tune their rigidity should help with the design and fabrication of GNR-biomimetic interfaces.
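Force-extension curves from AFM pulling experiments on biopolymers are commonly interpreted with the Marko–Siggia worm-like-chain interpolation, in which a filament’s rigidity enters through its persistence length. As a rough illustration (the lengths below are hypothetical, not the paper’s fitted values), a stiffer chain with a longer persistence length needs less entropic force to reach the same relative extension:

```python
def wlc_force(x, L, p, T=300.0):
    """Marko-Siggia worm-like-chain interpolation: entropic force (N) at
    extension x of a chain with contour length L and persistence length p."""
    kT = 1.380649e-23 * T          # thermal energy (J)
    r = x / L                      # relative extension
    return (kT / p) * (0.25 / (1.0 - r) ** 2 - 0.25 + r)

L = 500e-9  # hypothetical 500 nm contour length
for p in (10e-9, 50e-9):           # floppier vs. stiffer ribbon
    f = wlc_force(400e-9, L, p)
    print(f"p = {p * 1e9:.0f} nm -> force ~ {f * 1e12:.2f} pN")
```

The piconewton forces that come out are exactly the scale AFM cantilevers resolve, which is why this kind of model pairs naturally with the measurements described above.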

“Graphene and graphene oxide materials can be functionalized (or modified) to integrate with various biological systems, such as DNA, protein and even cells,” Kiang said. “These have been realized in biological devices, biomolecule detection and molecular medicine. The sensitivity of graphene bio-devices can be improved by using narrow graphene materials like nanoribbons.”

Wijeratne noted graphene nanoribbons are already being tested for use in DNA sequencing, in which strands of DNA are pulled through a nanopore in an electrified material. The base components of DNA affect the electric field, which can be read to identify the bases.

The researchers saw nanoribbons’ biocompatibility as potentially useful for sensors that could travel through the body and report on what they find, not unlike the Tour lab’s nanoreporters that retrieve information from oil wells.

Further studies will focus on the effect of the nanoribbons’ widths, which range from 10 to 100 nanometers, on their properties.

Here’s a link to and a citation for the paper,

Detecting the Biopolymer Behavior of Graphene Nanoribbons in Aqueous Solution by Sithara S. Wijeratne, Evgeni S. Penev, Wei Lu, Jingqiang Li, Amanda L. Duque, Boris I. Yakobson, James M. Tour, & Ching-Hwa Kiang. Scientific Reports 6, Article number: 31174 (2016) doi:10.1038/srep31174 Published online: 09 August 2016

This paper is open access.

A new ink for energy storage devices from the Hong Kong Polytechnic University

Energy storage is not the first thought that leaps to mind when ink is mentioned. Live and learn, eh? A Sept. 23, 2015 news item on Nanowerk describes the connection (Note: A link has been removed),

 The Department of Applied Physics of The Hong Kong Polytechnic University (PolyU) has developed a simple approach to synthesize novel environmentally friendly manganese dioxide ink by using glucose (“Aqueous Manganese Dioxide Ink for Paper-Based Capacitive Energy Storage Devices”).

The MnO2 ink could be used for the production of light, thin, flexible and high performance energy storage devices via ordinary printing or even home-used printers. The capacity of the MnO2 ink supercapacitor is more than 30 times higher than that of a commercial capacitor of the same weight of active material (e.g. carbon powder), demonstrating the great potential of MnO2 ink in significantly enhancing the performances of energy storage devices, whereas its production cost amounts to less than HK$1.

A Sept. 23, 2015 PolyU media release, which originated the news item, expands on the theme,

MnO2 is a kind of environmentally-friendly material and it is degradable. Given the environmental compatibility and high potential capacity of MnO2, it has always been regarded as an ideal candidate for the electrode materials of energy storage devices. The conventional MnO2 electrode preparation methods suffer from high cost, complicated processes and could result in agglomeration of the MnO2 ink during the coating process, leading to the reduction of electrical conductivity. The PolyU research team has developed a simple approach to synthesize aqueous MnO2 ink. Firstly, highly crystalline carbon particles were prepared by microwave hydrothermal method, followed by a morphology transmission mechanism at room temperature. The MnO2 ink can be coated on various substrates, such as conductive paper, plastic and glass. Its thickness and weight can also be controlled for the production of light, thin, transparent and flexible energy storage devices. Substrates coated by MnO2 ink can easily be erased if required, facilitating the fabrication of electronic devices.

PolyU researchers coated the MnO2 ink on conductive A4 paper and fabricated a capacitive energy storage device with maximum energy density and power density amounting to 4 mWh·cm⁻³ and 13 W·cm⁻³ respectively. The capacity of the MnO2 ink capacitor is more than 30 times higher than that of a commercial capacitor of the same weight of active material (e.g. carbon powder), demonstrating the great potential of MnO2 ink in significantly enhancing the performances of energy storage devices. Given the small size, light, thin, flexible and high energy capacity properties of the MnO2 ink energy storage device, it shows potential for wide application. For instance, in wearable devices and radio-frequency identification systems, the MnO2 ink supercapacitor could be used as the power source for flexible and “bendable” display panels, smart textile, smart checkout tags, sensors, luggage tracking tags, etc., thereby contributing to the further development of these two areas.
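To get a feel for those volumetric figures, stored energy and peak power simply scale with device volume. A back-of-envelope sketch (the patch dimensions are invented for illustration; the densities are the ones quoted above):

```python
# Volumetric figures quoted for the printed MnO2 device:
ENERGY_DENSITY = 4.0   # mWh per cm^3
POWER_DENSITY = 13.0   # W per cm^3

# Hypothetical printed patch: 5 cm x 5 cm, 100 micrometres thick.
volume_cm3 = 5.0 * 5.0 * 0.01  # 0.25 cm^3

energy_mwh = ENERGY_DENSITY * volume_cm3   # total stored energy (mWh)
peak_power_w = POWER_DENSITY * volume_cm3  # deliverable peak power (W)
print(f"{energy_mwh:.2f} mWh stored, {peak_power_w:.2f} W peak")
```

So, at the quoted densities, even a thin printed patch the size of a sticky note stores on the order of a milliwatt-hour, which is the kind of budget wearable tags and sensors run on.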

The related paper has been recently published on Angewandte Chemie International Edition, a leading journal in Chemistry. The research team will work to further improve the performance of the MnO2 ink energy storage device in the coming two years, with special focus on increasing the voltage, optimizing the structure and synthesis process of the device. In addition, further tests will be conducted to integrate the MnO2 ink energy storage device with other energy collection systems.

Here’s a link to and a citation for the paper,

Aqueous Manganese Dioxide Ink for Paper-Based Capacitive Energy Storage Devices by Jiasheng Qian, Huanyu Jin, Dr. Bolei Chen, Mei Lin, Dr. Wei Lu, Dr. Wing Man Tang, Dr. Wei Xiong, Prof. Lai Wa Helen Chan, Prof. Shu Ping Lau, and Dr. Jikang Yuan. Angewandte Chemie International Edition Volume 54, Issue 23, pages 6800–6803, June 1, 2015 DOI: 10.1002/anie.201501261 Article first published online: 17 APR 2015

© 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Memristor, memristor! What is happening? News from the University of Michigan and HP Laboratories

Professor Wei Lu (whose work on memristors has been mentioned here a few times [an April 15, 2010 posting and an April 19, 2012 posting]) has made a discovery about memristors with significant implications (from a June 25, 2014 news item on Azonano),

In work that unmasks some of the magic behind memristors and “resistive random access memory,” or RRAM—cutting-edge computer components that combine logic and memory functions—researchers have shown that the metal particles in memristors don’t stay put as previously thought.

The findings have broad implications for the semiconductor industry and beyond. They show, for the first time, exactly how some memristors remember.

A June 24, 2014 University of Michigan news release, which originated the news item, includes Lu’s perspective on this discovery and more details about it,

“Most people have thought you can’t move metal particles in a solid material,” said Wei Lu, associate professor of electrical and computer engineering at the University of Michigan. “In a liquid and gas, it’s mobile and people understand that, but in a solid we don’t expect this behavior. This is the first time it has been shown.”

Lu, who led the project, and colleagues at U-M and the Electronic Research Centre Jülich in Germany used transmission electron microscopes to watch and record what happens to the atoms in the metal layer of their memristor when they exposed it to an electric field. The metal layer was encased in the dielectric material silicon dioxide, which is commonly used in the semiconductor industry to help route electricity.

They observed the metal atoms becoming charged ions, clustering with up to thousands of others into metal nanoparticles, and then migrating and forming a bridge between the electrodes at the opposite ends of the dielectric material.

They demonstrated this process with several metals, including silver and platinum. And depending on the materials involved and the electric current, the bridge formed in different ways.

The bridge, also called a conducting filament, stays put after the electrical power is turned off in the device. So when researchers turn the power back on, the bridge is there as a smooth pathway for current to travel along. Further, the electric field can be used to change the shape and size of the filament, or break the filament altogether, which in turn regulates the resistance of the device, or how easily current can flow through it.

Computers built with memristors would encode information in these different resistance values, which is in turn based on a different arrangement of conducting filaments.
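The behavior described here, a resistance set by an internal state that persists when power is removed, is often captured with the linear ion-drift memristor model popularized by HP. The sketch below uses illustrative textbook parameter values, not anything fitted to these experiments; driving the model with a sine wave reproduces the signature pinched current-voltage loop (zero current whenever the voltage is zero, yet different resistance on the up and down sweeps):

```python
import numpy as np

# Linear ion-drift memristor model with illustrative parameter values.
R_ON, R_OFF = 100.0, 16e3   # resistance bounds (ohms)
D = 10e-9                   # oxide thickness (m)
MU = 1e-14                  # ion mobility (m^2 V^-1 s^-1)

w = 0.1 * D                 # doped-region width: the nonvolatile state
dt = 1e-4
t = np.arange(0.0, 2.0, dt)
v = 2.0 * np.sin(2 * np.pi * t)   # 1 Hz sinusoidal drive

currents, resistances = [], []
for v_t in v:
    m = R_ON * (w / D) + R_OFF * (1 - w / D)  # state-dependent resistance
    i = v_t / m
    w = min(max(w + dt * MU * R_ON / D * i, 0.0), D)  # state drifts with charge
    currents.append(i)
    resistances.append(m)

currents, resistances = np.array(currents), np.array(resistances)
print("resistance swing:", resistances.min(), "to", resistances.max())
```

Plotting `currents` against `v` would show the loop pinched at the origin; the several-fold resistance swing over a cycle is what gives the device its distinguishable memory states.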

Memristor researchers like Lu and his colleagues had theorized that the metal atoms in memristors moved, but previous results had yielded differently shaped filaments, and so they thought they hadn’t nailed down the underlying process.

“We succeeded in resolving the puzzle of apparently contradicting observations and in offering a predictive model accounting for materials and conditions,” said Ilia Valov, principal investigator at the Electronic Materials Research Centre Jülich. “Also the fact that we observed particle movement driven by electrochemical forces within dielectric matrix is in itself a sensation.”

The implications for this work (from the news release),

The results could lead to a new approach to chip design—one that involves using fine-tuned electrical signals to lay out integrated circuits after they’re fabricated. And it could also advance memristor technology, which promises smaller, faster, cheaper chips and computers inspired by biological brains in that they could perform many tasks at the same time.

As is becoming more common these days (from the news release),

Lu is a co-founder of Crossbar Inc., a Santa Clara, Calif.-based startup working to commercialize RRAM. Crossbar has just completed a $25 million Series C funding round.

Here’s a link to and a citation for the paper,

Electrochemical dynamics of nanoscale metallic inclusions in dielectrics by Yuchao Yang, Peng Gao, Linze Li, Xiaoqing Pan, Stefan Tappertzhofen, ShinHyun Choi, Rainer Waser, Ilia Valov, & Wei D. Lu. Nature Communications 5, Article number: 4232 doi:10.1038/ncomms5232 Published 23 June 2014

This paper is behind a paywall.

The other party instrumental in the development and, they hope, the commercialization of memristors is HP (Hewlett Packard) Laboratories (HP Labs). Anyone familiar with this blog will likely know I have frequently covered the topic, starting with an essay explaining the basics on my Nanotech Mysteries wiki (or you can check this more extensive and more recently updated entry on Wikipedia) and continuing with subsequent entries here over the years. The most recent is a Jan. 9, 2014 posting, which featured the then latest information on the HP Labs memristor situation (scroll down about 50% of the way). This new information is less an update on that status than a revelation of further details. Sebastian Anthony’s June 11, 2014 article for extremetech.com lays out the situation plainly (Note: Links have been removed),

HP, one of the original 800lb Silicon Valley gorillas that has seen much happier days, is staking everything on a brand new computer architecture that it calls… The Machine. Judging by an early report from Bloomberg Businessweek, up to 75% of HP’s once fairly illustrious R&D division — HP Labs – are working on The Machine. As you would expect, details of what will actually make The Machine a unique proposition are hard to come by, but it sounds like HP’s groundbreaking work on memristors (pictured top) and silicon photonics will play a key role.

First things first, we’re probably not talking about a consumer computing architecture here, though it’s possible that technologies commercialized by The Machine will percolate down to desktops and laptops. Basically, HP used to be a huge player in the workstation and server markets, with its own operating system and hardware architecture, much like Sun. Over the last 10 years though, Intel’s x86 architecture has rapidly taken over, to the point where HP (and Dell and IBM) are essentially just OEM resellers of commodity x86 servers. This has driven down enterprise profit margins — and when combined with its huge stake in the diminishing PC market, you can see why HP is rather nervous about the future. The Machine, and IBM’s OpenPower initiative, are both attempts to get out from underneath Intel’s x86 monopoly.

While exact details are hard to come by, it seems The Machine is predicated on the idea that current RAM, storage, and interconnect technology can’t keep up with modern Big Data processing requirements. HP is working on two technologies that could solve both problems: Memristors could replace RAM and long-term flash storage, and silicon photonics could provide faster on- and off-motherboard buses. Memristors essentially combine the benefits of DRAM and flash storage in a single, hyper-fast, super-dense package. Silicon photonics is all about reducing optical transmission and reception to a scale that can be integrated into silicon chips (moving from electrical to optical would allow for much higher data rates and lower power consumption). Both technologies can be built using conventional fabrication techniques.

In a June 11, 2014 article by Ashlee Vance for Bloomberg Business Newsweek, the company’s CTO (Chief Technical Officer), Martin Fink provides new details,

That’s what they’re calling it at HP Labs: “the Machine.” It’s basically a brand-new type of computer architecture that HP’s engineers say will serve as a replacement for today’s designs, with a new operating system, a different type of memory, and superfast data transfer. The company says it will bring the Machine to market within the next few years or fall on its face trying. “We think we have no choice,” says Martin Fink, the chief technology officer and head of HP Labs, who is expected to unveil HP’s plans at a conference Wednesday [June 11, 2014].

In my Jan. 9, 2014 posting there’s a quote from Martin Fink stating that 2018 would be earliest date for the company’s StoreServ arrays to be packed with 100TB Memristor drives (the Machine?). The company later clarified the comment by noting that it’s very difficult to set dates for new technology arrivals.

Vance shares what could be a stirring ‘origins’ story of sorts, provided the Machine is successful,

The Machine started to take shape two years ago, after Fink was named director of HP Labs. Assessing the company’s projects, he says, made it clear that HP was developing the needed components to create a better computing system. Among its research projects: a new form of memory known as memristors; and silicon photonics, the transfer of data inside a computer using light instead of copper wires. And its researchers have worked on operating systems including Windows, Linux, HP-UX, Tru64, and NonStop.

Fink and his colleagues decided to pitch HP Chief Executive Officer Meg Whitman on the idea of assembling all this technology to form the Machine. During a two-hour presentation held a year and a half ago, they laid out how the computer might work, its benefits, and the expectation that about 75 percent of HP Labs personnel would be dedicated to this one project. “At the end, Meg turned to [Chief Financial Officer] Cathie Lesjak and said, ‘Find them more money,’” says John Sontag, the vice president of systems research at HP, who attended the meeting and is in charge of bringing the Machine to life. “People in Labs see this as a once-in-a-lifetime opportunity.”

Here is the memristor making an appearance in Vance’s article,

HP’s bet is the memristor, a nanoscale chip that Labs researchers must build and handle in full anticontamination clean-room suits. At the simplest level, the memristor consists of a grid of wires with a stack of thin layers of materials such as tantalum oxide at each intersection. When a current is applied to the wires, the materials’ resistance is altered, and this state can hold after the current is removed. At that point, the device is essentially remembering 1s or 0s depending on which state it is in, multiplying its storage capacity. HP can build these chips with traditional semiconductor equipment and expects to be able to pack unprecedented amounts of memory—enough to store huge databases of pictures, files, and data—into a computer.

New memory and networking technology requires a new operating system. Most applications written in the past 50 years have been taught to wait for data, assuming that the memory systems feeding the main computer chips are slow. Fink has assigned one team to develop the open-source Machine OS, which will assume the availability of a high-speed, constant memory store. …

Peter Bright in his June 11, 2014 article for Ars Technica opens his article with a controversial statement (Note: Links have been removed),

In 2008, scientists at HP invented a fourth fundamental component to join the resistor, capacitor, and inductor: the memristor. [emphasis mine] Theorized back in 1971, memristors showed promise in computing as they can be used to both build logic gates, the building blocks of processors, and also act as long-term storage.

Whether or not the memristor is a fourth fundamental component has been a matter of some debate as you can see in this Memristor entry (section on Memristor definition and criticism) on Wikipedia.

Bright goes on to provide a 2016 delivery date for some type of memristor-based product and additional technical insight about the Machine,

… By 2016, the company plans to have memristor-based DIMMs, which will combine the high storage densities of hard disks with the high performance of traditional DRAM.

John Sontag, vice president of HP Systems Research, said that The Machine would use “electrons for processing, photons for communication, and ions for storage.” The electrons are found in conventional silicon processors, and the ions are found in the memristors. The photons are because the company wants to use optical interconnects in the system, built using silicon photonics technology. With silicon photonics, photons are generated on, and travel through, “circuits” etched onto silicon chips, enabling conventional chip manufacturing to construct optical parts. This allows the parts of the system using photons to be tightly integrated with the parts using electrons.

The memristor story has proved even more fascinating than I thought in 2008, and I was already as fascinated as could be, or so I thought.

A step closer to artificial synapses courtesy of memristors

Researchers from HRL Laboratories and the University of Michigan have built what they claim is a type of artificial synapse by using memristors. From the March 29, 2012 news item on Nanowerk,

In a step toward computers that mimic the parallel processing of complex biological brains, researchers from HRL Laboratories, LLC, and the University of Michigan have built a type of artificial synapse.

They have demonstrated the first functioning “memristor” array stacked on a conventional complementary metal-oxide semiconductor (CMOS) circuit. Memristors combine the functions of memory and logic like the synapses of biological brains.

The researchers developed a vertically integrated hybrid electronic circuit by combining the novel memristor developed at the University of Michigan with wafer scale heterogeneous process integration methodology and CMOS read/write circuitry developed at HRL. “This hybrid circuit is a critical advance in developing intelligent machines,” said HRL SyNAPSE program manager and principal investigator Narayan Srinivasa. “We have created a multi-bit fully addressable memory storage capability with a density of up to 30 Gbits/cm², which is unprecedented in microelectronics.”

Industry is seeking hybrid systems such as this one, the researchers say. Dubbed “R-RAM,” they could shatter the looming limits of Moore’s Law, which predicts a doubling of transistor density and therefore chip speed every two years.

“We’re reaching the fundamental limits of transistor scaling. This hybrid integration opens many opportunities for greater memory capacity and higher performance of conventional computers. It has great potential in future non-volatile memory that would improve upon today’s Flash, as well as reconfigurable circuits,” said Wei Lu, an associate professor at the U-M Department of Electrical Engineering and Computer Science whose group developed the memristor array.
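A "multi-bit fully addressable" crossbar like the one described can be pictured as a grid of analog conductances, each programmed to one of several discrete levels and read back by driving its row with a small voltage and sensing the resulting column current. A minimal sketch (the conductance levels and read voltage are illustrative assumptions, not HRL's values):

```python
# Toy sketch of a multi-bit memristor crossbar: each cell stores
# 2 bits as one of four conductance levels; a cell is addressed by
# driving its row with a small voltage and sensing the column
# current (I = G * V). Level values are illustrative only.

LEVELS = [1e-6, 4e-6, 7e-6, 1e-5]  # siemens: one conductance per 2-bit symbol
V_READ = 0.2                       # volts: small enough not to disturb state

def write(array, row, col, bits):
    """Program the addressed cell to the conductance for a 2-bit value."""
    array[row][col] = LEVELS[bits]

def read(array, row, col):
    """Sense the column current and decode the nearest stored level."""
    i = array[row][col] * V_READ               # sensed current for this cell
    g = i / V_READ                             # recovered conductance
    return min(range(4), key=lambda k: abs(LEVELS[k] - g))

xbar = [[LEVELS[0]] * 4 for _ in range(4)]     # 4x4 array, all cells at '00'
write(xbar, 2, 3, 0b10)
assert read(xbar, 2, 3) == 0b10
assert read(xbar, 0, 0) == 0b00
```

Packing two (or more) bits per crossing, with the array stacked directly over the CMOS read/write circuitry, is what pushes the areal density to figures like the 30 Gbits/cm² quoted above.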

This work is being done as part of a DARPA (Defense Advanced Research Projects Agency) project titled SyNAPSE. From the news item,

The work is part of the Defense Advanced Research Projects Agency’s (DARPA) SyNAPSE Program, or Systems of Neuromorphic Adaptive Plastic Scalable Electronics. Since 2008, the HRL-led SyNAPSE team has been developing a new paradigm for “neuromorphic computing” modeled after biology.

While I haven’t come across HRL Laboratories before, I have mentioned Dr. Wei Lu and his work with memristors in my April 15, 2010 posting. As for HRL Laboratories, they were founded in 1948 by Howard Hughes as the Hughes Research Laboratories (from the company’s History page),

HRL Laboratories continues the legacy of technology advances that began at Hughes Research Laboratories, established by Howard Hughes in 1948. HRL Laboratories, LLC, was organized as a limited liability company (LLC) on December 17, 1997 and received its first patent on September 12, 2000. With more than 750 patents to our name since then and counting, we’re proud of our talented group of researchers, who continue the long tradition of technical excellence in innovation.

First Laser
One of Hughes’ most notable achievements came in 1960 with the demonstration of the world’s first laser, which used a synthetic ruby crystal. The ruby laser became the basis of a multibillion-dollar laser range finder business for Hughes. In 2010, during the 50th anniversary of the laser, HRL was designated a Physics Historic Site by the American Physical Society and was selected as an IEEE Milestones location, the facility where the first working laser was demonstrated.

HRL has organized its researchers into a number of teams; the one of most interest in this context is the Center for Neural and Emergent Systems,

Part of HRL’s Information and Systems Sciences Laboratory, the Center for Neural and Emergent Systems (CNES) is dedicated to exploring and developing an innovative neural & emergent computing paradigm for creating intelligent, efficient machines that can interact with, react and adapt to, evolve, and learn from their environments.

CNES was founded on the principle that all intelligent systems are open thermodynamic systems capable of self-organization, whereby structural order emerges from disorder as a natural consequence of exchanging energy, matter or entropy with their environments.

These systems exist in a state far from equilibrium where the evolution of complex behaviors cannot be readily predicted from purely local interactions between the system’s parts. Rather, the emergent order and structure of the system arises from manifold interactions of its parts. These emergent systems contain amplifying-damping loops as a result of which very small perturbations can cause large effects or no effect at all. They become adaptive when the component relationships within the system become tuned for a particular set of tasks.

CNES promotes the idea that the neural system in the brain is an example of such a complex adaptive system. A key goal of CNES is to explain how computations in the brain can help explain the realization of complex behaviors such as perception, planning, decision making and navigation due to brain-body-environment interactions.

This has reminded me of HP Labs and their work with memristors (I have many postings, too many to list here); I understand that they will be rolling out ‘memristor-based’ products in 2013. From the Oct. 8, 2011 article by Peter Clarke for EE Times,

The ‘memristor’ two-terminal non-volatile memory technology, in development at Hewlett Packard Co. since 2008, is on track to be in the market and taking share from flash memory within 18 months, according to Stan Williams, senior fellow at HP Labs.

“We have a lot of big plans for it and we’re working with Hynix Semiconductor to launch a replacement for flash in the summer of 2013 and also to address the solid-state drive market,” Williams told the audience of the International Electronics Forum, being held here [Seville, Spain].

ETA June 11, 2012: New artificial synapse development is mentioned in George Dvorsky’s June 11, 2012 posting (on the IO9.com website) about a nanoscale electrochemical switch developed by researchers in Japan.