Tag Archives: memristors

Memristors, memcapacitors, and meminductors for faster computers

While some call memristors a fourth fundamental component alongside resistors, capacitors, and inductors (as mentioned in my June 26, 2014 posting which featured an update of sorts on memristors [scroll down about 80% of the way]), others view memristors as members of an emerging periodic table of circuit elements (as per my April 7, 2010 posting).

It seems scientist Fabio Traversa and his colleagues fall into the ‘periodic table of circuit elements’ camp. From Traversa’s June 27, 2014 posting on nanotechweb.org,

Memristors, memcapacitors and meminductors may retain information even without a power source. Several applications of these devices have already been proposed, yet arguably one of the most appealing is ‘memcomputing’ – a brain-inspired computing paradigm utilizing the ability of emergent nanoscale devices to store and process information on the same physical platform.

A multidisciplinary team of researchers from the Autonomous University of Barcelona in Spain, the University of California San Diego and the University of South Carolina in the US, and the Polytechnic of Turin in Italy, suggest a realization of “memcomputing” based on nanoscale memcapacitors. They propose and analyse a major advancement in using memcapacitive systems (capacitors with memory), as central elements for Very Large Scale Integration (VLSI) circuits capable of storing and processing information on the same physical platform. They name this architecture Dynamic Computing Random Access Memory (DCRAM).

Using the standard configuration of a Dynamic Random Access Memory (DRAM) where the capacitors have been substituted with solid-state based memcapacitive systems, they show the possibility of performing WRITE, READ and polymorphic logic operations by only applying modulated voltage pulses to the memory cells. Being based on memcapacitors, the DCRAM expends very little energy per operation. It is a realistic memcomputing machine that overcomes the von Neumann bottleneck and clearly exhibits intrinsic parallelism and functional polymorphism.
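For readers who like to tinker, here is a minimal sketch (in Python, and entirely mine rather than the authors’ model) of the memcapacitive idea: the bit lives in a capacitance level, a strong voltage pulse writes it, and a weak pulse reads it without disturbing it, so the same cell both stores and processes. All thresholds and values are invented for illustration.

    class MemcapacitiveCell:
        """Toy memcapacitive bit: the stored value is a capacitance level."""
        C_LOW, C_HIGH = 1.0, 10.0      # arbitrary capacitance units

        def __init__(self):
            self.c = self.C_LOW        # start in the '0' state

        def apply_pulse(self, amplitude):
            # WRITE: a strong positive pulse sets '1', a strong negative one '0'.
            # A weak pulse (|amplitude| < 1.0) leaves the state alone, so the
            # same line can carry WRITE, READ and logic pulses, as in DCRAM.
            if amplitude >= 1.0:
                self.c = self.C_HIGH
            elif amplitude <= -1.0:
                self.c = self.C_LOW

        def read(self):
            # Sense the bit from the capacitance without destroying it
            return 1 if self.c == self.C_HIGH else 0

    cell = MemcapacitiveCell()
    cell.apply_pulse(1.5)              # WRITE a 1
    cell.apply_pulse(0.3)              # weak, READ-style pulse: state survives
    print(cell.read())                 # 1, retained with no refresh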

Here’s a link to and a citation for the paper,

Dynamic computing random access memory by F L Traversa, F Bonani, Y V Pershin, and M Di Ventra. Nanotechnology Volume 25 Number 28 doi:10.1088/0957-4484/25/28/285201 Published 27 June 2014

This paper is behind a paywall.

Memristor, memristor! What is happening? News from the University of Michigan and HP Laboratories

Professor Wei Lu (whose work on memristors has been mentioned here a few times [an April 15, 2010 posting and an April 19, 2012 posting]) has made a discovery about memristors with significant implications (from a June 25, 2014 news item on Azonano),

In work that unmasks some of the magic behind memristors and “resistive random access memory,” or RRAM—cutting-edge computer components that combine logic and memory functions—researchers have shown that the metal particles in memristors don’t stay put as previously thought.

The findings have broad implications for the semiconductor industry and beyond. They show, for the first time, exactly how some memristors remember.

A June 24, 2014 University of Michigan news release, which originated the news item, includes Lu’s perspective on this discovery and more details about it,

“Most people have thought you can’t move metal particles in a solid material,” said Wei Lu, associate professor of electrical and computer engineering at the University of Michigan. “In a liquid and gas, it’s mobile and people understand that, but in a solid we don’t expect this behavior. This is the first time it has been shown.”

Lu, who led the project, and colleagues at U-M and the Electronic Materials Research Centre Jülich in Germany used transmission electron microscopes to watch and record what happens to the atoms in the metal layer of their memristor when they exposed it to an electric field. The metal layer was encased in the dielectric material silicon dioxide, which is commonly used in the semiconductor industry to help route electricity.

They observed the metal atoms becoming charged ions, clustering with up to thousands of others into metal nanoparticles, and then migrating and forming a bridge between the electrodes at the opposite ends of the dielectric material.

They demonstrated this process with several metals, including silver and platinum. And depending on the materials involved and the electric current, the bridge formed in different ways.

The bridge, also called a conducting filament, stays put after the electrical power is turned off in the device. So when researchers turn the power back on, the bridge is there as a smooth pathway for current to travel along. Further, the electric field can be used to change the shape and size of the filament, or break the filament altogether, which in turn regulates the resistance of the device, or how easily current can flow through it.

Computers built with memristors would encode information in these different resistance values, which are in turn based on different arrangements of conducting filaments.
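To make the filament picture concrete, here is a toy simulation (mine, not the researchers’; every parameter is invented). The device resistance tracks a normalized filament state that grows under applied voltage and, crucially, stays put when the power is off.

    R_ON, R_OFF = 1e2, 1e6                   # ohms: formed vs. broken filament

    def step(filament, voltage, rate=0.2):
        """Advance the normalized filament state (0..1) by one time step."""
        filament += rate * voltage           # the field drives ion migration
        return min(max(filament, 0.0), 1.0)  # clip to physical bounds

    def resistance(filament):
        # Interpolate between OFF and ON as the bridge closes the gap
        return R_OFF + (R_ON - R_OFF) * filament

    f = 0.0
    for _ in range(5):
        f = step(f, 1.0)                     # positive pulses grow the bridge
    print(resistance(f))                     # low resistance: the bridge conducts
    for _ in range(10):
        f = step(f, 0.0)                     # power off: nothing moves
    print(resistance(f))                     # unchanged, the memory effect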

Memristor researchers like Lu and his colleagues had theorized that the metal atoms in memristors moved, but previous results had yielded differently shaped filaments, so they thought they hadn’t nailed down the underlying process.

“We succeeded in resolving the puzzle of apparently contradicting observations and in offering a predictive model accounting for materials and conditions,” said Ilia Valov, principal investigator at the Electronic Materials Research Centre Jülich. “Also the fact that we observed particle movement driven by electrochemical forces within dielectric matrix is in itself a sensation.”

The implications for this work (from the news release),

The results could lead to a new approach to chip design—one that involves using fine-tuned electrical signals to lay out integrated circuits after they’re fabricated. And it could also advance memristor technology, which promises smaller, faster, cheaper chips and computers inspired by biological brains in that they could perform many tasks at the same time.

As is becoming more common these days (from the news release),

Lu is a co-founder of Crossbar Inc., a Santa Clara, Calif.-based startup working to commercialize RRAM. Crossbar has just completed a $25 million Series C funding round.

Here’s a link to and a citation for the paper,

Electrochemical dynamics of nanoscale metallic inclusions in dielectrics by Yuchao Yang, Peng Gao, Linze Li, Xiaoqing Pan, Stefan Tappertzhofen, ShinHyun Choi, Rainer Waser, Ilia Valov, & Wei D. Lu. Nature Communications 5, Article number: 4232 doi:10.1038/ncomms5232 Published 23 June 2014

This paper is behind a paywall.

The other party instrumental in the development and, they hope, the commercialization of memristors is HP (Hewlett Packard) Laboratories (HP Labs). Anyone familiar with this blog will likely know I have frequently covered the topic, starting with an essay explaining the basics on my Nanotech Mysteries wiki (or you can check this more extensive and more recently updated entry on Wikipedia) and with subsequent entries here over the years. The most recent entry is a Jan. 9, 2014 posting which featured the then-latest information on the HP Labs memristor situation (scroll down about 50% of the way). This new information is more a revelation of details than an update on its status. Sebastian Anthony’s June 11, 2014 article for extremetech.com lays out the situation plainly (Note: Links have been removed),

HP, one of the original 800lb Silicon Valley gorillas that has seen much happier days, is staking everything on a brand new computer architecture that it calls… The Machine. Judging by an early report from Bloomberg Businessweek, up to 75% of HP’s once fairly illustrious R&D division — HP Labs – are working on The Machine. As you would expect, details of what will actually make The Machine a unique proposition are hard to come by, but it sounds like HP’s groundbreaking work on memristors (pictured top) and silicon photonics will play a key role.

First things first, we’re probably not talking about a consumer computing architecture here, though it’s possible that technologies commercialized by The Machine will percolate down to desktops and laptops. Basically, HP used to be a huge player in the workstation and server markets, with its own operating system and hardware architecture, much like Sun. Over the last 10 years though, Intel’s x86 architecture has rapidly taken over, to the point where HP (and Dell and IBM) are essentially just OEM resellers of commodity x86 servers. This has driven down enterprise profit margins — and when combined with its huge stake in the diminishing PC market, you can see why HP is rather nervous about the future. The Machine, and IBM’s OpenPower initiative, are both attempts to get out from underneath Intel’s x86 monopoly.

While exact details are hard to come by, it seems The Machine is predicated on the idea that current RAM, storage, and interconnect technology can’t keep up with modern Big Data processing requirements. HP is working on two technologies that could solve both problems: Memristors could replace RAM and long-term flash storage, and silicon photonics could provide faster on- and off-motherboard buses. Memristors essentially combine the benefits of DRAM and flash storage in a single, hyper-fast, super-dense package. Silicon photonics is all about reducing optical transmission and reception to a scale that can be integrated into silicon chips (moving from electrical to optical would allow for much higher data rates and lower power consumption). Both technologies can be built using conventional fabrication techniques.

In a June 11, 2014 article by Ashlee Vance for Bloomberg Businessweek, the company’s CTO (Chief Technology Officer), Martin Fink, provides new details,

That’s what they’re calling it at HP Labs: “the Machine.” It’s basically a brand-new type of computer architecture that HP’s engineers say will serve as a replacement for today’s designs, with a new operating system, a different type of memory, and superfast data transfer. The company says it will bring the Machine to market within the next few years or fall on its face trying. “We think we have no choice,” says Martin Fink, the chief technology officer and head of HP Labs, who is expected to unveil HP’s plans at a conference Wednesday [June 11, 2014].

In my Jan. 9, 2014 posting there’s a quote from Martin Fink stating that 2018 would be the earliest date for the company’s StoreServ arrays to be packed with 100TB Memristor drives (the Machine?). The company later clarified the comment by noting that it’s very difficult to set dates for new technology arrivals.

Vance shares what could be a stirring ‘origins’ story of sorts, provided the Machine is successful,

The Machine started to take shape two years ago, after Fink was named director of HP Labs. Assessing the company’s projects, he says, made it clear that HP was developing the needed components to create a better computing system. Among its research projects: a new form of memory known as memristors; and silicon photonics, the transfer of data inside a computer using light instead of copper wires. And its researchers have worked on operating systems including Windows, Linux, HP-UX, Tru64, and NonStop.

Fink and his colleagues decided to pitch HP Chief Executive Officer Meg Whitman on the idea of assembling all this technology to form the Machine. During a two-hour presentation held a year and a half ago, they laid out how the computer might work, its benefits, and the expectation that about 75 percent of HP Labs personnel would be dedicated to this one project. “At the end, Meg turned to [Chief Financial Officer] Cathie Lesjak and said, ‘Find them more money,’” says John Sontag, the vice president of systems research at HP, who attended the meeting and is in charge of bringing the Machine to life. “People in Labs see this as a once-in-a-lifetime opportunity.”

Here is the memristor making an appearance in Vance’s article,

HP’s bet is the memristor, a nanoscale chip that Labs researchers must build and handle in full anticontamination clean-room suits. At the simplest level, the memristor consists of a grid of wires with a stack of thin layers of materials such as tantalum oxide at each intersection. When a current is applied to the wires, the materials’ resistance is altered, and this state can hold after the current is removed. At that point, the device is essentially remembering 1s or 0s depending on which state it is in, multiplying its storage capacity. HP can build these chips with traditional semiconductor equipment and expects to be able to pack unprecedented amounts of memory—enough to store huge databases of pictures, files, and data—into a computer.

New memory and networking technology requires a new operating system. Most applications written in the past 50 years have been taught to wait for data, assuming that the memory systems feeding the main computer’s chips are slow. Fink has assigned one team to develop the open-source Machine OS, which will assume the availability of a high-speed, constant memory store. …
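As an aside, the crossbar reading scheme Vance sketches is easy to mock up in code: pick a row and a column, apply a small read voltage at their intersection, and infer the stored bit from the current. The values below are hypothetical, and real crossbars have to contend with sneak-path currents that this toy ignores.

    V_READ = 0.2            # volts: small enough not to disturb the cell
    I_THRESHOLD = 1e-4      # amps: above this we call the cell a '1'

    crossbar = [            # junction resistances in ohms (made-up values)
        [1e2, 1e6, 1e2],
        [1e6, 1e2, 1e6],
    ]

    def read_bit(row, col):
        current = V_READ / crossbar[row][col]   # Ohm's law at the junction
        return 1 if current > I_THRESHOLD else 0

    bits = [[read_bit(r, c) for c in range(3)] for r in range(2)]
    print(bits)             # [[1, 0, 1], [0, 1, 0]]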

Peter Bright in his June 11, 2014 article for Ars Technica opens his article with a controversial statement (Note: Links have been removed),

In 2008, scientists at HP invented a fourth fundamental component to join the resistor, capacitor, and inductor: the memristor. [emphasis mine] Theorized back in 1971, memristors showed promise in computing as they can be used to both build logic gates, the building blocks of processors, and also act as long-term storage.

Whether or not the memristor is a fourth fundamental component has been a matter of some debate as you can see in this Memristor entry (section on Memristor definition and criticism) on Wikipedia.

Bright goes on to provide a 2016 delivery date for some type of memristor-based product and additional technical insight about the Machine,

… By 2016, the company plans to have memristor-based DIMMs, which will combine the high storage densities of hard disks with the high performance of traditional DRAM.

John Sontag, vice president of HP Systems Research, said that The Machine would use “electrons for processing, photons for communication, and ions for storage.” The electrons are found in conventional silicon processors, and the ions are found in the memristors. The photons are because the company wants to use optical interconnects in the system, built using silicon photonics technology. With silicon photonics, photons are generated on, and travel through, “circuits” etched onto silicon chips, enabling conventional chip manufacturing to construct optical parts. This allows the parts of the system using photons to be tightly integrated with the parts using electrons.

The memristor story has proved to be even more fascinating than I thought in 2008 and I was already as fascinated as could be, or so I thought.

Wacky oxide, biological synchronicity, and human brainlike computing

Research out of Pennsylvania State University (Penn State, US) has uncovered another approach to creating artificial brains (more about the other approaches later in this post), from a May 14, 2014 news item on Science Daily,

Current computing is based on binary logic — zeroes and ones — also called Boolean computing. A new type of computing architecture that stores information in the frequencies and phases of periodic signals could work more like the human brain to do computing using a fraction of the energy of today’s computers.

A May 14, 2014 Pennsylvania State University news release, which originated the news item, describes the research in more detail,

Vanadium dioxide (VO2) is called a “wacky oxide” because it transitions from a conducting metal to an insulating semiconductor and vice versa with the addition of a small amount of heat or electrical current. A device created by electrical engineers at Penn State uses a thin film of VO2 on a titanium dioxide substrate to create an oscillating switch. Using a standard electrical engineering trick, Nikhil Shukla, a Ph.D. student in the group of Professor Suman Datta and co-advised by Professor Roman Engel-Herbert at Penn State, added a series resistor to the oxide device to stabilize their oscillations over billions of cycles. When Shukla added a second similar oscillating system, he discovered that over time the two devices would begin to oscillate in unison. This coupled system could provide the basis for non-Boolean computing. The results are reported in the May 14 [2014] online issue of Nature Publishing Group’s Scientific Reports.

“It’s called a small-world network,” explained Shukla. “You see it in lots of biological systems, such as certain species of fireflies. The males will flash randomly, but then for some unknown reason the flashes synchronize over time.” The brain is also a small-world network of closely clustered nodes that evolved for more efficient information processing.

“Biological synchronization is everywhere,” added Datta, professor of electrical engineering at Penn State and formerly a Principal Engineer in the Advanced Transistor and Nanotechnology Group at Intel Corporation. “We wanted to use it for a different kind of computing called associative processing, which is an analog rather than digital way to compute.” An array of oscillators can store patterns, for instance, the color of someone’s hair, their height and skin texture. If a second area of oscillators has the same pattern, they will begin to synchronize, and the degree of match can be read out. “They are doing this sort of thing already digitally, but it consumes tons of energy and lots of transistors,” Datta said. Datta is collaborating with co-author and Professor of Computer Science and Engineering, Vijay Narayanan, in exploring the use of these coupled oscillations in solving visual recognition problems more efficiently than existing embedded vision processors as part of a National Science Foundation Expedition in Computing program.

Shukla and Datta called on the expertise of Cornell University materials scientist Darrell Schlom to make the VO2 thin film, which has extremely high quality similar to single crystal silicon. Georgia Tech computer engineer Arijit Raychowdhury and graduate student Abhinav Parihar mathematically simulated the nonlinear dynamics of coupled phase transitions in the VO2 devices. Parihar created a short video* simulation of the transitions, which occur at a rate close to a million times per second, to show the way the oscillations synchronize. Penn State professor of materials science and engineering Venkatraman Gopalan used the Advanced Photon Source at Argonne National Laboratory to visually characterize the structural changes occurring in the oxide thin film in the midst of the oscillations.

Datta believes it will take seven to ten years to scale up from their current network of two to three coupled oscillators to the 100 million or so closely packed oscillators required to make a neuromorphic computer chip. One of the benefits of the novel device is that it will use only about one percent of the energy of digital computing, allowing for new ways to design computers. Much work remains to determine if VO2 can be integrated into current silicon wafer technology. “It’s a fundamental building block for a different computing paradigm that is analog rather than digital,” Shukla concluded.
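The synchronization at the heart of this work is easy to play with in software. The VO2 devices are relaxation oscillators with richer dynamics, but the textbook Kuramoto model of two coupled phase oscillators shows the essential behaviour: couple them strongly enough and they phase-lock despite slightly different natural frequencies. A minimal sketch, with all values illustrative:

    import math, random

    random.seed(0)
    w1, w2 = 1.00, 1.05     # natural frequencies (rad per unit time)
    K = 0.5                 # coupling strength; locking needs |w2 - w1| < K
    dt = 0.01
    th1, th2 = random.random(), random.random()

    for _ in range(100_000):
        d1 = w1 + (K / 2) * math.sin(th2 - th1)   # each oscillator is pulled
        d2 = w2 + (K / 2) * math.sin(th1 - th2)   # toward the other's phase
        th1 += d1 * dt
        th2 += d2 * dt

    # Once locked, the phase difference settles where sin(dphi) = (w2 - w1)/K,
    # and that difference is the 'degree of match' one could read out.
    dphi = (th2 - th1) % (2 * math.pi)
    print(math.sin(dphi), (w2 - w1) / K)          # both ~0.1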

There are two papers being published about this work,

Synchronizing a single-electron shuttle to an external drive by Michael J Moeckel, Darren R Southworth, Eva M Weig, and Florian Marquardt. New J. Phys. 16 043009 doi:10.1088/1367-2630/16/4/043009

Synchronized charge oscillations in correlated electron systems by Nikhil Shukla, Abhinav Parihar, Eugene Freeman, Hanjong Paik, Greg Stone, Vijaykrishnan Narayanan, Haidan Wen, Zhonghou Cai, Venkatraman Gopalan, Roman Engel-Herbert, Darrell G. Schlom, Arijit Raychowdhury & Suman Datta. Scientific Reports 4, Article number: 4964 doi:10.1038/srep04964 Published 14 May 2014

Both articles are open access.

Finally, the researchers have provided a video animation illustrating their vanadium dioxide switches in action.

As noted earlier, there are other approaches to creating an artificial brain, i.e., neuromorphic engineering. My April 7, 2014 posting is the most recent synopsis posted here; it includes excerpts from a Nanowerk Spotlight article overview along with a mention of the ‘brain jelly’ approach and a discussion of my somewhat extensive coverage of memristors and a mention of work on nanoionic devices. There is also a published roadmap to neuromorphic engineering featuring both analog and digital devices, mentioned in my April 18, 2014 posting.

Roadmap to neuromorphic engineering (digital and analog) for the creation of artificial brains *from the Georgia (US) Institute of Technology

While I didn’t mention neuromorphic engineering in my April 16, 2014 posting which focused on the more general aspect of nanotechnology in Transcendence, a movie starring Johnny Depp and opening on April 18, that specialty (neuromorphic engineering) is what makes the events in the movie ‘possible’ (assuming very large stretches of imagination bringing us into the realm of implausibility and beyond). From the IMDB.com plot synopsis for Transcendence,

Dr. Will Caster (Johnny Depp) is the foremost researcher in the field of Artificial Intelligence, working to create a sentient machine that combines the collective intelligence of everything ever known with the full range of human emotions. His highly controversial experiments have made him famous, but they have also made him the prime target of anti-technology extremists who will do whatever it takes to stop him. However, in their attempt to destroy Will, they inadvertently become the catalyst for him to succeed to be a participant in his own transcendence. For his wife Evelyn (Rebecca Hall) and best friend Max Waters (Paul Bettany), both fellow researchers, the question is not if they canbut [sic] if they should. Their worst fears are realized as Will’s thirst for knowledge evolves into a seemingly omnipresent quest for power, to what end is unknown. The only thing that is becoming terrifyingly clear is there may be no way to stop him.

In the film, Caster’s intelligence/consciousness is uploaded to the computer, which suggests the computer has human brainlike qualities and abilities. The effort to make computer or artificial intelligence more humanlike is called neuromorphic engineering and according to an April 17, 2014 news item on phys.org, researchers at the Georgia Institute of Technology (Georgia Tech) have published a roadmap for this pursuit,

In the field of neuromorphic engineering, researchers study computing techniques that could someday mimic human cognition. Electrical engineers at the Georgia Institute of Technology recently published a “roadmap” that details innovative analog-based techniques that could make it possible to build a practical neuromorphic computer.

A core technological hurdle in this field involves the electrical power requirements of computing hardware. Although a human brain functions on a mere 20 watts of electrical energy, a digital computer that could approximate human cognitive abilities would require tens of thousands of integrated circuits (chips) and a hundred thousand watts of electricity or more – levels that exceed practical limits.

The Georgia Tech roadmap proposes a solution based on analog computing techniques, which require far less electrical power than traditional digital computing. The more efficient analog approach would help solve the daunting cooling and cost problems that presently make digital neuromorphic hardware systems impractical.

“To simulate the human brain, the eventual goal would be large-scale neuromorphic systems that could offer a great deal of computational power, robustness and performance,” said Jennifer Hasler, a professor in the Georgia Tech School of Electrical and Computer Engineering (ECE), who is a pioneer in using analog techniques for neuromorphic computing. “A configurable analog-digital system can be expected to have a power efficiency improvement of up to 10,000 times compared to an all-digital system.”

An April 16, 2014 Georgia Tech news release by Rick Robinson, which originated the news item, describes why Hasler wants to combine analog (based on biological principles) and digital computing approaches to the creation of artificial brains,

Unlike digital computing, in which computers can address many different applications by processing different software programs, analog circuits have traditionally been hard-wired to address a single application. For example, cell phones use energy-efficient analog circuits for a number of specific functions, including capturing the user’s voice, amplifying incoming voice signals, and controlling battery power.

Because analog devices do not have to process binary codes as digital computers do, their performance can be both faster and much less power hungry. Yet traditional analog circuits are limited because they’re built for a specific application, such as processing signals or controlling power. They don’t have the flexibility of digital devices that can process software, and they’re vulnerable to signal disturbance issues, or noise.

In recent years, Hasler has developed a new approach to analog computing, in which silicon-based analog integrated circuits take over many of the functions now performed by familiar digital integrated circuits. These analog chips can be quickly reconfigured to provide a range of processing capabilities, in a manner that resembles conventional digital techniques in some ways.

Over the last several years, Hasler and her research group have developed devices called field programmable analog arrays (FPAA). Like field programmable gate arrays (FPGA), which are digital integrated circuits that are ubiquitous in modern computing, the FPAA can be reconfigured after it’s manufactured – hence the phrase “field-programmable.”
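A crude software analogy for the field-programmable idea (entirely schematic, not a model of a real FPAA): the processing blocks are fixed when the chip is made, but a configuration loaded afterwards decides how they are chained together.

    blocks = {                        # fixed 'hardware' resources
        "amplify":   lambda x: 10.0 * x,
        "attenuate": lambda x: 0.1 * x,
        "invert":    lambda x: -x,
    }

    def run(config, signal):
        # 'config' plays the role of the post-manufacture programming step
        for name in config:
            signal = blocks[name](signal)
        return signal

    print(run(["amplify", "invert"], 0.5))      # -5.0
    print(run(["attenuate", "amplify"], 0.5))   # 0.5: same parts, new function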

Hasler and Marr’s 29-page paper traces a development process that could lead to the goal of reproducing human-brain complexity. The researchers investigate in detail a number of intermediate steps that would build on one another, helping researchers advance the technology sequentially.

For example, the researchers discuss ways to scale energy efficiency, performance and size in order to eventually achieve large-scale neuromorphic systems. The authors also address how the implementation and the application space of neuromorphic systems can be expected to evolve over time.

“A major concept here is that we have to first build smaller systems capable of a simple representation of one layer of human brain cortex,” Hasler said. “When that system has been successfully demonstrated, we can then replicate it in ways that increase its complexity and performance.”

Among neuromorphic computing’s major hurdles are the communication issues involved in networking integrated circuits in ways that could replicate human cognition. In their paper, Hasler and Marr emphasize local interconnectivity to reduce complexity. Moreover, they argue it’s possible to achieve these capabilities via purely silicon-based techniques, without relying on novel devices that are based on other approaches.

Commenting on the recent publication, Alice C. Parker, a professor of electrical engineering at the University of Southern California, said, “Professor Hasler’s technology roadmap is the first deep analysis of the prospects for large scale neuromorphic intelligent systems, clearly providing practical guidance for such systems, with a nearer-term perspective than our whole-brain emulation predictions. Her expertise in analog circuits, technology and device models positions her to provide this unique perspective on neuromorphic circuits.”

Eugenio Culurciello, an associate professor of biomedical engineering at Purdue University, commented, “I find this paper to be a very accurate description of the field of neuromorphic data processing systems. Hasler’s devices provide some of the best performance per unit power I have ever seen and are surely on the roadmap for one of the major technologies of the future.”

Said Hasler: “In this study, we conclude that useful neural computation machines based on biological principles – and potentially at the size of the human brain — seems technically within our grasp. We think that it’s more a question of gathering the right research teams and finding the funding for research and development than of any insurmountable technical barriers.”

Here’s a link to and a citation for the roadmap,

Finding a roadmap to achieve large neuromorphic hardware systems by Jennifer Hasler and Bo Marr.  Front. Neurosci. (Frontiers in Neuroscience), 10 September 2013 | doi: 10.3389/fnins.2013.00118

This is an open access article (at least, the HTML version is).

I have looked at Hasler’s roadmap and it provides a good and readable overview (even for an amateur like me; Note: you do need some tolerance for ‘not knowing’) of the state of neuromorphic engineering’s problems, and suggestions for overcoming them. Here’s a description of a human brain and its power requirements as compared to a computer’s (from the roadmap),

One of the amazing things about the human brain is its ability to perform tasks beyond current supercomputers using roughly 20 W of average power, a level smaller than most individual computer microprocessor chips. A single neuron emulation can tax a high performance processor; given there are 10^12 neurons operating at 20 W, each neuron consumes 20 pW average power. Assuming a neuron is conservatively performing the wordspotting computation (1000 synapses), 100,000 PMAC (PMAC = “Peta” MAC = 10^15 MAC/s) would be required to duplicate the neural structure. A higher computational efficiency due to active dendritic line channels is expected as well as additional computation due to learning. The efficiency of a single neuron would be 5000 PMAC/W (or 5 TMAC/μW). A similar efficiency for 10^11 neurons and 10,000 synapses is expected.

Building neuromorphic hardware requires that technology must scale from current levels given constraints of power, area, and cost: all issues typical in industrial and defense applications; if hardware technology does not scale as other available technologies, as well as takes advantage of the capabilities of IC technology that are currently visible, it will not be successful.
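To check that I was reading the roadmap’s arithmetic correctly, here is the back-of-envelope calculation redone in a few lines of Python, using only the figures quoted above:

    neurons = 1e12                  # neurons in the brain, per the roadmap
    brain_power = 20.0              # watts for the whole brain
    print(brain_power / neurons)    # 2e-11 W, i.e. 20 pW per neuron

    pmac = 1e15                     # 1 PMAC = 10^15 multiply-accumulates/second
    total = 100e3 * pmac            # 100,000 PMAC to duplicate the structure
    print(total / brain_power / pmac)   # 5000 PMAC per watt, as stated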

One of my main areas of interest is the memristor (a nanoscale ‘device/circuit element’ which emulates synaptic plasticity), which was mentioned in a way that allows me to understand how the device fits (or doesn’t fit) into the overall conceptual framework (from the roadmap),

The density for a 10 nm EEPROM device acting as a synapse begs the question of whether other nanotechnologies can improve on the resulting Si [silicon] synapse density. One transistor per synapse is hard to beat by any approach, particularly in scaled down Si (like 10 nm), when the synapse memory, computation, and update is contained within the EEPROM device. Most nano device technologies [i.e., memristors (Snider et al., 2011)] show considerable difficulties to get to two-dimensional arrays at a similar density level. Recently, a team from U. of Michigan announced the first functioning memristor two-dimensional (30 × 30) array built on a CMOS chip in 2012 (Kim et al., 2012), claiming applications in neuromorphic engineering; the same group has published innovative devices for digital (Jo and Lu, 2009) and analog applications (Jo et al., 2011).

I notice that the reference to the University of Michigan’s work is relatively neutral in tone and that the memristor does not figure substantively in Hasler’s roadmap.

Intriguingly, there is a section on commercialization; I didn’t think the research was at that stage yet (from the roadmap),

Although one can discuss how to build a cortical computer on the size of mammals and humans, the question is how will the technology developed for these large systems impact commercial development. The cost for ICs [integrated circuits or chips] alone for cortex would be approximately $20 M in current prices, which although possible for large users, would not be common to be found in individual households. Throughout the digital processor approach, commercial market opportunities have driven the progress in the field. Getting neuromorphic technology integrated into commercial environment allows us to ride this powerful economic “engine” rather than pull.

In most applications, the important commercial issues include minimization of cost, time to market, just sufficient performance for the application, power consumed, size and weight. The cost of a system built from ICs is, at a macro-level, a function of the area of those ICs, which then affects the number of ICs needed system wide, the number of components used, and the board space used. Efficiency of design tools, testing time and programming time also considerably affect system costs. Time to get an application to market is affected by the ability to reuse or quickly modify existing designs, and is reduced for a new application if existing hardware can be reconfigured, adapting to changing specifications, and a designer can utilize tools that allow rapid modifications to the design. Performance is key for any algorithm, but for a particular product, one only needs a solution to that particular problem; spending time to make the solution elegant is often a losing strategy.

The neuromorphic community has seen some early entries into commercial spaces, but we are just at the very beginning of the process. As the knowledge of neuromorphic engineering has progressed, which have included knowledge of sensor interfaces and analog signal processing, there have been those who have risen to the opportunities to commercialize these technologies. Neuromorphic research led to better understanding of sensory processing, particularly sensory systems interacting with other humans, enabling companies like Synaptics (touch pads), Foveon (CMOS color imagers), and Sonic Innovation (analog–digital hearing aids); Gilder provides a useful history of these two companies elsewhere (Gilder, 2005). From the early progress in analog signal processing we see companies like GTronix (acquired by National Semiconductor, then acquired by Texas Instruments) applying the impact of custom analog signal processing techniques and programmability toward auditory signal processing that improved sound quality requiring ultra-low power levels. Further, we see in companies like Audience there is some success from mapping the computational flow of the early stage auditory system, and implementing part of the event based auditory front-end to achieve useful results for improved voice quality. But the opportunities for the neuromorphic community are just beginning, and directly related to understanding the computational capabilities of these items. The availability of ICs that have these capabilities, whether or not one mentions they have any neuromorphic material, will further drive applications.

One expects that part of a cortex processing system would have significant computational possibilities, as well as cortex structures from smaller animals, and still be able to reach price points for commercial applications. In the following discussion, we will consider the potential of cortical structures at different levels of commercial applications. Figure 24 shows one typical block diagram, algorithms at each stage, resulting power efficiency (say based on current technology), as well as potential applications of the approach. In all cases, we will be considering a single die solution, typical for a commercial product, and will minimize the resulting communication power to I/O off the chip (no power consumed due to external memories or digital processing devices). We will assume a net computational efficiency of 10 TMAC/mW, corresponding to a lower power supply (i.e., mostly 500 mV, but not 180 mV) and slightly larger load capacitances; we make these assumptions as conservative pull back from possible applications, although we expect the more aggressive targets would be reachable. We assume the external power consumed is set by 1 event/second/neuron average event-rate off chip to a nearby IC. Given the input event rate is hard to predict, we don’t include that power requirement but assume it is handled by the input system. In all of these cases, getting the required computation using only digital techniques in a competitive size, weight, and especially power is hard to foresee.

We expect progress in these neuromorphic systems and that should find applications in traditional signal processing and graphics handling approaches. We will continue to have needs in computing that outpace our available computing resources, particularly at a power consumption required for a particular application. For example, the recent emphasis on cloud computing for academic/research problems shows the incredible need for larger computing resources than those directly available, or even projected to be available, for a portable computing platform (i.e., robotics). Of course a server per computing device is not a computing model that scales well. Given scaling limits on computing, both in power, area, and communication, one can expect to see more and more of these issues going forward.

We expect that a range of different ICs and systems will be built, all at different targets in the market. There are options for even larger networks, or integrating these systems with other processing elements on a chip/board. When moving to larger systems, particularly ones with 10–300 chips (3 × 10^7 to 10^9 neurons) or more, one can see utilization of stacking of dies, both decreasing the communication capacitance as well as board complexity. Stacking dies should roughly increase the final chip cost by the number of dies stacked.

In the following subsections, we overview general guidelines to consider when considering using neuromorphic ICs in the commercial market, first for low-cost consumer electronics, and second for a larger neuromorphic processor IC.

I have a casual observation to make. While the authors of the roadmap came to this conclusion, “This study concludes that useful neural computation machines based on biological principles at the size of the human brain seems technically within our grasp,” they’re also leaving themselves some wiggle room because the truth is no one knows if copying a human brain with circuits and various devices will lead to ‘thinking’ as we understand the concept.

For anyone who’s interested, you can search this blog for neuromorphic engineering, artificial brains, and/or memristors as I have many postings on these topics. One of my most recent on the topic of artificial brains is an April 7, 2014 piece titled: Brain-on-a-chip 2014 survey/overview.

One last observation about the movie ‘Transcendence’, has no one else noticed that it’s the ‘Easter’ story with a resurrected and digitized ‘Jesus’?

* Space inserted between ‘brains’ and ‘from’ in head on April 21, 2014.

Brain-on-a-chip 2014 survey/overview

Michael Berger has written another of his Nanowerk Spotlight articles focussing on neuromorphic engineering and the concept of a brain-on-a-chip bringing it up-to-date April 2014 style.

It’s a topic he and I have been following (separately) for years. Berger’s April 4, 2014 Brain-on-a-chip Spotlight article provides a very welcome overview of the international neuromorphic engineering effort (Note: Links have been removed),

Constructing realistic simulations of the human brain is a key goal of the Human Brain Project, a massive European-led research project that commenced in 2013.

The Human Brain Project is a large-scale, scientific collaborative project, which aims to gather all existing knowledge about the human brain, build multi-scale models of the brain that integrate this knowledge and use these models to simulate the brain on supercomputers. The resulting “virtual brain” offers the prospect of a fundamentally new and improved understanding of the human brain, opening the way for better treatments for brain diseases and for novel, brain-like computing technologies.

Several years ago, another European project named FACETS (Fast Analog Computing with Emergent Transient States) completed an exhaustive study of neurons to find out exactly how they work, how they connect to each other and how the network can ‘learn’ to do new things. One of the outcomes of the project was PyNN, a simulator-independent language for building neuronal network models.

Scientists have great expectations that nanotechnologies will bring them closer to the goal of creating computer systems that can simulate and emulate the brain’s abilities for sensation, perception, action, interaction and cognition while rivaling its low power consumption and compact size – basically a brain-on-a-chip. Already, scientists are working hard on laying the foundations for what is called neuromorphic engineering – a new interdisciplinary discipline that includes nanotechnologies and whose goal is to design artificial neural systems with physical architectures similar to biological nervous systems.

Several research projects funded with millions of dollars are at work with the goal of developing brain-inspired computer architectures or virtual brains: DARPA’s SyNAPSE, the EU’s BrainScaleS (a successor to FACETS), or the Blue Brain project (one of the predecessors of the Human Brain Project) at Switzerland’s EPFL [École Polytechnique Fédérale de Lausanne].

Berger goes on to describe the raison d’être for neuromorphic engineering (attempts to mimic biological brains),

Programmable machines are limited not only by their computational capacity, but also by an architecture requiring (human-derived) algorithms to both describe and process information from their environment. In contrast, biological neural systems (e.g., brains) autonomously process information in complex environments by automatically learning relevant and probabilistically stable features and associations. Since real world systems are always many body problems with infinite combinatorial complexity, neuromorphic electronic machines would be preferable in a host of applications – but useful and practical implementations do not yet exist.

Researchers are mostly interested in emulating neural plasticity (aka synaptic plasticity), from Berger’s April 4, 2014 article,

Independent from military-inspired research like DARPA’s, nanotechnology researchers in France have developed a hybrid nanoparticle-organic transistor that can mimic the main functionalities of a synapse. This organic transistor, based on pentacene and gold nanoparticles and termed NOMFET (Nanoparticle Organic Memory Field-Effect Transistor), has opened the way to new generations of neuro-inspired computers, capable of responding in a manner similar to the nervous system  (read more: “Scientists use nanotechnology to try building computers modeled after the brain”).

One of the key components of any neuromorphic effort, and its starting point, is the design of artificial synapses. Synapses dominate the architecture of the brain and are responsible for massive parallelism, structural plasticity, and robustness of the brain. They are also crucial to biological computations that underlie perception and learning. Therefore, a compact nanoelectronic device emulating the functions and plasticity of biological synapses will be the most important building block of brain-inspired computational systems.

In 2011, a team at Stanford University demonstrated a new single element nanoscale device, based on the successfully commercialized phase change material technology, emulating the functionality and the plasticity of biological synapses. In their work, the Stanford team demonstrated a single element electronic synapse with the capability of both the modulation of the time constant and the realization of the different synaptic plasticity forms while consuming picojoule level energy for its operation (read more: “Brain-inspired computing with nanoelectronic programmable synapses”).
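For anyone wondering what a ‘synaptic plasticity form’ looks like when written down, here is a small software illustration of one canonical rule, spike-timing-dependent plasticity (STDP). The constants are hypothetical, and devices like Stanford’s implement this kind of behaviour in their physics rather than in code.

    import math

    A_PLUS, A_MINUS = 0.05, 0.055   # learning rates: potentiation/depression
    TAU = 20.0                      # time constant in ms (the Stanford device
                                    # can reportedly modulate its time constant)

    def weight_change(dt_ms):
        """dt_ms = post-synaptic spike time minus pre-synaptic spike time."""
        if dt_ms > 0:               # pre fires before post: strengthen
            return A_PLUS * math.exp(-dt_ms / TAU)
        else:                       # post fires before pre: weaken
            return -A_MINUS * math.exp(dt_ms / TAU)

    print(weight_change(+5.0))      # ~ +0.039, potentiation
    print(weight_change(-5.0))      # ~ -0.043, depression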

Berger does mention memristors but not in any great detail in this article,

Researchers have also suggested that memristor devices are capable of emulating the biological synapses with properly designed CMOS neuron components. A memristor is a two-terminal electronic device whose conductance can be precisely modulated by charge or flux through it. It has the special property that its resistance can be programmed (resistor) and subsequently remains stored (memory).

One research project already demonstrated that a memristor can connect conventional circuits and support a process that is the basis for memory and learning in biological systems (read more: “Nanotechnology’s road to artificial brains”).
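For reference, Chua’s 1971 definition is compact. In the charge-controlled form, the memristance M acts as a state-dependent resistance:

    v(t) = M(q(t)) · i(t),   where M(q) = dφ(q)/dq   and   q(t) = ∫ i(τ) dτ

Because M depends on q, the time integral of the current, the device’s resistance is set by the history of the current that has flowed through it, which is exactly the ‘programmed and stored’ behaviour Berger describes.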

You can find a number of memristor articles here including these: Memristors have always been with us from June 14, 2013; How to use a memristor to create an artificial brain from Feb. 26, 2013; Electrochemistry of memristors in a critique of the 2008 discovery from Sept. 6, 2012; and many more (type ‘memristor’ into the blog search box and you should receive many postings or alternatively, you can try ‘artificial brains’ if you want everything I have on artificial brains).

Getting back to Berger’s April 4, 2014 article, he mentions one more approach and this one stands out,

A completely different – and revolutionary – human brain model has been designed by researchers in Japan who introduced the concept of a new class of computer which does not use any circuit or logic gate. This artificial brain-building project differs from all others in the world. It does not use logic-gate based computing within the framework of Turing. The decision-making protocol is not a logical reduction of decision rather projection of frequency fractal operations in a real space, it is an engineering perspective of Gödel’s incompleteness theorem.

Berger wrote about this work in much more detail in a Feb. 10, 2014 Nanowerk Spotlight article titled: Brain jelly – design and construction of an organic, brain-like computer, (Note: Links have been removed),

In a previous Nanowerk Spotlight we reported on the concept of a full-fledged massively parallel organic computer at the nanoscale that uses extremely low power (“Will brain-like evolutionary circuit lead to intelligent computers?”). In this work, the researchers created a process of circuit evolution similar to the human brain in an organic molecular layer. This was the first time that such a brain-like ‘evolutionary’ circuit had been realized.

The research team, led by Dr. Anirban Bandyopadhyay, a senior researcher at the Advanced Nano Characterization Center at the National Institute of Materials Science (NIMS) in Tsukuba, Japan, has now finalized their human brain model and introduced the concept of a new class of computer which does not use any circuit or logic gate.

In a new open-access paper published online on January 27, 2014, in Information (“Design and Construction of a Brain-Like Computer: A New Class of Frequency-Fractal Computing Using Wireless Communication in a Supramolecular Organic, Inorganic System”), Bandyopadhyay and his team now describe the fundamental computing principle of a frequency fractal brain like computer.

“Our artificial brain-building project differs from all others in the world for several reasons,” Bandyopadhyay explains to Nanowerk. He lists the four major distinctions:
1) We do not use logic gate based computing within the framework of Turing, our decision-making protocol is not a logical reduction of decision rather projection of frequency fractal operations in a real space, it is an engineering perspective of Gödel’s incompleteness theorem.
2) We do not need to write any software, the argument and basic phase transition for decision-making, ‘if-then’ arguments and the transformation of one set of arguments into another self-assemble and expand spontaneously, the system holds an astronomically large number of ‘if’ arguments and its associative ‘then’ situations.
3) We use ‘spontaneous reply back’, via wireless communication using a unique resonance band coupling mode, not conventional antenna-receiver model, since fractal based non-radiative power management is used, the power expense is negligible.
4) We have carried out our own single DNA, single protein molecule and single brain microtubule neurophysiological study to develop our own Human brain model.

I encourage people to read Berger’s articles on this topic as they provide excellent information and links to much more. Curiously (mind you, it is easy to miss something), he does not mention James Gimzewski’s work at the University of California at Los Angeles (UCLA). Working with colleagues from the National Institute for Materials Science in Japan, Gimzewski published a paper about “two-, three-terminal WO3-x-based nanoionic devices capable of a broad range of neuromorphic and electrical functions”. You can find out more about the paper in my Dec. 24, 2012 posting titled: Synaptic electronics.

As for the ‘brain jelly’ paper, here’s a link to and a citation for it,

Design and Construction of a Brain-Like Computer: A New Class of Frequency-Fractal Computing Using Wireless Communication in a Supramolecular Organic, Inorganic System by Subrata Ghosh, Krishna Aswani, Surabhi Singh, Satyajit Sahu, Daisuke Fujita and Anirban Bandyopadhyay. Information 2014, 5(1), 28-100; doi:10.3390/info5010028

It’s an open access paper.

As for anyone who’s curious about why the US BRAIN initiative (Brain Research through Advancing Innovative Neurotechnologies, also referred to as the Brain Activity Map Project) is not mentioned, I believe that’s because it’s focussed on biological brains exclusively at this point (you can check its Wikipedia entry to confirm).

Anirban Bandyopadhyay was last mentioned here in a January 16, 2014 posting titled: Controversial theory of consciousness confirmed (maybe) in the context of a presentation in Amsterdam, Netherlands.

Resistive memory from University of California Riverside (replacing flash memory in mobile devices) and Boise State University (neuron chips)

Today (Aug. 19, 2013), I have two items on memristors. First, Dexter Johnson provides some context for understanding why a University of California Riverside research team’s approach to creating memristors is exciting some interest, in his Aug. 17, 2013 posting (Nanoclast blog on the IEEE [Institute of Electrical and Electronics Engineers] website; Note: Links have been removed),

The heralding of the memristor, or resistive memory, and the long-anticipated demise of flash memory have both been tracking on opposite trajectories with resistive memory expected to displace flash ever since the memristor was first discovered by Stanley Williams’ group at Hewlett Packard in 2008.

The memristor has been on a rapid development track ever since and has been promised to be commercially available as early as 2014, enabling 10 times greater embedded memory for mobile devices than currently available.

The obsolescence of flash memory at the hands of the latest nanotechnology has been predicted for longer than the commercial introduction of the memristor. But just at the moment it appears it’s going to reach its limits in storage capacity along comes a new way to push its capabilities to new heights, sometimes thanks to a nanomaterial like graphene.

In addition to the graphene promise, Dexter goes on to discuss another development, which could push memory capabilities and which is mentioned in an Aug. 14, 2013 news item on ScienceDaily (and elsewhere),

A team at the University of California, Riverside Bourns College of Engineering has developed a novel way to build what many see as the next generation memory storage devices for portable electronic devices including smart phones, tablets, laptops and digital cameras.

The device is based on the principles of resistive memory [memristor], which can be used to create memory cells that are smaller, operate at a higher speed and offer more storage capacity than flash memory cells, the current industry standard. Terabytes, not gigabytes, will be the norm with resistive memory.

The key advancement in the UC Riverside research is the creation of a zinc oxide nano-island on silicon. It eliminates the need for a second element called a selector device, which is often a diode.

The Aug. 13, 2013 University of California Riverside news release by Sean Nealon, which originated the news item, further describes the limitations of flash memory and reinforces the importance of being able to eliminate a component (selector device),

Flash memory has been the standard in the electronics industry for decades. But, as flash continues to get smaller and users want higher storage capacity, it appears to be reaching the end of its lifespan, Liu [Jianlin Liu, a professor of electrical engineering] said.

With that in mind, resistive memory is receiving significant attention from academia and the electronics industry because it has a simple structure, high-density integration, fast operation and long endurance.

Researchers have also found that resistive memory can be scaled down in the sub 10-nanometer scale. (A nanometer is one-billionth of a meter.) Current flash memory devices are roughly using a feature size twice as large.

Resistive memory usually has a metal-oxide-metal structure in connection with a selector device. The UC Riverside team has demonstrated a novel alternative way by forming self-assembled zinc oxide nano-islands on silicon. Using a conductive atomic force microscope, the researchers observed three operation modes from the same device structure, essentially eliminating the need for a separate selector device.
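The earlier remark that flash is “using a feature size twice as large” translates directly into density, since cell area scales with the square of the feature size. Trivial to check (the sizes below are illustrative):

    feature_flash, feature_rram = 20e-9, 10e-9   # metres, hypothetical sizes
    density_gain = (feature_flash / feature_rram) ** 2
    print(density_gain)                          # 4.0x the cells per unit area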

Here’s a link to and a citation for the researchers’ published paper,

Multimode Resistive Switching in Single ZnO Nanoisland System by Jing Qi, Mario Olmedo, Jian-Guo Zheng, & Jianlin Liu. Scientific Reports 3, Article number: 2405 doi:10.1038/srep02405 Published 12 August 2013

This study is open access.

Meanwhile, Boise State University (Idaho, US) is celebrating a new project, CIF: Small: Realizing Chip-scale Bio-inspired Spiking Neural Networks with Monolithically Integrated Nano-scale Memristors, which was announced in an Aug. 17, 2013 news item on Azonano,

Electrical and computer engineering faculty Elisa Barney Smith, Kris Campbell and Vishal Saxena are joining forces on a project titled “CIF: Small: Realizing Chip-scale Bio-inspired Spiking Neural Networks with Monolithically Integrated Nano-scale Memristors.”

Team members are experts in machine learning (artificial intelligence), integrated circuit design and memristor devices. Funded by a three-year, $500,000 National Science Foundation grant, they have taken on the challenge of developing a new kind of computing architecture that works more like a brain than a traditional digital computer.

“By mimicking the brain’s billions of interconnections and pattern recognition capabilities, we may ultimately introduce a new paradigm in speed and power, and potentially enable systems that include the ability to learn, adapt and respond to their environment,” said Barney Smith, who is the principal investigator on the grant.

The Aug. 14, 2013 Boise State University news release by Kathleen Tuck, which originated the news item, describes the team’s focus on mimicking the brain’s capabilities,

One of the first memristors was built in Campbell’s Boise State lab, which has the distinction of being one of only five or six labs worldwide that are up to the task.

The team’s research builds on recent work from scientists who have derived mathematical algorithms to explain the electrical interaction between brain synapses and neurons.

“By employing these models in combination with a new device technology that exhibits similar electrical response to the neural synapses, we will design entirely new computing chips that mimic how the brain processes information,” said Barney Smith.

Even better, these new chips will consume power at an order of magnitude lower than current computing processors, despite the fact that they match existing chips in physical dimensions. This will open the door for ultra low-power electronics intended for applications with scarce energy resources, such as in space, environmental sensors or biomedical implants.

Once the team has successfully built an artificial neural network, they will look to engage neurobiologists to work in parallel with their own efforts. A proposal for that could be written in the coming year.

Barney Smith said they hope to send the first of the new neuron chips out for fabrication within weeks.
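Out of curiosity about what “mimicking the brain’s billions of interconnections” might look like in the simplest possible terms, here is a toy Python sketch of a leaky integrate-and-fire neuron whose synaptic weights stand in for memristor conductances. Every constant and the learning rule here are my own illustrative assumptions, not the Boise State team’s design,

import random

# A minimal leaky integrate-and-fire neuron with memristor-like synapses.
# A sketch of the general idea only; all constants are illustrative.

LEAK = 0.9         # membrane potential decays each timestep
THRESHOLD = 1.0    # the neuron fires only when its potential passes this
LEARN_RATE = 0.05  # conductance change per learning event

class LifNeuron:
    def __init__(self, n_inputs):
        # synaptic weights play the role of memristor conductances
        self.weights = [random.uniform(0.1, 0.5) for _ in range(n_inputs)]
        self.potential = 0.0

    def step(self, spikes):
        """spikes: list of 0/1 inputs this timestep; returns 1 on firing."""
        self.potential = LEAK * self.potential + sum(
            w * s for w, s in zip(self.weights, spikes))
        if self.potential >= THRESHOLD:
            self.potential = 0.0  # reset after firing
            # strengthen the synapses that just contributed a spike --
            # in hardware, this is the adjustment a memristor would retain
            self.weights = [min(w + LEARN_RATE, 1.0) if s else w
                            for w, s in zip(self.weights, spikes)]
            return 1
        return 0

neuron = LifNeuron(4)
for _ in range(20):
    print(neuron.step([random.randint(0, 1) for _ in range(4)]), end=" ")

Note how the neuron reacts only when its accumulated potential passes a threshold, one of the principles borrowed from biology that comes up again later in this post.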

With the possibility that HP Labs will make its ‘memristor chips‘ commercially available in 2014, and with neuron chips to be fabricated for the Boise State University researchers within weeks of this Aug. 19, 2013 date, memristor development seems to be proceeding at a lightning-fast pace. It’s been a fascinating process to observe.

Memristors have always been with us

Sprightly, a word not often used in conjunction with technology of any kind, is the best way of describing the approach that researchers Varun Aggarwal and Gaurav Gandhi, along with Dr. Leon Chua, have taken towards their discovery that memristors are all around us. (For anyone not familiar with the concept, I suggest reading the Wikipedia essay on memristors as it includes information about the various critiques of the memristor definition, as well as the definition itself.)

It was Dexter Johnson in his June 6, 2013 post on the IEEE (Institute of Electrical and Electronics Engineers) Nanoclast blog who alerted me to this latest memristor work (Note: Links have been removed),

Two researchers from mLabs in India, along with Prof. Leon Chua at the University of California Berkeley, who first postulated the memristor in a paper back in 1971, have discovered the simplest physical implementation for the memristor, which can be built by anyone and everyone.

In two separate papers, one published in arXiv (“Bipolar electrical switching in metal-metal contacts”) and the other in the IEEE’s own Circuits and Systems Magazine (“The First Radios Were Made Using Memristors!”), Chua and the researchers, Varun Aggarwal and Gaurav Gandhi, discovered that simple imperfect point contacts all around us act as memristors.

“Our arXiv paper talks about the coherer, which comprises an imperfect metal-metal contact in embodiments such as a point contact between two metallic balls, granular media or a metal-mercury interface,” Gandhi explained to me via e-mail. “On the other hand, the CAS paper comprises an imperfect metal-semiconductor contact (Cat’s Whisker) which was also the first solid-state diode. Both the systems have as their signature an imperfect point contact between two conducting/partially-conducting elements. Both act like memristors.”

I’ll get to the articles in a minute; first, let’s look at the researchers’ website, the mLabs home page (splash page). BTW, I have a soft spot for websites that are easy to navigate and don’t irritate me with movement or pop-ups (thank you, mLabs). I think this description of the researchers (Aggarwal and Gandhi) and how they came to develop mLabs (excerpted from the About us page) explains why I described their approach as sprightly,

As they say, anything can happen over a cup of coffee and this story is no different! Gaurav and Varun were friends for over a decade, and one fine day they were sitting at a coffee house discussing Gaurav’s trip to the Second Memristor and Memristive Symposium at Berkeley. Gaurav shared the exciting work around memristor that he witnessed at Berkeley. Varun, who has been an evangelist of Jagadish Chandra Bose’s work thought there was some correlation between the research work of Bose and memristor. He convinced Gaurav to look deeper into these aspects. Soon, a plan was put forth, they wore their engineering gloves and mLabs was born. Gaurav quit his job for full time involvement at mLabs, while Varun assisted and advised throughout.

Three years of curiosity, experimentation, discussions and support from various researchers and professors from different parts of the world, led us to where we are today.

We are also sincerely grateful to Prof. Leon Chua for his continuous support, mentorship and indispensable contribution to our work.

As Dexter notes, Aggarwal and Gandhi have written papers about two different ways to create memristors. The arXiv paper, Bipolar electrical switching in metal-metal contacts, describes how coherers could be used to create simple memristors for research purposes. This paper also makes the argument that the memristor is a fundamental circuit element (a claim which is a matter of considerable debate, as the Wikipedia Memristor essay notes briefly),

Our new results show that bipolar switching can be observed in a large class of metals by a simple construction in form of a point-contact or granular media. It does not require complex construction, particular materials or small geometries. The signature of all our devices is an imperfect metal-metal contact and the physical mechanism for the observed behavior needs to be further studied. That the electrical behavior of these simple, naturally-occurring physical constructs can be modeled by a memristor, but not the other three passive elements, is an indication of its fundamental nature. By providing the canonic physical implementation for memristor, the present work not only fills an important gap in the study of switching devices, but also brings them into the realm of immediate practical use and implementation.
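For readers wondering what it means to say a device “can be modeled by a memristor, but not the other three passive elements,” here is a little Python sketch of the textbook linear-drift memristor model (the one HP Labs popularized in 2008). It shows resistance depending on the history of the current through the device; the parameters are illustrative and this is not the mLabs coherer model itself,

import math

# Linear ionic drift memristor model, HP-style. Illustrative parameters only.
R_ON, R_OFF = 100.0, 16e3  # bounding resistances, ohms
DRIFT = 1e4                # lumped coefficient mu*R_ON/D^2, 1/(A*s)

x = 0.1     # state variable: doped fraction of the device, 0..1
dt = 1e-3   # timestep, s

for step in range(2000):                 # two periods of a 1 Hz sine drive
    v = math.sin(2 * math.pi * 1.0 * step * dt)
    r = R_ON * x + R_OFF * (1 - x)       # resistance depends on the state
    i = v / r
    x = min(max(x + DRIFT * i * dt, 0.0), 1.0)  # state drifts with current
    if step % 200 == 0:
        print(f"v={v:+.2f} V  i={i * 1e3:+.3f} mA  x={x:.3f}")

Because the state x integrates the current, the device ‘remembers’ how much charge has passed through it; a resistance that depends on that history is the memristor’s signature, and none of the three classic passive elements can reproduce it on its own.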

Because the second article, the one published in the IEEE’s Circuits and Systems Magazine, is behind a paywall, I can’t do much more than offer the title and the first paragraph,

The First Radios Were Made Using Memristors!

In 2008, Williams et al. reported the discovery of the fourth fundamental passive circuit element, memristor, which exhibits electrically controllable state-dependent resistance [1]. We show that one of the first wireless radio detectors, called cat’s whisker, also the world’s first solid-state diode, had memristive properties. We have identified the state variable governing the resistance state of the device and can program it to switch between multiple stable resistance states. Our observations and results are valid for a larger class of devices called coherers, which include the cat’s whisker. These devices constitute the missing canonical physical implementations for a memristor (ref. Fig. 1).

It’s fascinating when you consider that up until now researching memristors meant having high-tech equipment. I wonder how many backyard memristor labs are going to spring up?

On a somewhat related note, Dexter mentions that HP Labs ‘memristor’ products will be available in 2014. This latest date represents two postponements: originally meant to be on the market in the summer of 2013, the new products were then supposed to be brought to market in late 2013 (as per my Feb. 7, 2013 posting; scroll down about 75% of the way).

Extending memristive theory

This is kind of fascinating. A German research team based at JARA (Jülich Aachen Research Alliance) is suggesting that memristive theory be extended beyond passive components in their paper about resistive memory cells (ReRAM), which was recently published in Nature Communications. From the Apr. 26, 2013 news item on Azonano,

Resistive memory cells (ReRAM) are regarded as a promising solution for future generations of computer memories. They will dramatically reduce the energy consumption of modern IT systems while significantly increasing their performance.

Unlike the building blocks of conventional hard disk drives and memories, these novel memory cells are not purely passive components but must be regarded as tiny batteries. This has been demonstrated by researchers of Jülich Aachen Research Alliance (JARA), whose findings have now been published in the prestigious journal Nature Communications. The new finding radically revises the current theory and opens up possibilities for further applications. The research group has already filed a patent application for their first idea on how to improve data readout with the aid of battery voltage.

The Apr. 23, 2013 JARA news release, which originated the news item, provides some background information about data memory before going on to discuss the ReRAMs,

Conventional data memory works on the basis of electrons that are moved around and stored. However, even by atomic standards, electrons are extremely small. It is very difficult to control them, for example by means of relatively thick insulator walls, so that information will not be lost over time. This does not only limit storage density, it also costs a great deal of energy. For this reason, researchers are working feverishly all over the world on nanoelectronic components that make use of ions, i.e. charged atoms, for storing data. Ions are some thousands of times heavier than electrons and are therefore much easier to ‘hold down’. In this way, the individual storage elements can almost be reduced to atomic dimensions, which enormously improves the storage density.

Here’s how the ions behave in ReRAMs (from the news release),

In resistive switching memory cells (ReRAMs), ions behave on the nanometre scale in a similar manner to a battery. The cells have two electrodes, for example made of silver and platinum, at which the ions dissolve and then precipitate again. This changes the electrical resistance, which can be exploited for data storage. Furthermore, the reduction and oxidation processes also have another effect. They generate electric voltage. ReRAM cells are therefore not purely passive systems – they are also active electrochemical components. Consequently, they can be regarded as tiny batteries whose properties provide the key to the correct modelling and development of future data storage.

In complex experiments, the scientists from Forschungszentrum Jülich and RWTH Aachen University determined the battery voltage of typical representatives of ReRAM cells and compared them with theoretical values. This comparison revealed other properties (such as ionic resistance) that were previously neither known nor accessible. “Looking back, the presence of a battery voltage in ReRAMs is self-evident. But during the nine-month review process of the paper now published we had to do a lot of persuading, since the battery voltage in ReRAM cells can have three different basic causes, and the assignment of the correct cause is anything but trivial,” says Dr. Ilia Valov, the electrochemist in Prof. Rainer Waser’s research group.

This discovery could lead to optimizing ReRAMs and exploiting them in new applications (from the news release),

“The new findings will help to solve a central puzzle of international ReRAM research,” says Prof. Rainer Waser, deputy spokesman of the collaborative research centre SFB 917 ‘Nanoswitches’ established in 2011. In recent years, these puzzling aspects include unexplained long-term drift phenomena or systematic parameter deviations, which had been attributed to fabrication methods. “In the light of this new knowledge, it is possible to specifically optimize the design of the ReRAM cells, and it may be possible to discover new ways of exploiting the cells’ battery voltage for completely new applications, which were previously beyond the reach of technical possibilities,” adds Waser, whose group has been collaborating for years with companies such as Intel and Samsung Electronics in the field of ReRAM elements.

The part I found most interesting, given my interest in memristors, is this bit about extending the memristor theory, from the news release,

The new finding is of central significance, in particular, for the theoretical description of the memory components. To date, ReRAM cells have been described with the aid of the concept of memristors – a portmanteau word composed of “memory” and “resistor”. The theoretical concept of memristors can be traced back to Leon Chua in the 1970s. It was first applied to ReRAM cells by the IT company Hewlett-Packard in 2008. It aims at the permanent storage of information by changing the electrical resistance. The memristor theory leads to an important restriction. It is limited to passive components. “The demonstrated internal battery voltage of ReRAM elements clearly violates the mathematical construct of the memristor theory. This theory must be expanded to a whole new theory – to properly describe the ReRAM elements,” says Dr. Eike Linn, the specialist for circuit concepts in the group of authors. [emphases mine] This also places the development of all micro- and nanoelectronic chips on a completely new footing.

Here’s a link to and a citation for the paper,

Nanobatteries in redox-based resistive switches require extension of memristor theory by I. Valov,  E. Linn, S. Tappertzhofen,  S. Schmelzer,  J. van den Hurk,  F. Lentz,  & R. Waser. Nature Communications 4, Article number: 1771 doi:10.1038/ncomms2784 Published 23 April 2013

This paper is open access (as of this writing).
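To see why an internal battery voltage “violates the mathematical construct of the memristor theory,” a few lines of Python suffice. A pure memristor obeys i = G(x)·v, so its current is always zero at zero applied bias (the famous pinched hysteresis loop passes through the origin); a cell with an internal EMF does not. The functional forms and numbers below are my own illustrative assumptions, not the model from the paper,

# Sketch: a memristive element extended with a state-dependent internal
# battery voltage (EMF). Functional forms and values are assumed.

def extended_memristor_current(v_applied, x):
    """Current through a ReRAM-like cell with state x in 0..1."""
    g = 1e-3 * (0.1 + 0.9 * x)  # conductance grows with state, siemens
    v_emf = 0.1 * x             # internal battery voltage grows with state
    return g * (v_applied - v_emf)

# A pure memristor would give exactly zero current at zero bias;
# the "nanobattery" term makes the current nonzero instead.
print(extended_memristor_current(0.0, 0.8))  # nonzero at v = 0
print(extended_memristor_current(0.5, 0.8))

That nonzero reading at zero bias is, in miniature, the effect the JARA team says could be exploited to improve data readout.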

Here’s a list of my 2013 postings on memristors and memristive devices,

2.5M Euros for Ireland’s John Boland and his memristive nanowires (Apr. 4, 2013 posting)

How to use a memristor to create an artificial brain (Feb. 26, 2013 posting)

CeNSE (Central Nervous System of the Earth) and billions of tiny sensors from HP plus a memristor update (Feb. 7, 2013 posting)

For anyone who cares to search the blog, there are several more.

2.5M Euros for Ireland’s John Boland and his memristive nanowires

The announcement makes no mention of the memristor or neuromorphic engineering but those are the areas in which John Boland works and the reason for his 2.5M Euro research award. From the Apr. 3, 2013 news item on Nanowerk,

Professor John Boland, Director of CRANN, the SFI-funded [Science Foundation of Ireland] nanoscience institute based at Trinity College Dublin, and a Professor in the School of Chemistry, has been awarded a €2.5 million research grant by the European Research Council (ERC). This is only the second Advanced ERC grant ever awarded in Physical Sciences in Ireland.

The Award will see Professor Boland and his team continue world-leading research into how nanowire networks can lead to a range of smart materials, sensors and digital memory applications. The research could result in computer networks that mimic the functions of the human brain and vastly improve on current computer capabilities such as facial recognition.

The University of Dublin’s Trinity College CRANN (Centre for Research on Adaptive Nanostructures and Nanodevices) April 3, 2013 news release, which originated the news item,  provides details about Boland’s proposed nanowire network,

Nanowires are spaghetti-like structures, made of materials such as copper or silicon. They are just a few atoms thick and can be readily engineered into tangled networks of nanowires. Researchers worldwide are investigating the possibility that nanowires hold the future of energy production (solar cells) and could deliver the next generation of computers.

Professor Boland has discovered that exposing a random network of nanowires to stimuli like electricity, light and chemicals generates chemical reactions at the junctions where the nanowires cross. By controlling the stimuli, it is possible to harness these reactions to manipulate the connectivity within the network. This could eventually allow computations that mimic the functions of the nerves in the human brain – particularly the development of associative memory functions, which could lead to significant advances in areas such as facial recognition.
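Here’s a toy Python sketch of the general idea as I understand it: wires as nodes, crossings as junctions whose conductance strengthens with repeated use, which is what gives the network its associative-memory flavour. The network size, wiring probability and update rule are all my own illustrative assumptions, not CRANN’s model,

import random

random.seed(1)
N_WIRES = 20

# Junctions form where wires happen to cross; they start weakly conductive.
junctions = {(a, b): 0.01
             for a in range(N_WIRES) for b in range(a + 1, N_WIRES)
             if random.random() < 0.2}

def stimulate(active_wires):
    """Strengthen junctions between co-activated wires, Hebbian-style."""
    for a in active_wires:
        for b in active_wires:
            if a < b and (a, b) in junctions:
                junctions[(a, b)] = min(junctions[(a, b)] + 0.1, 1.0)

print(sum(junctions.values()))  # total connectivity before stimulation
for _ in range(3):
    stimulate([0, 3, 7, 12])    # a repeated stimulus reinforces one pattern
print(sum(junctions.values()))  # connectivity has grown along that pattern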

Commenting Professor John Boland said, “This funding from the European Research Council allows me to continue my work to deliver the next generation of computing, which differs from the traditional digital approach.  The human brain is neurologically advanced and exploits connectivity that is controlled by electrical and chemical signals. My research will create nanowire networks that have the potential to mimic aspects of the neurological functions of the human brain, which may revolutionise the performance of current day computers.   It could be truly ground-breaking.”

It’s only in the news release’s accompanying video that the memristor and neuromorphic engineering are mentioned,

I have written many times about the memristor, most recently in a Feb. 26, 2013 posting titled, How to use a memristor to create an artificial brain, where I noted a proposed ‘blueprint’ for an artificial brain. A contested concept, the memristor has attracted critical commentary as noted in a Mar. 19, 2013 comment added to the ‘blueprint’  post,

A Sceptic says:

….

Before talking about blueprints, one has to consider that the dynamic state equations describing so-called non-volatile memristors are in conflict with fundamentals of physics. These problems are discussed in:

“Fundamental Issues and Problems in the Realization of Memristors” by P. Meuffels and R. Soni (http://arxiv.org/abs/1207.7319)

“On the physical properties of memristive, memcapacitive, and meminductive systems” by M. Di Ventra and Y. V. Pershin (http://arxiv.org/abs/1302.7063)

How to use a memristor to create an artificial brain

Dr. Andy Thomas of Bielefeld University’s (Germany) Faculty of Physics has developed a ‘blueprint’ for an artificial brain based on memristors. From the Feb. 26, 2013, news item on phys.org,

Scientists have long been dreaming about building a computer that would work like a brain. This is because a brain is far more energy-saving than a computer, it can learn by itself, and it doesn’t need any programming. Privatdozent [senior lecturer] Dr. Andy Thomas from Bielefeld University’s Faculty of Physics is experimenting with memristors – electronic microcomponents that imitate natural nerves. Thomas and his colleagues proved that they could do this a year ago. They constructed a memristor that is capable of learning. Andy Thomas is now using his memristors as key components in a blueprint for an artificial brain. He will be presenting his results at the beginning of March in the print edition of the Journal of Physics D: Applied Physics.

The Feb. 26, 2013 University of Bielefeld news release, which originated the news item, describes why memristors are the foundation for Thomas’s proposed artificial brain,

Memristors are made of fine nanolayers and can be used to connect electric circuits. For several years now, the memristor has been considered to be the electronic equivalent of the synapse. Synapses are, so to speak, the bridges across which nerve cells (neurons) contact each other. Their connections increase in strength the more often they are used. Usually, one nerve cell is connected to other nerve cells across thousands of synapses.

Like synapses, memristors learn from earlier impulses. In their case, these are electrical impulses that (as yet) do not come from nerve cells but from the electric circuits to which they are connected. The amount of current a memristor allows to pass depends on how strong the current was that flowed through it in the past and how long it was exposed to it.

Andy Thomas explains that because of their similarity to synapses, memristors are particularly suitable for building an artificial brain – a new generation of computers. ‘They allow us to construct extremely energy-efficient and robust processors that are able to learn by themselves.’ Based on his own experiments and research findings from biology and physics, his article is the first to summarize which principles taken from nature need to be transferred to technological systems if such a neuromorphic (nerve like) computer is to function. Such principles are that memristors, just like synapses, have to ‘note’ earlier impulses, and that neurons react to an impulse only when it passes a certain threshold.

‘… a memristor can store information more precisely than the bits on which previous computer processors have been based,’ says Thomas. Both a memristor and a bit work with electrical impulses. However, a bit does not allow any fine adjustment – it can only work with ‘on’ and ‘off’. In contrast, a memristor can raise or lower its resistance continuously. ‘This is how memristors deliver a basis for the gradual learning and forgetting of an artificial brain,’ explains Thomas.
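Thomas’s bit-versus-memristor contrast fits in a few lines of Python: a bit snaps between two values, while a memristor-like element nudges its conductance up or down a little with each pulse, which is what makes gradual learning and forgetting possible. The step sizes and bounds are illustrative assumptions on my part,

# A bit is all-or-nothing; a memristor-like synapse adjusts continuously.
# Step size and bounds are illustrative assumptions.

class Bit:
    def __init__(self):
        self.state = 0
    def write(self, value):
        self.state = 1 if value else 0  # only 'on' or 'off'

class MemristiveSynapse:
    def __init__(self):
        self.conductance = 0.5
    def pulse(self, strengthen):
        step = 0.05 if strengthen else -0.05
        self.conductance = min(max(self.conductance + step, 0.0), 1.0)

syn = MemristiveSynapse()
for _ in range(3):
    syn.pulse(strengthen=True)   # gradual learning
print(syn.conductance)           # ~0.65, an intermediate analogue value
syn.pulse(strengthen=False)      # gradual forgetting
print(syn.conductance)           # ~0.6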

A nanocomponent that is capable of learning: The Bielefeld memristor built into a chip here is 600 times thinner than a human hair. [downloaded from http://ekvv.uni-bielefeld.de/blog/uninews/entry/blueprint_for_an_artificial_brain]

Here’s a citation for and link to the paper (from the university news release),

Andy Thomas, ‘Memristor-based neural networks’, Journal of Physics D: Applied Physics, http://dx.doi.org/10.1088/0022-3727/46/9/093001, released online on 5 February 2013, published in print on 6 March 2013.

This paper is available until March 5, 2013, as IOP Science (publisher of the Journal of Physics D: Applied Physics) makes its papers freely available (with some provisos) for the first 30 days after online publication. From the Access Options page for Memristor-based neural networks,

As a service to the community, IOP is pleased to make papers in its journals freely available for 30 days from date of online publication – but only fair use of the content is permitted.

Under fair use, IOP content may only be used by individuals for the sole purpose of their own private study or research. Such individuals may access, download, store, search and print hard copies of the text. Copying should be limited to making single printed or electronic copies.

Other use is not considered fair use. In particular, use by persons other than for the purpose of their own private study or research is not fair use. Nor is altering, recompiling, reselling, systematic or programmatic copying, redistributing or republishing. Regular/systematic downloading of content or the downloading of a substantial proportion of the content is not fair use either.

Getting back to the memristor, I’ve been writing about it for some years; it was most recently mentioned here in a Feb. 7, 2013 posting, and in a Dec. 24, 2012 posting I mentioned nanoionic nanodevices, also described as resembling synapses.