Tag Archives: Intel

Book announcement: Atomistic Simulation of Quantum Transport in Nanoelectronic Devices

For anyone who’s curious about where we go after creating chips at the 7nm size, this may be the book for you. Here’s more from a July 27, 2016 news item on Nanowerk,

In the year 2015, Intel, Samsung and TSMC began to mass-market the 14nm technology called FinFETs. In the same year, IBM, working with Global Foundries, Samsung, SUNY, and various equipment suppliers, announced their success in fabricating 7nm devices. A 7nm silicon channel is about 50 atomic layers and these devices are truly atomic! It is clear that we have entered an era of atomic scale transistors. How do we model the carrier transport in such atomic scale devices?

One way is to improve existing device models by including more and more parameters. This is called the top-down approach. However, as device sizes shrink, the number of parameters grows rapidly, making the top-down approach more and more sophisticated and challenging. Most importantly, to continue Moore’s law, electronic engineers are exploring new electronic materials and new operating mechanisms. These efforts are beyond the scope of well-established device models — hence significant changes are necessary to the top-down approach.

An alternative way is called the bottom-up approach. The idea is to build up nanoelectronic devices atom by atom on a computer, and predict the transport behavior from first principles. By doing so, one is allowed to go inside atomic structures and see what happens from there. The elegance of the approach comes from its unification and generality. Everything comes out naturally from the very basic principles of quantum mechanics and nonequilibrium statistics. The bottom-up approach is complementary to the top-down approach, and is extremely useful for testing innovative ideas of future technologies.

A July 27, 2016 World Scientific news release on EurekAlert, which originated the news item, delves into the topics covered by the book,

In recent decades, several device simulation tools using the bottom-up approach have been developed in universities and software companies. Some examples are McDcal, Transiesta, Atomistic Tool Kit, Smeagol, NanoDcal, NanoDsim, OpenMX, GPAW and NEMO-5. These software tools are capable of predicting electric current flowing through a nanostructure. Essentially the input is the atomic coordinates and the output is the electric current. These software tools have been applied extensively to study emerging electronic materials and devices.

However, developing such a software tool is extremely difficult. It takes years of experience and requires knowledge of and techniques in condensed matter physics, computer science, electronic engineering, and applied mathematics. In a library, one can find books on density functional theory, books on quantum transport, books on computer programming, books on numerical algorithms, and books on device simulation. But one can hardly find a book integrating all these fields for the purpose of nanoelectronic device simulation.

“Atomistic Simulation of Quantum Transport in Nanoelectronic Devices” (With CD-ROM) fills the chasm. Authors Yu Zhu and Lei Liu have experience in both academic research and software development. Yu Zhu is the project manager of NanoDsim, and Lei Liu is the project manager of NanoDcal. The content of the book is based on Zhu and Liu’s combined R&D experience of more than forty years.

In this book, the authors conduct an experiment and adopt a “paradigm” approach. Instead of organizing materials by fields, they focus on the development of one particular software tool called NanoDsim, and provide relevant knowledge and techniques whenever needed. The black box of NanoDsim is opened, and the complete procedure from theoretical derivation, to numerical implementation, all the way to device simulation is illustrated. The affiliated source code of NanoDsim also provides an open platform for new researchers.

I’m not recommending the book as I haven’t read it but it does seem intriguing. For anyone who wishes to purchase it, you can do that here.

I wrote about IBM and its 7nm chip in a July 15, 2015 post.

Short-term exposure to engineered nanoparticles used for semiconductors not too risky?

Short-term exposure means anywhere from 30 minutes to 48 hours according to the news release, and the concentration is much higher than would be expected in current real-life conditions. Still, this research from the University of Arizona and collaborators represents an addition to the data about engineered nanoparticles (ENP) and their possible impact on health and safety. From a Feb. 22, 2016 news item on phys.org,

Short-term exposure to engineered nanoparticles used in semiconductor manufacturing poses little risk to people or the environment, according to a widely read research paper from a University of Arizona-led research team.

Co-authored by 27 researchers from eight U.S. universities, the article, “Physical, chemical and in vitro toxicological characterization of nanoparticles in chemical mechanical planarization suspensions used in the semiconductor industry: towards environmental health and safety assessments,” was published in the Royal Society of Chemistry journal Environmental Science: Nano in May 2015. The paper, which calls for further analysis of potential toxicity for longer exposure periods, was one of the journal’s 10 most downloaded papers in 2015.

A Feb. 17, 2016 University of Arizona news release (also on EurekAlert), which originated the news item, provides more detail,

“This study is extremely relevant both for industry and for the public,” said Reyes Sierra, lead researcher of the study and professor of chemical and environmental engineering at the University of Arizona.

Small Wonder

Engineered nanoparticles are used to make semiconductors, solar panels, satellites, food packaging, food additives, batteries, baseball bats, cosmetics, sunscreen and countless other products. They also hold great promise for biomedical applications, such as cancer drug delivery systems.

Designing and studying nano-scale materials is no small feat. Most university researchers produce them in the laboratory to approximate those used in industry. But for this study, Cabot Microelectronics provided slurries of engineered nanoparticles to the researchers.

“Minus a few proprietary ingredients, our slurries were exactly the same as those used by companies like Intel and IBM,” Sierra said. Both companies collaborated on the study.

The engineers analyzed the physical, chemical and biological attributes of four metal oxide nanomaterials — ceria, alumina, and two forms of silica — commonly used in chemical mechanical planarization slurries for making semiconductors.

Clean Manufacturing

Chemical mechanical planarization is the process used to etch and polish silicon wafers to be smooth and flat so the hundreds of silicon chips attached to their surfaces will produce properly functioning circuits. Even the most infinitesimal scratch on a wafer can wreak havoc on the circuitry.

When their work is done, engineered nanoparticles are released to wastewater treatment facilities. Engineered nanoparticles are not regulated, and their prevalence in the environment is poorly understood [emphasis mine].

Researchers at the UA and around the world are studying the potential effects of these tiny and complex materials on human health and the environment.

“One of the few things we know for sure about engineered nanoparticles is that they behave very differently than other materials,” Sierra said. “For example, they have much greater surface area relative to their volume, which can make them more reactive. We don’t know whether this greater reactivity translates to enhanced toxicity.”

The researchers exposed the four nanoparticles, suspended in separate slurries, to adenocarcinoma human alveolar basal epithelial cells at doses up to 2,000 milligrams per liter for 24 to 38 hours, and to marine bacteria cells, Aliivibrio fischeri, up to 1,300 milligrams per liter for approximately 30 minutes.

These concentrations are much higher than would be expected in the environment, Sierra said.

Using a variety of techniques, including toxicity bioassays, electron microscopy, mass spectrometry and laser scattering, to measure such factors as particle size, surface area and particle composition, the researchers determined that all four nanoparticles posed low risk to the human and bacterial cells.

“These nanoparticles showed no adverse effects on the human cells or the bacteria, even at very high concentrations,” Sierra said. “The cells showed the very same behavior as cells that were not exposed to nanoparticles.”

The authors recommended further studies to characterize potential adverse effects at longer exposures and higher concentrations.

“Think of a fish in a stream where wastewater containing nanoparticles is discharged,” Sierra said. “Exposure to the nanoparticles could be for much longer.”

Here’s a link to and a citation for the paper,

Physical, chemical, and in vitro toxicological characterization of nanoparticles in chemical mechanical planarization suspensions used in the semiconductor industry: towards environmental health and safety assessments by David Speed, Paul Westerhoff, Reyes Sierra-Alvarez, Rockford Draper, Paul Pantano, Shyam Aravamudhan, Kai Loon Chen, Kiril Hristovski, Pierre Herckes, Xiangyu Bi, Yu Yang, Chao Zeng, Lila Otero-Gonzalez, Carole Mikoryak, Blake A. Wilson, Karshak Kosaraju, Mubin Tarannum, Steven Crawford, Peng Yi, Xitong Liu, S. V. Babu, Mansour Moinpour, James Ranville, Manuel Montano, Charlie Corredor, Jonathan Posner, and Farhang Shadman. Environ. Sci.: Nano, 2015, 2, 227-244 DOI: 10.1039/C5EN00046G First published online 14 May 2015

This is open access but you may need to register before reading the paper.

The bit about nanoparticles’ “… prevalence in the environment is poorly understood …” and the focus of this research reminded me of an April 2014 announcement (my April 8, 2014 posting; scroll down about 40% of the way) regarding a new research network being hosted by Arizona State University, the LCnano network, which is part of the Life Cycle of Nanomaterials project being funded by the US National Science Foundation. The network’s (LCnano) director is Paul Westerhoff, who is also one of this report’s authors.

Memristor shakeup

New discoveries suggest that memristors do not function as was previously theorized. (For anyone who wants a memristor description, there’s this Wikipedia entry.) From an Oct. 13, 2015 posting by Alexander Hellemans for the Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website), Note: Links have been removed,

What’s going to replace flash? The R&D arms of several companies including Hewlett Packard, Intel, and Samsung think the answer might be memristors (also called resistive RAM, ReRAM, or RRAM). These devices have a chance at unseating the non-volatile memory champion because they use little energy, are very fast, and retain data without requiring power. However, new research indicates that they don’t work in quite the way we thought they do.

The fundamental mechanism at the heart of how a memristor works is something called an “imperfect point contact,” which was predicted in 1971, long before anybody had built working devices. When voltage is applied to a memristor cell, it reduces the resistance across the device. This change in resistance can be read out by applying another, smaller voltage. By inverting the voltage, the resistance of the device is returned to its initial value, that is, the stored information is erased.
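
To make that write/read/erase cycle concrete, here is a minimal Python sketch of a bipolar resistive cell. Everything in it is an illustrative assumption (the class name, the two fixed resistance values, the switching threshold); real memristors switch gradually and their behaviour depends heavily on materials and geometry.

```python
# Toy model of bipolar memristive switching, purely illustrative.
# The names, thresholds and resistance values are invented for the sketch.

class ToyMemristor:
    R_ON = 1e3    # low-resistance ("1") state, ohms (assumed value)
    R_OFF = 1e6   # high-resistance ("0") state, ohms (assumed value)
    V_SET = 1.0   # writing threshold, volts (assumed value)

    def __init__(self):
        self.resistance = self.R_OFF  # start in the erased state

    def apply(self, voltage):
        """Write with a large voltage, read with a small one."""
        if voltage >= self.V_SET:       # forward bias: lower the resistance
            self.resistance = self.R_ON
        elif voltage <= -self.V_SET:    # reverse bias: restore the resistance (erase)
            self.resistance = self.R_OFF
        # A small read voltage changes nothing but lets us sense the state.
        return voltage / self.resistance  # measured current, Ohm's law

cell = ToyMemristor()
cell.apply(1.5)                      # SET: store a "1"
print(cell.apply(0.2) > 1e-4)        # read: large current -> True ("1")
cell.apply(-1.5)                     # RESET: erase back to "0"
print(cell.apply(0.2) > 1e-4)        # read: small current -> False ("0")
```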

Over the last decade researchers have produced two commercially promising types of memristors: electrochemical metallization memory (ECM) cells, and valence change mechanism memory (VCM) cells.

Now international research teams led by Ilia Valov at the Peter Grünberg Institute in Jülich, Germany, report in Nature Nanotechnology and Advanced Materials that they have identified new processes that erase many of the differences between ECM and VCM cells.

Valov and coworkers in Germany, Japan, Korea, Greece, and the United States started investigating memristors that had a tantalum oxide electrolyte and an active tantalum electrode. “Our studies show that these two types of switching mechanisms in fact can be bridged, and we don’t have a purely oxygen type of switching as was believed, but that also positive [metal] ions, originating from the active electrode, are mobile,” explains Valov.

Here are links to and citations for both papers,

Graphene-Modified Interface Controls Transition from VCM to ECM Switching Modes in Ta/TaOx Based Memristive Devices by Michael Lübben, Panagiotis Karakolis, Vassilios Ioannou-Sougleridis, Pascal Normand, Panagiotis Dimitrakis, & Ilia Valov. Advanced Materials DOI: 10.1002/adma.201502574 First published: 10 September 2015

Nanoscale cation motion in TaOx, HfOx and TiOx memristive systems by Anja Wedig, Michael Luebben, Deok-Yong Cho, Marco Moors, Katharina Skaja, Vikas Rana, Tsuyoshi Hasegawa, Kiran K. Adepalli, Bilge Yildiz, Rainer Waser, & Ilia Valov. Nature Nanotechnology (2015) doi:10.1038/nnano.2015.221 Published online 28 September 2015

Both papers are behind paywalls.

Intel’s 14nm chip: architecture revealed and scientist discusses the limits to computers

Anxieties about how much longer we can design and manufacture smaller, faster computer chips are commonplace even as companies continue to announce new, faster, smaller chips. Just before the US National Science Foundation (NSF) issued a press release concerning a Nature (journal) essay on the limits of computation, Intel announced a new microarchitecture for its 14nm chips.

First, there’s Intel. In an Aug. 12, 2014 news item on Azonano, Intel announced its newest microarchitecture optimization,

Intel today disclosed details of its newest microarchitecture that is optimized with Intel’s industry-leading 14nm manufacturing process. Together these technologies will provide high-performance and low-power capabilities to serve a broad array of computing needs and products from the infrastructure of cloud computing and the Internet of Things to personal and mobile computing.

An Aug. 11, 2014 Intel news release, which originated the news item, lists key points,

  • Intel disclosed details of the microarchitecture of the Intel® Core™ M processor, the first product to be manufactured using 14nm.
  • The combination of the new microarchitecture and manufacturing process will usher in a wave of innovation in new form factors, experiences and systems that are thinner and run silent and cool.
  • Intel architects and chip designers have achieved greater than two times reduction in the thermal design point when compared to a previous generation of processor while providing similar performance and improved battery life.
  • The new microarchitecture was optimized to take advantage of the new capabilities of the 14nm manufacturing process.
  • Intel has delivered the world’s first 14nm technology in volume production. It uses second-generation Tri-gate (FinFET) transistors with industry-leading performance, power, density and cost per transistor.
  • Intel’s 14nm technology will be used to manufacture a wide range of high-performance to low-power products including servers, personal computing devices and Internet of Things.
  • The first systems based on the Intel® Core™ M processor will be on shelves for the holiday selling season followed by broader OEM availability in the first half of 2015.
  • Additional products based on the Broadwell microarchitecture and 14nm process technology will be introduced in the coming months.

The company has made available supporting materials including videos titled, ‘Advancing Moore’s Law in 2014’, ‘Microscopic Mark Bohr: 14nm Explained’, and ‘Intel 14nm Manufacturing Process’ which can be found here. An earlier mention of Intel and its 14nm manufacturing process can be found in my July 9, 2014 posting.

Meanwhile, in a more contemplative mood, Igor Markov of the University of Michigan has written an essay for Nature questioning the limits of computation as per an Aug. 14, 2014 news item on Azonano,

From their origins in the 1940s as sequestered, room-sized machines designed for military and scientific use, computers have made a rapid march into the mainstream, radically transforming industry, commerce, entertainment and governance while shrinking to become ubiquitous handheld portals to the world.

This progress has been driven by the industry’s ability to continually innovate techniques for packing increasing amounts of computational circuitry into smaller and denser microchips. But with miniature computer processors now containing millions of closely-packed transistor components of near atomic size, chip designers are facing both engineering and fundamental limits that have become barriers to the continued improvement of computer performance.

Have we reached the limits to computation?

In a review article in this week’s issue of the journal Nature, Igor Markov of the University of Michigan reviews limiting factors in the development of computing systems to help determine what is achievable, identifying “loose” limits and viable opportunities for advancements through the use of emerging technologies. His research for this project was funded in part by the National Science Foundation (NSF).

An Aug. 13, 2014 NSF news release, which originated the news item, describes Markov’s Nature essay in greater detail,

“Just as the second law of thermodynamics was inspired by the discovery of heat engines during the industrial revolution, we are poised to identify fundamental laws that could enunciate the limits of computation in the present information age,” says Sankar Basu, a program director in NSF’s Computer and Information Science and Engineering Directorate. “Markov’s paper revolves around this important intellectual question of our time and briefly touches upon most threads of scientific work leading up to it.”

The article summarizes and examines limitations in the areas of manufacturing and engineering, design and validation, power and heat, time and space, as well as information and computational complexity.

“What are these limits, and are some of them negotiable? On which assumptions are they based? How can they be overcome?” asks Markov. “Given the wealth of knowledge about limits to computation and complicated relations between such limits, it is important to measure both dominant and emerging technologies against them.”

Limits related to materials and manufacturing are immediately perceptible. In a material layer ten atoms thick, missing one atom due to imprecise manufacturing changes electrical parameters by ten percent or more. Shrinking designs of this scale further inevitably leads to quantum physics and associated limits.

Limits related to engineering are dependent upon design decisions, technical abilities and the ability to validate designs. While very real, these limits are difficult to quantify. However, once the premises of a limit are understood, obstacles to improvement can potentially be eliminated. One such breakthrough has been in writing software to automatically find, diagnose and fix bugs in hardware designs.

Limits related to power and energy have been studied for many years, but only recently have chip designers found ways to improve the energy consumption of processors by temporarily turning off parts of the chip. There are many other clever tricks for saving energy during computation. But moving forward, silicon chips will not maintain the pace of improvement without radical changes. Atomic physics suggests intriguing possibilities but these are far beyond modern engineering capabilities.

Limits relating to time and space can be felt in practice. The speed of light, while a very large number, limits how fast data can travel. Traveling through copper wires and silicon transistors, a signal can no longer traverse a chip in one clock cycle today. A formula limiting parallel computation in terms of device size, communication speed and the number of available dimensions has been known for more than 20 years, but only recently has it become important now that transistors are faster than interconnections. This is why alternatives to conventional wires are being developed, but in the meantime mathematical optimization can be used to reduce the length of wires by rearranging transistors and other components.
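
The clock-cycle point is easy to sanity-check with rough numbers. In the sketch below, the 3 GHz clock, the 20 mm die and the 100 ps/mm delay figure for a repeated on-chip wire are all illustrative assumptions rather than data for any particular chip; the point is simply that wire delay, not the vacuum speed of light, is what makes crossing a die take several cycles.

```python
# Back-of-the-envelope check of the "one clock cycle" claim.
# All numbers are illustrative assumptions, not data for a specific chip.

c = 3.0e8                      # speed of light in vacuum, m/s
clock_hz = 3.0e9               # assumed clock frequency (3 GHz)
die_edge_mm = 20.0             # assumed die edge length (20 mm)
wire_delay_ps_per_mm = 100.0   # assumed delay of a repeated on-chip global wire

cycle_ps = 1e12 / clock_hz                           # ~333 ps per cycle
light_per_cycle_mm = c * (cycle_ps * 1e-12) * 1000   # distance light covers per cycle
wire_cycles_across_die = die_edge_mm * wire_delay_ps_per_mm / cycle_ps

print(f"vacuum light travels ~{light_per_cycle_mm:.0f} mm per clock cycle")
print(f"a repeated on-chip wire needs ~{wire_cycles_across_die:.1f} cycles to cross the die")
```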

Several key limits related to information and computational complexity have been reached by modern computers. Some categories of computational tasks are conjectured to be so difficult to solve that no proposed technology, not even quantum computing, promises consistent advantage. But studying each task individually often helps reformulate it for more efficient computation.

When a specific limit is approached and obstructs progress, understanding the assumptions made is key to circumventing it. Chip scaling will continue for the next few years, but each step forward will meet serious obstacles, some too powerful to circumvent.

What about breakthrough technologies? New techniques and materials can be helpful in several ways and can potentially be “game changers” with respect to traditional limits. For example, carbon nanotube transistors provide greater drive strength and can potentially reduce delay, decrease energy consumption and shrink the footprint of an overall circuit. On the other hand, fundamental limits–sometimes not initially anticipated–tend to obstruct new and emerging technologies, so it is important to understand them before promising a new revolution in power, performance and other factors.

“Understanding these important limits,” says Markov, “will help us to bet on the right new techniques and technologies.”

Here’s a link to and a citation for Markov’s article,

Limits on fundamental limits to computation by Igor L. Markov. Nature 512, 147–154 (14 August 2014) doi:10.1038/nature13570 Published online 13 August 2014

This paper is behind a paywall but a free preview is available via ReadCube Access.

It’s a fascinating question, what are the limits? It’s one being asked not only with regard to computation but also to medicine, human enhancement, and artificial intelligence for just a few areas of endeavour.

IBM weighs in with plans for a 7nm computer chip

On the heels of Intel’s announcement about a deal utilizing their 14nm low-power manufacturing process and speculations about a 10nm computer chip (my July 9, 2014 posting), IBM makes an announcement about a 7nm chip as per this July 10, 2014 news item on Azonano,

IBM today [July 10, 2014] announced it is investing $3 billion over the next 5 years in two broad research and early stage development programs to push the limits of chip technology needed to meet the emerging demands of cloud computing and Big Data systems. These investments will push IBM’s semiconductor innovations from today’s breakthroughs into the advanced technology leadership required for the future.

A very comprehensive July 10, 2014 news release lays out the company’s plans for this $3B investment representing 10% of IBM’s total research budget,

The first research program is aimed at so-called “7 nanometer and beyond” silicon technology that will address serious physical challenges that are threatening current semiconductor scaling techniques and will impede the ability to manufacture such chips. The second is focused on developing alternative technologies for post-silicon era chips using entirely different approaches, which IBM scientists and other experts say are required because of the physical limitations of silicon based semiconductors.

Cloud and big data applications are placing new challenges on systems, just as the underlying chip technology is facing numerous significant physical scaling limits.  Bandwidth to memory, high speed communication and device power consumption are becoming increasingly challenging and critical.

The teams will comprise IBM Research scientists and engineers from Albany and Yorktown, New York; Almaden, California; and Europe. In particular, IBM will be investing significantly in emerging areas of research that are already underway at IBM such as carbon nanoelectronics, silicon photonics, new memory technologies, and architectures that support quantum and cognitive computing. [emphasis mine]

These teams will focus on providing orders of magnitude improvement in system level performance and energy efficient computing. In addition, IBM will continue to invest in the nanosciences and quantum computing–two areas of fundamental science where IBM has remained a pioneer for over three decades.

7 nanometer technology and beyond

IBM Researchers and other semiconductor experts predict that while challenging, semiconductors show promise to scale from today’s 22 nanometers down to 14 and then 10 nanometers in the next several years.  However, scaling to 7 nanometers and perhaps below, by the end of the decade will require significant investment and innovation in semiconductor architectures as well as invention of new tools and techniques for manufacturing.

“The question is not if we will introduce 7 nanometer technology into manufacturing, but rather how, when, and at what cost?” said John Kelly, senior vice president, IBM Research. “IBM engineers and scientists, along with our partners, are well suited for this challenge and are already working on the materials science and device engineering required to meet the demands of the emerging system requirements for cloud, big data, and cognitive systems. This new investment will ensure that we produce the necessary innovations to meet these challenges.”

“Scaling to 7nm and below is a terrific challenge, calling for deep physics competencies in processing nano materials affinities and characteristics. IBM is one of a very few companies who has repeatedly demonstrated this level of science and engineering expertise,” said Richard Doherty, technology research director, The Envisioneering Group.

Bridge to a “Post-Silicon” Era

Silicon transistors, tiny switches that carry information on a chip, have been made smaller year after year, but they are approaching a point of physical limitation. Their increasingly small dimensions, now reaching the nanoscale, will prohibit any gains in performance due to the nature of silicon and the laws of physics. Within a few more generations, classical scaling and shrinkage will no longer yield the sizable benefits of lower power, lower cost and higher speed processors that the industry has become accustomed to.

With virtually all electronic equipment today built on complementary metal–oxide–semiconductor (CMOS) technology, there is an urgent need for new materials and circuit architecture designs compatible with this engineering process as the technology industry nears physical scalability limits of the silicon transistor.

Beyond 7 nanometers, the challenges dramatically increase, requiring a new kind of material to power systems of the future, and new computing platforms to solve problems that are unsolvable or difficult to solve today. Potential alternatives include new materials such as carbon nanotubes, and non-traditional computational approaches such as neuromorphic computing, cognitive computing, machine learning techniques, and the science behind quantum computing.

As the leader in advanced schemes that point beyond traditional silicon-based computing, IBM holds over 500 patents for technologies that will drive advancements at 7nm and beyond silicon — more than twice the nearest competitor. These continued investments will accelerate the invention and introduction into product development for IBM’s highly differentiated computing systems for cloud, and big data analytics.

Several exploratory research breakthroughs that could lead to major advancements in delivering dramatically smaller, faster and more powerful computer chips, include quantum computing, neurosynaptic computing, silicon photonics, carbon nanotubes, III-V technologies, low power transistors and graphene:

Quantum Computing

The most basic piece of information that a typical computer understands is a bit. Much like a light that can be switched on or off, a bit can have only one of two values: “1” or “0.” A quantum bit, or qubit, by contrast, can hold the values “1” and “0” at the same time. Described as superposition, this special property of qubits enables quantum computers to weed through millions of solutions all at once, while desktop PCs would have to consider them one at a time.
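
A rough way to see why that matters: the state space grows exponentially with the number of bits, and n qubits can occupy a superposition over that whole space at once. The one-liner below just counts the basis states; the speed-ups in real quantum algorithms come from interference and problem structure rather than naive parallel checking, but the exponential size of the space is the starting point.

```python
# Illustrative only: the number of classical bit patterns for n bits,
# which is also the number of basis states n qubits can superpose over.
for n in (8, 20, 50):
    print(f"{n} (qu)bits -> {2**n:,} basis states")
```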

IBM is a world leader in superconducting qubit-based quantum computing science and is a pioneer in the field of experimental and theoretical quantum information, fields that are still in the category of fundamental science – but one that, in the long term, may allow the solution of problems that are today either impossible or impractical to solve using conventional machines. The team recently demonstrated the first experimental realization of parity check with three superconducting qubits, an essential building block for one type of quantum computer.

Neurosynaptic Computing

Bringing together nanoscience, neuroscience, and supercomputing, IBM and university partners have developed an end-to-end ecosystem including a novel non-von Neumann architecture, a new programming language, as well as applications. This novel technology allows for computing systems that emulate the brain’s computing efficiency, size and power usage. IBM’s long-term goal is to build a neurosynaptic system with ten billion neurons and a hundred trillion synapses, all while consuming only one kilowatt of power and occupying less than two liters of volume.

Silicon Photonics

IBM has been a pioneer in the area of CMOS integrated silicon photonics for over 12 years, a technology that integrates functions for optical communications on a silicon chip, and the IBM team has recently designed and fabricated the world’s first monolithic silicon photonics based transceiver with wavelength division multiplexing.  Such transceivers will use light to transmit data between different components in a computing system at high data rates, low cost, and in an energetically efficient manner.

Silicon nanophotonics takes advantage of pulses of light for communication rather than traditional copper wiring and provides a super highway for large volumes of data to move at rapid speeds between computer chips in servers, large datacenters, and supercomputers, thus alleviating the limitations of congested data traffic and high-cost traditional interconnects.

Businesses are entering a new era of computing that requires systems to process and analyze, in real-time, huge volumes of information known as Big Data. Silicon nanophotonics technology provides answers to Big Data challenges by seamlessly connecting various parts of large systems, whether few centimeters or few kilometers apart from each other, and move terabytes of data via pulses of light through optical fibers.

III-V technologies

IBM researchers have demonstrated the world’s highest transconductance on a self-aligned III-V channel metal-oxide semiconductor (MOS) field-effect transistor (FET) device structure that is compatible with CMOS scaling. These materials and structural innovations are expected to pave the path for technology scaling at 7nm and beyond. With more than an order of magnitude higher electron mobility than silicon, integrating III-V materials into CMOS enables higher performance at lower power density, allowing for an extension of power/performance scaling to meet the demands of cloud computing and big data systems.

Carbon Nanotubes

IBM Researchers are working in the area of carbon nanotube (CNT) electronics and exploring whether CNTs can replace silicon beyond the 7 nm node.  As part of its activities for developing carbon nanotube based CMOS VLSI circuits, IBM recently demonstrated — for the first time in the world — 2-way CMOS NAND gates using 50 nm gate length carbon nanotube transistors.

IBM also has demonstrated the capability for purifying carbon nanotubes to 99.99 percent, the highest (verified) purities demonstrated to date, and transistors at 10 nm channel length that show no degradation due to scaling–this is unmatched by any other material system to date.

Carbon nanotubes are single atomic sheets of carbon rolled up into a tube. The carbon nanotubes form the core of a transistor device that will work in a fashion similar to the current silicon transistor, but will be better performing. They could be used to replace the transistors in chips that power data-crunching servers, high performing computers and ultra fast smart phones.

Carbon nanotube transistors can operate as excellent switches at molecular dimensions of less than ten nanometers – the equivalent to 10,000 times thinner than a strand of human hair and less than half the size of the leading silicon technology. Comprehensive modeling of the electronic circuits suggests that about a five to ten times improvement in performance compared to silicon circuits is possible.
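
Those size comparisons check out with back-of-the-envelope arithmetic; the hair diameter in the snippet below is an assumed ballpark figure (roughly 100 micrometres), not a quoted measurement.

```python
# Rough dimensional check of the comparison in the release.
hair_diameter_m = 100e-6   # assumed typical human hair, ~100 micrometres
cnt_channel_m = 10e-9      # "less than ten nanometers"
print(hair_diameter_m / cnt_channel_m)   # -> 10000.0, i.e. about 10,000x thinner
```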


Graphene

Graphene is pure carbon in the form of a one atomic layer thick sheet. It is an excellent conductor of heat and electricity, and it is also remarkably strong and flexible. Electrons can move in graphene about ten times faster than in commonly used semiconductor materials such as silicon and silicon germanium. Its characteristics offer the possibility to build faster switching transistors than are possible with conventional semiconductors, particularly for applications in the handheld wireless communications business where it will be a more efficient switch than those currently used.

Recently in 2013, IBM demonstrated the world’s first graphene based integrated circuit receiver front end for wireless communications. The circuit consisted of a 2-stage amplifier and a down converter operating at 4.3 GHz.

Next Generation Low Power Transistors

In addition to new materials like CNTs, new architectures and innovative device concepts are required to boost future system performance. Power dissipation is a fundamental challenge for nanoelectronic circuits. To explain the challenge, consider a leaky water faucet — even after closing the valve as far as possible water continues to drip — this is similar to today’s transistor, in that energy is constantly “leaking” or being lost or wasted in the off-state.

A potential alternative to today’s power hungry silicon field effect transistors are so-called steep slope devices. They could operate at much lower voltage and thus dissipate significantly less power. IBM scientists are researching tunnel field effect transistors (TFETs). In this special type of transistors the quantum-mechanical effect of band-to-band tunneling is used to drive the current flow through the transistor. TFETs could achieve a 100-fold power reduction over complementary CMOS transistors, so integrating TFETs with CMOS technology could improve low-power integrated circuits.

Recently, IBM has developed a novel method to integrate III-V nanowires and heterostructures directly on standard silicon substrates and built the first ever InAs/Si tunnel diodes and TFETs using InAs as source and Si as channel with wrap-around gate as steep slope device for low power consumption applications.

“In the next ten years computing hardware systems will be fundamentally different as our scientists and engineers push the limits of semiconductor innovations to explore the post-silicon future,” said Tom Rosamilia, senior vice president, IBM Systems and Technology Group. “IBM Research and Development teams are creating breakthrough innovations that will fuel the next era of computing systems.”

IBM’s historic contributions to silicon and semiconductor innovation include the invention and/or first implementation of: the single cell DRAM, the “Dennard scaling laws” underpinning “Moore’s Law”, chemically amplified photoresists, copper interconnect wiring, Silicon on Insulator, strained engineering, multi core microprocessors, immersion lithography, high speed silicon germanium (SiGe), High-k gate dielectrics, embedded DRAM, 3D chip stacking, and Air gap insulators.

IBM researchers also are credited with initiating the era of nano devices following the Nobel prize winning invention of the scanning tunneling microscope which enabled nano and atomic scale invention and innovation.

IBM will also continue to fund and collaborate with university researchers to explore and develop the future technologies for the semiconductor industry. In particular, IBM will continue to support and fund university research through private-public partnerships such as the NanoElectronics Research Initiative (NRI), the Semiconductor Technology Advanced Research Network (STARnet), and the Global Research Consortium (GRC) of the Semiconductor Research Corporation.

I highlighted ‘memory systems’ as this brings to mind HP Labs and their major investment in ‘memristive’ technologies noted in my June 26, 2014 posting,

… During a two-hour presentation held a year and a half ago, they laid out how the computer might work, its benefits, and the expectation that about 75 percent of HP Labs personnel would be dedicated to this one project. “At the end, Meg [Meg Whitman, HP’s CEO] turned to [Chief Financial Officer] Cathie Lesjak and said, ‘Find them more money,’” says John Sontag, the vice president of systems research at HP, who attended the meeting and is in charge of bringing the Machine to life. “People in Labs see this as a once-in-a-lifetime opportunity.”

The Machine is based on the memristor and other associated technologies.

Getting back to IBM, there’s this analysis of the $3B investment ($600M/year for five years) by Alex Konrad in a July 10, 2014 article for Forbes (Note: A link has been removed),

When IBM … announced a $3 billion commitment to even tinier semiconductor chips that no longer depended on silicon on Wednesday, the big news was that IBM’s putting a lot of money into a future for chips where Moore’s Law no longer applies. But on second glance, the move to spend billions on more experimental ideas like silicon photonics and carbon nanotubes shows that IBM’s finally shifting large portions of its research budget into more ambitious and long-term ideas.

… IBM tells Forbes the $3 billion isn’t additional money being added to its R&D spend, an area where analysts have told Forbes they’d like to see more aggressive cash commitments in the future. IBM will still spend about $6 billion a year on R&D, 6% of revenue. Ten percent of that research budget, however, now has to come from somewhere else to fuel these more ambitious chip projects.

Neal Ungerleider’s July 11, 2014 article for Fast Company focuses on the neuromorphic computing and quantum computing aspects of this $3B initiative (Note: Links have been removed),

The new R&D initiatives fall into two categories: Developing nanotech components for silicon chips for big data and cloud systems, and experimentation with “post-silicon” microchips. This will include research into quantum computers which don’t know binary code, neurosynaptic computers which mimic the behavior of living brains, carbon nanotubes, graphene tools and a variety of other technologies.

IBM’s investment is one of the largest for quantum computing to date; the company is one of the biggest researchers in the field, along with a Canadian company named D-Wave which is partnering with Google and NASA to develop quantum computer systems.

The curious can find D-Wave Systems here. There’s also a January 19, 2012 posting here which discusses D-Wave’s situation at that time.

Final observation, these are fascinating developments especially for the insight they provide into the worries troubling HP Labs, Intel, and IBM as they jockey for position.

ETA July 14, 2014: Dexter Johnson has a July 11, 2014 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) about the IBM announcement, which features some responses he received from IBM officials to his queries,

While this may be a matter of fascinating speculation for investors, the impact on nanotechnology development  is going to be significant. To get a better sense of what it all means, I was able to talk to some of the key figures of IBM’s push in nanotechnology research.

I conducted e-mail interviews with Tze-Chiang (T.C.) Chen, vice president science & technology, IBM Fellow at the Thomas J. Watson Research Center and Wilfried Haensch, senior manager, physics and materials for logic and communications, IBM Research.

Silicon versus Nanomaterials

First, I wanted to get a sense for how long IBM envisioned sticking with silicon and when they expected the company would permanently make the move away from CMOS to alternative nanomaterials. Unfortunately, as expected, I didn’t get solid answers, except for them to say that new manufacturing tools and techniques need to be developed now.

He goes on to ask about carbon nanotubes and graphene. Interestingly, IBM does not have a wide range of electronics applications in mind for graphene.  I encourage you to read Dexter’s posting as Dexter got answers to some very astute and pointed questions.

Intel to produce Panasonic SoCs (system-on-chips) using 14nm low-power process

A July 8, 2014 news item on Azonano describes a manufacturing agreement between Intel and Panasonic,

Intel Corporation today announced that it has entered into a manufacturing agreement with Panasonic Corporation’s System LSI Business Division. Intel’s custom foundry business will manufacture future Panasonic system-on-chips (SoCs) using Intel’s 14nm low-power manufacturing process.

Panasonic’s next-generation SoCs will target audio visual-based equipment markets, and will enable higher levels of performance, power and viewing experience for consumers.

A July 7, 2014 Intel press release, which originated the news item, reveals more details,

“Intel’s 14nm Tri-Gate process technology is very important to develop the next- generation SoCs,” said Yoshifumi Okamoto, director, Panasonic Corporation SLSI Business Division. “We will deliver highly improved performance and power advantages with next-generation SoCs by leveraging Intel’s 14nm Tri-Gate process technology through our collaboration.”

Intel’s leading-edge 14nm low-power process technology, which includes the second generation of Tri-Gate transistors, is optimized for low-power applications. This will enable Panasonic’s SoCs to achieve high levels of performance and functionality at lower power levels than was possible with planar transistors.

“We look forward to collaborating with the Panasonic SLSI Business Division,” said Sunit Rikhi, vice president and general manager, Intel Custom Foundry. “We will work hard to deliver the value of power-efficient performance of our 14nm LP process to Panasonic’s next-generation SoCs. This agreement with Panasonic is an important step in the buildup of Intel’s foundry business.”

Five other semiconductor companies have announced agreements with Intel’s custom foundry business, including Altera, Achronix Semiconductor, Tabula, Netronome and Microsemi.

Rick Merritt in a July 7, 2014 article for EE Times provides some insight,

“We are doing extremely well getting customers who can use our technology,” Sunit Rikhi, general manager of Intel’s foundry group, said in a talk at Semicon West, though he would not provide details. …

He suggested that the low-power variant of Intel’s 14nm process is relatively new. Intel uses a general-purpose 22nm process but supports multiple flavors of its 32nm process.

Intel expects to make 10nm chips without extreme ultraviolet (EUV) lithography, he said, reiterating comments from Intel’s Mark Bohr. …

This news provides an update of sorts to my October 21, 2010 posting,

Paul Otellini, Chief Executive Officer of Intel, just announced that the company will invest $6B to $8B for new and upgraded manufacturing facilities to produce 22 nanometre (nm) computer chips.

Now, almost four years later, they’re talking about 10 nm chips. I wonder what 2018 will bring?

Memristor, memristor! What is happening? News from the University of Michigan and HP Laboratories

Professor Wei Lu (whose work on memristors has been mentioned here a few times [an April 15, 2010 posting and an April 19, 2012 posting]) has made a discovery about memristors with significant implications (from a June 25, 2014 news item on Azonano),

In work that unmasks some of the magic behind memristors and “resistive random access memory,” or RRAM—cutting-edge computer components that combine logic and memory functions—researchers have shown that the metal particles in memristors don’t stay put as previously thought.

The findings have broad implications for the semiconductor industry and beyond. They show, for the first time, exactly how some memristors remember.

A June 24, 2014 University of Michigan news release, which originated the news item, includes Lu’s perspective on this discovery and more details about it,

“Most people have thought you can’t move metal particles in a solid material,” said Wei Lu, associate professor of electrical and computer engineering at the University of Michigan. “In a liquid and gas, it’s mobile and people understand that, but in a solid we don’t expect this behavior. This is the first time it has been shown.”

Lu, who led the project, and colleagues at U-M and the Electronic Research Centre Jülich in Germany used transmission electron microscopes to watch and record what happens to the atoms in the metal layer of their memristor when they exposed it to an electric field. The metal layer was encased in the dielectric material silicon dioxide, which is commonly used in the semiconductor industry to help route electricity.

They observed the metal atoms becoming charged ions, clustering with up to thousands of others into metal nanoparticles, and then migrating and forming a bridge between the electrodes at the opposite ends of the dielectric material.

They demonstrated this process with several metals, including silver and platinum. And depending on the materials involved and the electric current, the bridge formed in different ways.

The bridge, also called a conducting filament, stays put after the electrical power is turned off in the device. So when researchers turn the power back on, the bridge is there as a smooth pathway for current to travel along. Further, the electric field can be used to change the shape and size of the filament, or break the filament altogether, which in turn regulates the resistance of the device, or how easy current can flow through it.

Computers built with memristors would encode information in these different resistance values, which is in turn based on a different arrangement of conducting filaments.

Memristor researchers like Lu and his colleagues had theorized that the metal atoms in memristors moved, but previous results had yielded different shaped filaments and so they thought they hadn’t nailed down the underlying process.

“We succeeded in resolving the puzzle of apparently contradicting observations and in offering a predictive model accounting for materials and conditions,” said Ilia Valov, principal investigator at the Electronic Materials Research Centre Jülich. “Also the fact that we observed particle movement driven by electrochemical forces within dielectric matrix is in itself a sensation.”

The implications for this work (from the news release),

The results could lead to a new approach to chip design—one that involves using fine-tuned electrical signals to lay out integrated circuits after they’re fabricated. And it could also advance memristor technology, which promises smaller, faster, cheaper chips and computers inspired by biological brains in that they could perform many tasks at the same time.

As is becoming more common these days (from the news release),

Lu is a co-founder of Crossbar Inc., a Santa Clara, Calif.-based startup working to commercialize RRAM. Crossbar has just completed a $25 million Series C funding round.

Here’s a link to and a citation for the paper,

Electrochemical dynamics of nanoscale metallic inclusions in dielectrics by Yuchao Yang, Peng Gao, Linze Li, Xiaoqing Pan, Stefan Tappertzhofen, ShinHyun Choi, Rainer Waser, Ilia Valov, & Wei D. Lu. Nature Communications 5, Article number: 4232 doi:10.1038/ncomms5232 Published 23 June 2014

This paper is behind a paywall.

The other party instrumental in the development and, they hope, the commercialization of memristors is HP (Hewlett Packard) Laboratories (HP Labs). Anyone familiar with this blog will likely know I have frequently covered the topic starting with an essay explaining the basics on my Nanotech Mysteries wiki (or you can check this more extensive and more recently updated entry on Wikipedia) and with subsequent entries here over the years. The most recent entry is a Jan. 9, 2014 posting which featured the then latest information on the HP Labs memristor situation (scroll down about 50% of the way). This new information is more in the nature of a new revelation of details rather than an update on its status. Sebastian Anthony’s June 11, 2014 article for extremetech.com lays out the situation plainly (Note: Links have been removed),

HP, one of the original 800lb Silicon Valley gorillas that has seen much happier days, is staking everything on a brand new computer architecture that it calls… The Machine. Judging by an early report from Bloomberg Businessweek, up to 75% of HP’s once fairly illustrious R&D division — HP Labs – are working on The Machine. As you would expect, details of what will actually make The Machine a unique proposition are hard to come by, but it sounds like HP’s groundbreaking work on memristors (pictured top) and silicon photonics will play a key role.

First things first, we’re probably not talking about a consumer computing architecture here, though it’s possible that technologies commercialized by The Machine will percolate down to desktops and laptops. Basically, HP used to be a huge player in the workstation and server markets, with its own operating system and hardware architecture, much like Sun. Over the last 10 years though, Intel’s x86 architecture has rapidly taken over, to the point where HP (and Dell and IBM) are essentially just OEM resellers of commodity x86 servers. This has driven down enterprise profit margins — and when combined with its huge stake in the diminishing PC market, you can see why HP is rather nervous about the future. The Machine, and IBM’s OpenPower initiative, are both attempts to get out from underneath Intel’s x86 monopoly.

While exact details are hard to come by, it seems The Machine is predicated on the idea that current RAM, storage, and interconnect technology can’t keep up with modern Big Data processing requirements. HP is working on two technologies that could solve both problems: Memristors could replace RAM and long-term flash storage, and silicon photonics could provide faster on- and off-motherboard buses. Memristors essentially combine the benefits of DRAM and flash storage in a single, hyper-fast, super-dense package. Silicon photonics is all about reducing optical transmission and reception to a scale that can be integrated into silicon chips (moving from electrical to optical would allow for much higher data rates and lower power consumption). Both technologies can be built using conventional fabrication techniques.

In a June 11, 2014 article by Ashlee Vance for Bloomberg Businessweek, the company’s CTO (Chief Technology Officer), Martin Fink, provides new details,

That’s what they’re calling it at HP Labs: “the Machine.” It’s basically a brand-new type of computer architecture that HP’s engineers say will serve as a replacement for today’s designs, with a new operating system, a different type of memory, and superfast data transfer. The company says it will bring the Machine to market within the next few years or fall on its face trying. “We think we have no choice,” says Martin Fink, the chief technology officer and head of HP Labs, who is expected to unveil HP’s plans at a conference Wednesday [June 11, 2014].

In my Jan. 9, 2014 posting there’s a quote from Martin Fink stating that 2018 would be the earliest date for the company’s StoreServ arrays to be packed with 100TB Memristor drives (the Machine?). The company later clarified the comment by noting that it’s very difficult to set dates for new technology arrivals.

Vance shares what could be a stirring ‘origins’ story of sorts, provided the Machine is successful,

The Machine started to take shape two years ago, after Fink was named director of HP Labs. Assessing the company’s projects, he says, made it clear that HP was developing the needed components to create a better computing system. Among its research projects: a new form of memory known as memristors; and silicon photonics, the transfer of data inside a computer using light instead of copper wires. And its researchers have worked on operating systems including Windows, Linux, HP-UX, Tru64, and NonStop.

Fink and his colleagues decided to pitch HP Chief Executive Officer Meg Whitman on the idea of assembling all this technology to form the Machine. During a two-hour presentation held a year and a half ago, they laid out how the computer might work, its benefits, and the expectation that about 75 percent of HP Labs personnel would be dedicated to this one project. “At the end, Meg turned to [Chief Financial Officer] Cathie Lesjak and said, ‘Find them more money,’” says John Sontag, the vice president of systems research at HP, who attended the meeting and is in charge of bringing the Machine to life. “People in Labs see this as a once-in-a-lifetime opportunity.”

Here is the memristor making an appearance in Vance’s article,

HP’s bet is the memristor, a nanoscale chip that Labs researchers must build and handle in full anticontamination clean-room suits. At the simplest level, the memristor consists of a grid of wires with a stack of thin layers of materials such as tantalum oxide at each intersection. When a current is applied to the wires, the materials’ resistance is altered, and this state can hold after the current is removed. At that point, the device is essentially remembering 1s or 0s depending on which state it is in, multiplying its storage capacity. HP can build these chips with traditional semiconductor equipment and expects to be able to pack unprecedented amounts of memory—enough to store huge databases of pictures, files, and data—into a computer.
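
To see how a grid of resistance states maps onto stored bits, here is a minimal Python sketch of writing to and reading from a toy crossbar. The resistance values, read voltage and threshold are invented for illustration, and real arrays have to cope with sneak currents, device variability and drift that this sketch ignores.

```python
# Toy crossbar: each row/column intersection holds a resistance, and the
# resistance encodes the stored bit. All values are assumed for illustration.

R_ON, R_OFF = 1e3, 1e6        # assumed low/high resistance states (ohms)
V_READ = 0.2                  # assumed small read voltage (volts)

# a 3x3 array of bits to store
bits = [[1, 0, 1],
        [0, 1, 1],
        [1, 1, 0]]

# writing: map each bit to a resistance at the corresponding crosspoint
crossbar = [[R_ON if b else R_OFF for b in row] for row in bits]

def read(row, col, threshold=1e-4):
    """Sense one crosspoint: a large read current means a stored 1."""
    current = V_READ / crossbar[row][col]   # Ohm's law, ignoring sneak paths
    return 1 if current > threshold else 0

recovered = [[read(r, c) for c in range(3)] for r in range(3)]
print(recovered == bits)   # True: the resistance pattern reproduces the stored bits
```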

New memory and networking technology requires a new operating system. Most applications written in the past 50 years have been taught to wait for data, assuming that the memory systems feeding the main computers chips are slow. Fink has assigned one team to develop the open-source Machine OS, which will assume the availability of a high-speed, constant memory store. …

Peter Bright in his June 11, 2014 article for Ars Technica opens his article with a controversial statement (Note: Links have been removed),

In 2008, scientists at HP invented a fourth fundamental component to join the resistor, capacitor, and inductor: the memristor. [emphasis mine] Theorized back in 1971, memristors showed promise in computing as they can be used to both build logic gates, the building blocks of processors, and also act as long-term storage.

Whether or not the memristor is a fourth fundamental component has been a matter of some debate as you can see in this Memristor entry (section on Memristor definition and criticism) on Wikipedia.

Bright goes on to provide a 2016 delivery date for some type of memristor-based product and additional technical insight about the Machine,

… By 2016, the company plans to have memristor-based DIMMs, which will combine the high storage densities of hard disks with the high performance of traditional DRAM.

John Sontag, vice president of HP Systems Research, said that The Machine would use “electrons for processing, photons for communication, and ions for storage.” The electrons are found in conventional silicon processors, and the ions are found in the memristors. The photons are because the company wants to use optical interconnects in the system, built using silicon photonics technology. With silicon photonics, photons are generated on, and travel through, “circuits” etched onto silicon chips, enabling conventional chip manufacturing to construct optical parts. This allows the parts of the system using photons to be tightly integrated with the parts using electrons.

The memristor story has proved to be even more fascinating than I thought back in 2008, and I was already as fascinated as could be, or so I thought.

Canon-Molecular Imprints deal and its impact on shrinking chips (integrated circuits)

There’s quite an interesting April 20, 2014 essay on Nanotechnology Now which provides some insight into the nanoimprinting market. I recommend reading it, but for anyone who is not intimately familiar with the scene, here are a few excerpts along with my attempts to decode this insider’s view (the writer is from Martini Tech),

About two months ago, important news shook the small but lively Japanese nanoimprint community: Canon has decided to acquire, making it a wholly-owned subsidiary, Texas-based Molecular Imprints, a strong player in the nanotechnology industry and one of the main makers of nanoimprint devices such as the Imprio 450 and other models.

So, Canon, a Japanese company, has made a move into the nanoimprinting sector by purchasing Molecular Imprints, a US company based in Texas, outright.

This next part concerns the expected end of Moore’s Law (i.e., the observation that roughly every 18 months to two years transistor density doubles, making computer chips smaller and faster) and explains why the major chip makers are searching for new solutions, as per the fifth paragraph in this excerpt,

Molecular Imprints’ devices are aimed at the IC [integrated circuits, aka chips, I think] patterning market and not just at the relatively smaller applications market to which nanoimprint is usually confined: patterning of bio culture substrates, thin film applications for the solar industry, anti-reflection films for smartphone and LED TV screens, patterning of surfaces for microfluidics among others.

While each one of the markets listed above has the potential of explosive growth in the medium-long term future, at the moment none of them is worth more than a few percentage points, at best, of the IC patterning market.

The mainstream technology behind IC patterning is still optical stepper lithography and the situation is not likely to change in the near term future.

However, optical lithography has its limitations, with the main challenge to its 40-year dominance coming not only from technological and engineering issues, but mostly from economic ones.

While from a strictly technological point of view it may still be possible for the major players in the chip industry (Intel, GF, TSMC, Nvidia among others) to go ahead with optical steppers and reach the 5nm node using multi-patterning and immersion, the cost increases associated with each die shrink are becoming staggeringly high.

A top-of-the-line stepper in the early 90s could be bought for a few million dollars; now the price has increased to some tens of millions for the top machines.

The essay describes the market impact this acquisition may have for Canon,

Molecular Imprints has been a company at the forefront of commercializing nanoimprint-based solutions for IC manufacturing, but so far their solutions have yet to become a viable alternative in the HVM [high-volume manufacturing] IC market.

The main stumbling blocks for IC patterning using nanoimprint technology are: the occurrence of defects on the mask, which are inevitably replicated on each substrate, and the lack of alignment precision between the mold and the substrate needed to pattern multi-layered structures.

Therefore, applications for nanoimprint have been limited to markets where only periodic structures need to be patterned and where single-layer patterning is sufficient.

But the big market everyone is aiming for is, of course, IC patterning, and this is where much of the R&D effort goes.

While logic patterning with nanoimprint may still be years away, simple patterning of NAND structures may be feasible in the near future, and the purchase of Molecular Imprints by Canon is a step in this direction.

Patterning of NAND structures may still require multi-layered structures, but the alignment precision needed is considerably lower than for logic.

Moreover, NAND requirements for defectivity are more relaxed than for logic due to the inherent redundancy of the design; therefore, NAND manufacturing is the natural first step for nanoimprint in the IC manufacturing market and, if successful, it may open a whole new range of opportunities for the whole sector.

Assuming I’ve read the rest of this essay correctly, here’s my summary: a number of techniques are being employed to make chips smaller and more efficient. Canon has purchased a company versed in nanoimprint lithography, a technique that can create NAND (you can find definitions here) structures, in the hope that the technique can be commercialized so that Canon becomes dominant in the sector because (1) they got there first and/or because (2) NAND manufacturing becomes a clear leader, crushing competition from other technologies. This could serve short-term goals and, I imagine Canon hopes, long-term goals.

It was a real treat coming across this essay as it’s an insider’s view. So, thank you to the folks at Martini Tech who wrote this. You can find Molecular Imprints here.

Extending memristive theory

This is kind of fascinating. A German research team based at JARA (Jülich Aachen Research Alliance) is suggesting that memristive theory be extended beyond passive components in their paper about resistive memory cells (ReRAM), which was recently published in Nature Communications. From the Apr. 26, 2013 news item on Azonano,

Resistive memory cells (ReRAM) are regarded as a promising solution for future generations of computer memories. They will dramatically reduce the energy consumption of modern IT systems while significantly increasing their performance.

Unlike the building blocks of conventional hard disk drives and memories, these novel memory cells are not purely passive components but must be regarded as tiny batteries. This has been demonstrated by researchers of Jülich Aachen Research Alliance (JARA), whose findings have now been published in the prestigious journal Nature Communications. The new finding radically revises the current theory and opens up possibilities for further applications. The research group has already filed a patent application for their first idea on how to improve data readout with the aid of battery voltage.

The Apr. 23, 2013 JARA news release, which originated the news item, provides some background information about data memory before going on to discuss the ReRAMs,

Conventional data memory works on the basis of electrons that are moved around and stored. However, even by atomic standards, electrons are extremely small. It is very difficult to control them, for example by means of relatively thick insulator walls, so that information will not be lost over time. This does not only limit storage density, it also costs a great deal of energy. For this reason, researchers are working feverishly all over the world on nanoelectronic components that make use of ions, i.e. charged atoms, for storing data. Ions are some thousands of times heavier than electrons and are therefore much easier to ‘hold down’. In this way, the individual storage elements can almost be reduced to atomic dimensions, which enormously improves the storage density.
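As a quick sanity check on that ‘thousands of times heavier’ figure (my own arithmetic, not the news release’s): the proton-to-electron mass ratio is about 1836, so

\[
m_{\mathrm{O}} \approx 16 \times 1836\, m_e \approx 2.9 \times 10^{4}\, m_e,
\qquad
m_{\mathrm{Ag}} \approx 108 \times 1836\, m_e \approx 2.0 \times 10^{5}\, m_e .
\]

In other words, even a light oxygen ion outweighs an electron by tens of thousands of times, and the silver ions mentioned below by roughly two hundred thousand, which is why they are so much easier to ‘hold down’.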

Here’s how the ions behave in ReRAMs (from the news release),

In resistive switching memory cells (ReRAMs), ions behave on the nanometre scale in a similar manner to a battery. The cells have two electrodes, for example made of silver and platinum, at which the ions dissolve and then precipitate again. This changes the electrical resistance, which can be exploited for data storage. Furthermore, the reduction and oxidation processes also have another effect. They generate electric voltage. ReRAM cells are therefore not purely passive systems – they are also active electrochemical components. Consequently, they can be regarded as tiny batteries whose properties provide the key to the correct modelling and development of future data storage.

In complex experiments, the scientists from Forschungszentrum Jülich and RWTH Aachen University determined the battery voltage of typical representatives of ReRAM cells and compared them with theoretical values. This comparison revealed other properties (such as ionic resistance) that were previously neither known nor accessible. “Looking back, the presence of a battery voltage in ReRAMs is self-evident. But during the nine-month review process of the paper now published we had to do a lot of persuading, since the battery voltage in ReRAM cells can have three different basic causes, and the assignment of the correct cause is anything but trivial,” says Dr. Ilia Valov, the electrochemist in Prof. Rainer Waser’s research group.
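To picture what a built-in battery voltage means in practice, here is a toy illustration of my own in Python (assumed numbers, not the JARA group’s measurements or model): a cell treated as a small electromotive force in series with its state-dependent resistance reads a nonzero voltage even when no current flows, which a purely passive memristor cannot do.

def terminal_voltage(current_amps, resistance_ohms, emf_volts):
    # Terminal voltage of a cell modelled as an emf in series with a resistance
    return emf_volts + resistance_ohms * current_amps

# Open-circuit readings (zero current), with illustrative values only
ideal_memristor = terminal_voltage(0.0, resistance_ohms=5e3, emf_volts=0.0)
reram_nanobattery = terminal_voltage(0.0, resistance_ohms=5e3, emf_volts=0.03)

print(ideal_memristor)    # 0.0  volts: a passive memristor shows no voltage without current
print(reram_nanobattery)  # 0.03 volts: the cell's own redox chemistry sustains a small emf

That extra term is precisely what the group hopes to exploit; as the news release notes above, they have already filed a patent application on using the battery voltage to improve data readout.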

This discovery could lead to optimizing ReRAMs and exploiting them in new applications (from the news release),

“The new findings will help to solve a central puzzle of international ReRAM research,” says Prof. Rainer Waser, deputy spokesman of the collaborative research centre SFB 917 ‘Nanoswitches’ established in 2011. In recent years, these puzzling aspects include unexplained long-term drift phenomena or systematic parameter deviations, which had been attributed to fabrication methods. “In the light of this new knowledge, it is possible to specifically optimize the design of the ReRAM cells, and it may be possible to discover new ways of exploiting the cells’ battery voltage for completely new applications, which were previously beyond the reach of technical possibilities,” adds Waser, whose group has been collaborating for years with companies such as Intel and Samsung Electronics in the field of ReRAM elements.

The part I found most interesting, given my interest in memristors, is this bit about extending the memristor theory, from the news release,

The new finding is of central significance, in particular, for the theoretical description of the memory components. To date, ReRAM cells have been described with the aid of the concept of memristors – a portmanteau word composed of “memory” and “resistor”. The theoretical concept of memristors can be traced back to Leon Chua in the 1970s. It was first applied to ReRAM cells by the IT company Hewlett-Packard in 2008. It aims at the permanent storage of information by changing the electrical resistance. The memristor theory leads to an important restriction. It is limited to passive components. “The demonstrated internal battery voltage of ReRAM elements clearly violates the mathematical construct of the memristor theory. This theory must be expanded to a whole new theory – to properly describe the ReRAM elements,” says Dr. Eike Linn, the specialist for circuit concepts in the group of authors. [emphases mine] This also places the development of all micro- and nanoelectronic chips on a completely new footing.
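For anyone who wants to see where the violation comes from, here is the restriction stated schematically (my paraphrase in LaTeX, not the authors’ exact formalism). In memristive-system theory the voltage is always the product of the current and a state-dependent resistance,

\[
v(t) = R\big(x(t), i(t)\big)\, i(t),
\qquad
\frac{dx}{dt} = f\big(x(t), i(t)\big),
\]

so \( v \) must vanish whenever \( i \) does; the current–voltage loop is pinched at the origin and the device is passive. A measurable open-circuit battery voltage instead means

\[
v\big|_{i=0} = V_{\mathrm{emf}}(x) \neq 0 ,
\]

which is why the authors argue the framework has to be extended, roughly to something of the form \( v = R(x)\, i + V_{\mathrm{emf}}(x) \).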

Here’s a link to and a citation for the paper,

Nanobatteries in redox-based resistive switches require extension of memristor theory by I. Valov, E. Linn, S. Tappertzhofen, S. Schmelzer, J. van den Hurk, F. Lentz, & R. Waser. Nature Communications 4, Article number: 1771. doi:10.1038/ncomms2784 Published 23 April 2013

This paper is open access (as of this writing).

Here’s a list of my 2013 postings on memristors and memristive devices,

2.5M Euros for Ireland’s John Boland and his memristive nanowires (Apr. 4, 2013 posting)

How to use a memristor to create an artificial brain (Feb. 26, 2013 posting)

CeNSE (Central Nervous System of the Earth) and billions of tiny sensors from HP plus a memristor update (Feb. 7, 2013 posting)

For anyone who cares to search the blog, there are several more.