Category Archives: electronics

Carbyne stretches from theory to reality and reveals its conundrum-type self

Rice University (Texas, US) scientists have taken a rather difficult material, carbyne, and stretched it (computationally) to reveal new properties, according to a July 21, 2014 news item on ScienceDaily,

Applying just the right amount of tension to a chain of carbon atoms can turn it from a metallic conductor to an insulator, according to Rice University scientists.

Stretching the material known as carbyne — a hard-to-make, one-dimensional chain of carbon atoms — by just 3 percent can begin to change its properties in ways that engineers might find useful for mechanically activated nanoscale electronics and optics.

A July 21, 2014 Rice University news release (also on EurekAlert), which originated the news item, describes carbyne and some of the difficulties the scientists addressed in their research on the material,

Until recently, carbyne has existed mostly in theory, though experimentalists have made some headway in creating small samples of the finicky material. The carbon chain would theoretically be the strongest material ever, if only someone could make it reliably.

The first-principle calculations by Yakobson and his co-authors, Rice postdoctoral researcher Vasilii Artyukhov and graduate student Mingjie Liu, show that stretching carbon chains activates the transition from conductor to insulator by widening the material’s band gap. Band gaps, which free electrons must overcome to complete a circuit, give materials the semiconducting properties that make modern electronics possible.

In their previous work on carbyne, the researchers believed they saw hints of the transition, but they had to dig deeper to find that stretching would effectively turn the material into a switch.

Each carbon atom has four electrons available to form covalent bonds. In their relaxed state, the atoms in a carbyne chain would be more or less evenly spaced, with two bonds between them. But the atoms are never static, due to natural quantum uncertainty, which Yakobson said keeps them from slipping into a less-stable Peierls distortion.

“Peierls said one-dimensional metals are unstable and must become semiconductors or insulators,” Yakobson said. “But it’s not that simple, because there are two driving factors.”

One, the Peierls distortion, “wants to open the gap that makes it a semiconductor.” The other, called zero-point vibration (ZPV), “wants to maintain uniformity and the metal state.”

Yakobson explained that ZPV is a manifestation of quantum uncertainty, which says atoms are always in motion. “It’s more a blur than a vibration,” he said. “We can say carbyne represents the uncertainty principle in action, because when it’s relaxed, the bonds are constantly confused between 2-2 and 1-3, to the point where they average out and the chain remains metallic.”

But stretching the chain shifts the balance toward alternating long and short (1-3) bonds. That progressively opens a band gap beginning at about 3 percent tension, according to the computations. The Rice team created a phase diagram to illustrate the relationship of the band gap to strain and temperature.
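For readers who want to see the mechanism rather than take it on faith, the textbook cartoon of Peierls dimerization (the Su-Schrieffer-Heeger model) shows how alternating bond strengths open a gap. The sketch below is my toy illustration, not the Rice team's first-principles calculation; the linear coupling between strain and dimerization is an assumption chosen purely to make the trend visible,

```python
# Toy Su-Schrieffer-Heeger (SSH) picture of a dimerizing 1-D chain.
# NOT the Rice team's first-principles calculation -- just the textbook
# mechanism: alternating hoppings t1, t2 open a band gap of 2*|t1 - t2|.
def ssh_gap(t1: float, t2: float) -> float:
    """Band gap of the two-band SSH chain, where E(k) = +/-|t1 + t2*exp(ik)|."""
    return 2.0 * abs(t1 - t2)

t0 = 1.0  # uniform hopping (arbitrary units); t1 == t2 means metallic, zero gap
for strain in (0.00, 0.01, 0.03, 0.05):
    delta = 2.0 * strain  # assumed linear strain-to-dimerization coupling
    t1, t2 = t0 * (1.0 + delta), t0 * (1.0 - delta)
    print(f"strain {strain:5.0%}: gap = {ssh_gap(t1, t2):.2f} x t0")
```

In the real material this dimerization competes with the zero-point vibrations discussed next, which is why the roughly 3 percent onset is the interesting number.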

How carbyne is attached to electrodes also matters, Artyukhov said. “Different bond connectivity patterns can affect the metallic/dielectric state balance and shift the transition point, potentially to where it may not be accessible anymore,” he said. “So one has to be extremely careful about making the contacts.”

“Carbyne’s structure is a conundrum,” he said. “Until this paper, everybody was convinced it was single-triple, with a long bond then a short bond, caused by Peierls instability.” He said the realization that quantum vibrations may quench Peierls, together with the team’s earlier finding that tension can increase the band gap and make carbyne more insulating, prompted the new study.

“Other researchers considered the role of ZPV in Peierls-active systems, even carbyne itself, before we did,” Artyukhov said. “However, in all previous studies only two possible answers were being considered: either ‘carbyne is semiconducting’ or ‘carbyne is metallic,’ and the conclusion, whichever one, was viewed as sort of a timeless mathematical truth, a static ‘ultimate verdict.’ What we realized here is that you can use tension to dynamically go from one regime to the other, which makes it useful on a completely different level.”

Yakobson noted the findings should encourage more research into the formation of stable carbyne chains and may apply equally to other one-dimensional chains subject to Peierls distortions, including conducting polymers and charge/spin density-wave materials.

According to the news release the research was funded by the U.S. Air Force Office of Scientific Research, the Office of Naval Research Multidisciplinary University Research Initiative, and the Robert Welch Foundation. (I can’t recall another instance of the air force and the navy funding the same research.) In any event, here’s a link to and a citation for the paper,

Mechanically Induced Metal–Insulator Transition in Carbyne by Vasilii I. Artyukhov, Mingjie Liu, and Boris I. Yakobson. Nano Lett., Article ASAP. DOI: 10.1021/nl5017317. Publication date (web): July 3, 2014.

Copyright © 2014 American Chemical Society

This paper is behind a paywall.

The researchers have provided an image to illustrate their work,

[downloaded from http://pubs.acs.org/doi/abs/10.1021/nl5017317]

I’m not sure what the bird is doing in the image but it caught my fancy. There is another less whimsical illustration (you can see it in the July 21, 2014 news item on ScienceDaily) and I believe the same caption can be used for the one I’ve chosen from the journal’s abstract page, “Carbyne chains of carbon atoms can be either metallic or semiconducting, according to first-principle calculations by scientists at Rice University. Stretching the chain dimerizes the atoms, opening a band gap between the pairs. Credit: Vasilii Artyukhov/Rice University.”

I last wrote about carbyne in an Oct. 9, 2013 posting where I noted that the material was unlikely to dethrone graphene as it didn’t appear to have properties useful in electronic applications. It seems the scientists have proved otherwise, at least in the laboratory.

Transmetalation, substituting one set of metal atoms for another set

Transmetalation bears a resemblance of sorts to transmutation. While the chemists from the University of Oregon aren’t turning lead into gold through an alchemical process, they are switching out individual metal atoms, aluminum for indium. From a July 21, 2014 news item on ScienceDaily,

The yield so far is small, but chemists at the University of Oregon have developed a low-energy, solution-based mineral substitution process to make a precursor to transparent thin films that could find use in electronics and alternative energy devices.

A paper describing the approach is highlighted on the cover of the July 21 [2014] issue of the journal Inorganic Chemistry, which draws the most citations of research in the inorganic and nuclear chemistry fields. [emphasis mine] The paper was chosen by the American Chemical Society journal as an ACS Editor’s Choice for its potential scientific and broad public interest when it initially published online.

One observation unrelated to the research: the competition amongst universities seems to be heating up. While journals often tout their impact factors, such boasts are usually made more discreetly than in what amounts to a citation-ranking claim in the second paragraph of the university news release, which originated the news item.

The July 21, 2014 University of Oregon news release (also on EurekAlert), describes the work in more detail,

The process described in the paper represents a new approach to transmetalation, in which individual atoms of one metal complex — a cluster in this case — are individually substituted in water. For this study, Maisha K. Kamunde-Devonish and Milton N. Jackson Jr., doctoral students in the Department of Chemistry and Biochemistry, replaced aluminum atoms with indium atoms.

The goal is to develop inorganic clusters as precursors that result in dense thin films with negligible defects, resulting in new functional materials and thin-film metal oxides. The latter would have wide application in a variety of electronic devices.

“Since the numbers of compounds that fit this bill is small, we are looking at transmetalation as a method for creating new precursors with new combinations of metals that would circumvent barriers to performance,” Kamunde-Devonish said.

Components in these devices now use deposition techniques that require a lot of energy in the form of pressure or temperature. Doing so in a more green way — reducing chemical waste during preparation — could reduce manufacturing costs and allow for larger-scale materials, she said.

“In essence,” said co-author Darren W. Johnson, a professor of chemistry, “we can prepare one type of nanoscale cluster compound, and then step-by-step substitute out the individual metal atoms to make new clusters that cannot be made by direct methods. The cluster we report in this paper serves as an excellent solution precursor to make very smooth thin films of amorphous aluminum indium oxide, a semiconductor material that can be used in transparent thin-film transistors.”

Transmetalation normally involves a reaction done in organic chemistry in which the substitution of metal ions generates new metal-carbon bonds for use in catalytic systems and to synthesize new metal complexes.

“This is a new way to use the process,” Kamunde-Devonish said. “Usually you take smaller building blocks and put them together to form a mix of your basic two or three metals. Instead of building a house from the ground up, we’re doing some remodeling. In everyday life that happens regularly, but in chemistry it doesn’t happen very often. We’ve been trying to make materials, compounds, anything that can be useful to improve the processes to make thin films that find application in a variety of electronic devices.”

The process, she added, could be turned into a toolbox that allows for precise substitutions to generate specifically desired properties. “Currently, we can only make small amounts,” she said, “but the fact that we can do this will allow us to get a fundamental understanding of how this process happens. The technology is possible already. It’s just a matter of determining if this type of material we’ve produced is the best for the process.”

Here’s a citation for and a link to the paper,

Transmetalation of Aqueous Inorganic Clusters: A Useful Route to the Synthesis of Heterometallic Aluminum and Indium Hydroxo–Aquo Clusters by Maisha K. Kamunde-Devonish, Milton N. Jackson, Jr., Zachary L. Mensinger, Lev N. Zakharov, and Darren W. Johnson. Inorg. Chem., 2014, 53 (14), pp 7101–7105. DOI: 10.1021/ic403121r. Publication date (web): April 18, 2014.

Copyright © 2014 American Chemical Society

This paper appears to be open access (I was able to view the HTML version when I clicked).

Better RRAM memory devices in the short term

Given my recent spate of posts about computing and the future of the chip (list to follow at the end of this post), this Rice University [Texas, US] research suggests that some improvements to current memory devices might be coming to the market in the near future. From a July 12, 2014 news item on Azonano,

Rice University’s breakthrough silicon oxide technology for high-density, next-generation computer memory is one step closer to mass production, thanks to a refinement that will allow manufacturers to fabricate devices at room temperature with conventional production methods.

A July 10, 2014 Rice University news release, which originated the news item, provides more detail,

Tour [James Tour, a Rice University chemist] and colleagues began work on their breakthrough RRAM technology more than five years ago. The basic concept behind resistive memory devices is the insertion of a dielectric material — one that won’t normally conduct electricity — between two wires. When a sufficiently high voltage is applied across the wires, a narrow conduction path can be formed through the dielectric material.

The presence or absence of these conduction pathways can be used to represent the binary 1s and 0s of digital data. Research with a number of dielectric materials over the past decade has shown that such conduction pathways can be formed, broken and reformed thousands of times, which means RRAM can be used as the basis of rewritable random-access memory.
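For the programmers in the audience, here is a minimal sketch of the set/reset cycle described above: a cell whose dielectric either holds a conductive filament (low resistance, a 1) or doesn’t (high resistance, a 0). The thresholds and resistance values are invented for illustration and are not taken from the Rice devices,

```python
# Minimal sketch of a filamentary RRAM cell: a conduction path through the
# dielectric is formed ("set") or ruptured ("reset") by voltage pulses.
# All numbers are illustrative, not measured values from the Rice devices.
class RRAMCell:
    R_ON, R_OFF = 1e3, 1e8      # filament present vs. absent (ohms), assumed
    V_SET, V_RESET = 2.0, -2.0  # hypothetical programming thresholds (volts)

    def __init__(self):
        self.filament = False   # cell starts with no conduction path

    def pulse(self, volts: float) -> None:
        """Apply a programming pulse across the two wires."""
        if volts >= self.V_SET:
            self.filament = True    # form the conduction path
        elif volts <= self.V_RESET:
            self.filament = False   # break the conduction path

    def read(self) -> int:
        """Low resistance reads as 1, high resistance as 0 (nonvolatile)."""
        return 1 if self.filament else 0

cell = RRAMCell()
cell.pulse(2.5)   # set
assert cell.read() == 1
cell.pulse(-2.5)  # reset; the state persists between pulses without power
assert cell.read() == 0
```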

RRAM is under development worldwide and expected to supplant flash memory technology in the marketplace within a few years because it is faster than flash and can pack far more information into less space. For example, manufacturers have announced plans for RRAM prototype chips that will be capable of storing about one terabyte of data on a device the size of a postage stamp — more than 50 times the data density of current flash memory technology.

The key ingredient of Rice’s RRAM is its dielectric component, silicon oxide. Silicon is the most abundant element on Earth and the basic ingredient in conventional microchips. Microelectronics fabrication technologies based on silicon are widespread and easily understood, but until the 2010 discovery of conductive filament pathways in silicon oxide in Tour’s lab, the material wasn’t considered an option for RRAM.

Since then, Tour’s team has raced to further develop its RRAM and even used it for exotic new devices like transparent flexible memory chips. At the same time, the researchers also conducted countless tests to compare the performance of silicon oxide memories with competing dielectric RRAM technologies.

“Our technology is the only one that satisfies every market requirement, both from a production and a performance standpoint, for nonvolatile memory,” Tour said. “It can be manufactured at room temperature, has an extremely low forming voltage, high on-off ratio, low power consumption, nine-bit capacity per cell, exceptional switching speeds and excellent cycling endurance.”

In the latest study, a team headed by lead author and Rice postdoctoral researcher Gunuk Wang showed that using a porous version of silicon oxide could dramatically improve Rice’s RRAM in several ways. First, the porous material reduced the forming voltage — the power needed to form conduction pathways — to less than two volts, a 13-fold improvement over the team’s previous best and a number that stacks up against competing RRAM technologies. In addition, the porous silicon oxide also allowed Tour’s team to eliminate the need for a “device edge structure.”

“That means we can take a sheet of porous silicon oxide and just drop down electrodes without having to fabricate edges,” Tour said. “When we made our initial announcement about silicon oxide in 2010, one of the first questions I got from industry was whether we could do this without fabricating edges. At the time we could not, but the change to porous silicon oxide finally allows us to do that.”

Wang said, “We also demonstrated that the porous silicon oxide material increased the endurance cycles more than 100 times as compared with previous nonporous silicon oxide memories. Finally, the porous silicon oxide material has a capacity of up to nine bits per cell, which is the highest number among oxide-based memories, and the multibit capacity is unaffected by high temperatures.”
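As a back-of-envelope aside, “nine bits per cell” means each cell must hold one of 2^9 = 512 distinguishable resistance levels rather than the two of a binary cell. The level spacing and sensing scheme are not described in the release, so the snippet below is arithmetic only,

```python
# What "nine bits per cell" means in storage terms (back-of-envelope only;
# the actual level assignment in the Rice devices is not described here).
import math

bits_per_cell = 9
levels = 2 ** bits_per_cell
print(f"{bits_per_cell} bits/cell -> {levels} distinguishable resistance levels")
assert bits_per_cell == int(math.log2(levels))

# Versus conventional single-bit cells, the same cell count stores 9x the data:
cells = 1_000_000
print(f"{cells} binary cells: {cells / 8 / 1024:.0f} KiB")
print(f"{cells} 9-bit cells:  {cells * bits_per_cell / 8 / 1024:.0f} KiB")
```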

Tour said the latest developments with porous silicon oxide — reduced forming voltage, elimination of need for edge fabrication, excellent endurance cycling and multi-bit capacity — are extremely appealing to memory companies.

“This is a major accomplishment, and we’ve already been approached by companies interested in licensing this new technology,” he said.

Here’s a link to and a citation for the paper,

Nanoporous Silicon Oxide Memory by Gunuk Wang, Yang Yang, Jae-Hwang Lee, Vera Abramova, Huilong Fei, Gedeng Ruan, Edwin L. Thomas, and James M. Tour. Nano Lett., Article ASAP. DOI: 10.1021/nl501803s. Publication date (web): July 3, 2014.

Copyright © 2014 American Chemical Society

This paper is behind a paywall.

As for my recent spate of posts on computers and chips, there’s a July 11, 2014 posting about IBM, a 7nm chip, and much more; a July 9, 2014 posting about Intel and its 14nm low-power chip processing and plans for a 10nm chip; and, finally, a June 26, 2014 posting about HP Labs and its plans for memristive-based computing and their project dubbed ‘The Machine’.

IBM weighs in with plans for a 7nm computer chip

On the heels of Intel’s announcement about a deal utilizing their 14nm low-power manufacturing process and speculations about a 10nm computer chip (my July 9, 2014 posting), IBM makes an announcement about a 7nm chip as per this July 10, 2014 news item on Azonano,

IBM today [July 10, 2014] announced it is investing $3 billion over the next 5 years in two broad research and early stage development programs to push the limits of chip technology needed to meet the emerging demands of cloud computing and Big Data systems. These investments will push IBM’s semiconductor innovations from today’s breakthroughs into the advanced technology leadership required for the future.

A very comprehensive July 10, 2014 news release lays out the company’s plans for this $3B investment ($600M per year for five years, roughly 10% of IBM’s annual research budget),

The first research program is aimed at so-called “7 nanometer and beyond” silicon technology that will address serious physical challenges that are threatening current semiconductor scaling techniques and will impede the ability to manufacture such chips. The second is focused on developing alternative technologies for post-silicon era chips using entirely different approaches, which IBM scientists and other experts say are required because of the physical limitations of silicon based semiconductors.

Cloud and big data applications are placing new challenges on systems, just as the underlying chip technology is facing numerous significant physical scaling limits.  Bandwidth to memory, high speed communication and device power consumption are becoming increasingly challenging and critical.

The teams will comprise IBM Research scientists and engineers from Albany and Yorktown, New York; Almaden, California; and Europe. In particular, IBM will be investing significantly in emerging areas of research that are already underway at IBM such as carbon nanoelectronics, silicon photonics, new memory technologies, and architectures that support quantum and cognitive computing. [emphasis mine]

These teams will focus on providing orders of magnitude improvement in system level performance and energy efficient computing. In addition, IBM will continue to invest in the nanosciences and quantum computing–two areas of fundamental science where IBM has remained a pioneer for over three decades.

7 nanometer technology and beyond

IBM Researchers and other semiconductor experts predict that while challenging, semiconductors show promise to scale from today’s 22 nanometers down to 14 and then 10 nanometers in the next several years. However, scaling to 7 nanometers and perhaps below by the end of the decade will require significant investment and innovation in semiconductor architectures, as well as the invention of new tools and techniques for manufacturing.

“The question is not if we will introduce 7 nanometer technology into manufacturing, but rather how, when, and at what cost?” said John Kelly, senior vice president, IBM Research. “IBM engineers and scientists, along with our partners, are well suited for this challenge and are already working on the materials science and device engineering required to meet the demands of the emerging system requirements for cloud, big data, and cognitive systems. This new investment will ensure that we produce the necessary innovations to meet these challenges.”

“Scaling to 7nm and below is a terrific challenge, calling for deep physics competencies in processing nano materials affinities and characteristics. IBM is one of a very few companies who has repeatedly demonstrated this level of science and engineering expertise,” said Richard Doherty, technology research director, The Envisioneering Group.

Bridge to a “Post-Silicon” Era

Silicon transistors, tiny switches that carry information on a chip, have been made smaller year after year, but they are approaching a point of physical limitation. Their increasingly small dimensions, now reaching the nanoscale, will prohibit any gains in performance due to the nature of silicon and the laws of physics. Within a few more generations, classical scaling and shrinkage will no longer yield the sizable benefits of lower power, lower cost and higher speed processors that the industry has become accustomed to.

With virtually all electronic equipment today built on complementary metal–oxide–semiconductor (CMOS) technology, there is an urgent need for new materials and circuit architecture designs compatible with this engineering process as the technology industry nears physical scalability limits of the silicon transistor.

Beyond 7 nanometers, the challenges dramatically increase, requiring a new kind of material to power systems of the future, and new computing platforms to solve problems that are unsolvable or difficult to solve today. Potential alternatives include new materials such as carbon nanotubes, and non-traditional computational approaches such as neuromorphic computing, cognitive computing, machine learning techniques, and the science behind quantum computing.

As the leader in advanced schemes that point beyond traditional silicon-based computing, IBM holds over 500 patents for technologies that will drive advancements at 7nm and beyond silicon — more than twice the nearest competitor. These continued investments will accelerate the invention and introduction into product development for IBM’s highly differentiated computing systems for cloud, and big data analytics.

Several exploratory research breakthroughs that could lead to major advancements in delivering dramatically smaller, faster and more powerful computer chips include quantum computing, neurosynaptic computing, silicon photonics, carbon nanotubes, III-V technologies, low power transistors and graphene:

Quantum Computing

The most basic piece of information that a typical computer understands is a bit. Much like a light that can be switched on or off, a bit can have only one of two values: “1” or “0.” Quantum bits, or qubits, however, can hold a value of “1,” “0,” or both values at the same time. Described as superposition, this special property of qubits enables quantum computers to weed through millions of solutions all at once, while desktop PCs would have to consider them one at a time.
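A small state-vector sketch makes the contrast concrete: a classical bit is one of two values, while a qubit carries a pair of complex amplitudes, so n qubits span 2^n amplitudes at once. This is generic quantum mechanics, not anything specific to IBM’s hardware,

```python
# A classical bit is 0 or 1; a qubit state is a pair of complex amplitudes
# (alpha, beta) with |alpha|^2 + |beta|^2 = 1.  Generic illustration only.
import numpy as np

zero = np.array([1, 0], dtype=complex)     # |0>
one = np.array([0, 1], dtype=complex)      # |1>
superposition = (zero + one) / np.sqrt(2)  # equal mix of |0> and |1>

# Measurement probabilities follow the Born rule:
print("P(0) =", abs(superposition[0]) ** 2)  # 0.5
print("P(1) =", abs(superposition[1]) ** 2)  # 0.5

# n qubits need 2**n amplitudes -- the source of the parallelism the release
# alludes to (here, 3 qubits -> 8 amplitudes evolving together):
three_qubits = np.kron(np.kron(superposition, superposition), superposition)
print(len(three_qubits), "amplitudes for 3 qubits")
```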

IBM is a world leader in superconducting qubit-based quantum computing science and is a pioneer in the field of experimental and theoretical quantum information, fields that are still in the category of fundamental science – but one that, in the long term, may allow the solution of problems that are today either impossible or impractical to solve using conventional machines. The team recently demonstrated the first experimental realization of parity check with three superconducting qubits, an essential building block for one type of quantum computer.

Neurosynaptic Computing

Bringing together nanoscience, neuroscience, and supercomputing, IBM and university partners have developed an end-to-end ecosystem including a novel non-von Neumann architecture, a new programming language, as well as applications. This novel technology allows for computing systems that emulate the brain’s computing efficiency, size and power usage. IBM’s long-term goal is to build a neurosynaptic system with ten billion neurons and a hundred trillion synapses, all while consuming only one kilowatt of power and occupying less than two liters of volume.
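Taking IBM’s stated target at face value, the arithmetic is striking: ten billion neurons, a hundred trillion synapses and one kilowatt work out to ten thousand synapses per neuron and roughly ten picowatts per synapse,

```python
# Back-of-envelope arithmetic on IBM's stated neurosynaptic target.
neurons = 10e9         # ten billion
synapses = 100e12      # one hundred trillion
power_watts = 1_000.0  # one kilowatt

print(f"synapses per neuron: {synapses / neurons:,.0f}")       # 10,000
print(f"power per synapse:   {power_watts / synapses:.0e} W")  # ~1e-11 W
print(f"power per neuron:    {power_watts / neurons:.0e} W")   # ~1e-07 W
```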

Silicon Photonics

IBM has been a pioneer in the area of CMOS integrated silicon photonics for over 12 years, a technology that integrates functions for optical communications on a silicon chip, and the IBM team has recently designed and fabricated the world’s first monolithic silicon photonics based transceiver with wavelength division multiplexing.  Such transceivers will use light to transmit data between different components in a computing system at high data rates, low cost, and in an energetically efficient manner.

Silicon nanophotonics takes advantage of pulses of light for communication rather than traditional copper wiring and provides a super highway for large volumes of data to move at rapid speeds between computer chips in servers, large datacenters, and supercomputers, thus alleviating the limitations of congested data traffic and high-cost traditional interconnects.

Businesses are entering a new era of computing that requires systems to process and analyze, in real-time, huge volumes of information known as Big Data. Silicon nanophotonics technology provides answers to Big Data challenges by seamlessly connecting various parts of large systems, whether a few centimeters or a few kilometers apart from each other, and moving terabytes of data via pulses of light through optical fibers.

III-V technologies

IBM researchers have demonstrated the world’s highest transconductance on a self-aligned III-V channel metal-oxide semiconductor (MOS) field-effect transistor (FET) device structure that is compatible with CMOS scaling. These materials and structural innovations are expected to pave the path for technology scaling at 7nm and beyond. With more than an order of magnitude higher electron mobility than silicon, integrating III-V materials into CMOS enables higher performance at lower power density, allowing for an extension of power/performance scaling to meet the demands of cloud computing and big data systems.

Carbon Nanotubes

IBM Researchers are working in the area of carbon nanotube (CNT) electronics and exploring whether CNTs can replace silicon beyond the 7 nm node.  As part of its activities for developing carbon nanotube based CMOS VLSI circuits, IBM recently demonstrated — for the first time in the world — 2-way CMOS NAND gates using 50 nm gate length carbon nanotube transistors.

IBM also has demonstrated the capability for purifying carbon nanotubes to 99.99 percent, the highest (verified) purities demonstrated to date, and transistors at 10 nm channel length that show no degradation due to scaling–this is unmatched by any other material system to date.

Carbon nanotubes are single atomic sheets of carbon rolled up into a tube. The carbon nanotubes form the core of a transistor device that will work in a fashion similar to the current silicon transistor, but will be better performing. They could be used to replace the transistors in chips that power data-crunching servers, high performing computers and ultra fast smart phones.

Carbon nanotube transistors can operate as excellent switches at molecular dimensions of less than ten nanometers – the equivalent to 10,000 times thinner than a strand of human hair and less than half the size of the leading silicon technology. Comprehensive modeling of the electronic circuits suggests that about a five to ten times improvement in performance compared to silicon circuits is possible.

Graphene

Graphene is pure carbon in the form of a one atomic layer thick sheet.  It is an excellent conductor of heat and electricity, and it is also remarkably strong and flexible.  Electrons can move in graphene about ten times faster than in commonly used semiconductor materials such as silicon and silicon germanium. Its characteristics offer the possibility to build faster switching transistors than are possible with conventional semiconductors, particularly for applications in the handheld wireless communications business where it will be a more efficient switch than those currently used.

Recently in 2013, IBM demonstrated the world’s first graphene based integrated circuit receiver front end for wireless communications. The circuit consisted of a 2-stage amplifier and a down converter operating at 4.3 GHz.

Next Generation Low Power Transistors

In addition to new materials like CNTs, new architectures and innovative device concepts are required to boost future system performance. Power dissipation is a fundamental challenge for nanoelectronic circuits. To explain the challenge, consider a leaky water faucet — even after closing the valve as far as possible water continues to drip — this is similar to today’s transistor, in that energy is constantly “leaking” or being lost or wasted in the off-state.

A potential alternative to today’s power-hungry silicon field effect transistors is so-called steep-slope devices. They could operate at much lower voltage and thus dissipate significantly less power. IBM scientists are researching tunnel field effect transistors (TFETs). In this special type of transistor, the quantum-mechanical effect of band-to-band tunneling is used to drive the current flow through the transistor. TFETs could achieve a 100-fold power reduction over complementary CMOS transistors, so integrating TFETs with CMOS technology could improve low-power integrated circuits.
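The release doesn’t quantify “steep slope,” but the relevant figure of merit is the subthreshold swing: a conventional FET cannot switch more steeply than (kT/q)·ln(10), about 60 mV per decade of current at room temperature, a thermal limit that band-to-band tunneling devices can beat. A quick check of that number,

```python
# The Boltzmann limit that motivates steep-slope devices: a conventional
# FET's subthreshold swing cannot beat (kT/q) * ln(10) volts per decade.
import math

k = 1.380649e-23  # Boltzmann constant, J/K
q = 1.602177e-19  # elementary charge, C
T = 300.0         # room temperature, K

swing_mv_per_decade = (k * T / q) * math.log(10) * 1e3
print(f"thermionic limit at {T:.0f} K: {swing_mv_per_decade:.1f} mV/decade")
# TFETs can undercut this because band-to-band tunneling, not thermal
# emission over a barrier, gates the current flow.
```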

Recently, IBM has developed a novel method to integrate III-V nanowires and heterostructures directly on standard silicon substrates and built the first ever InAs/Si tunnel diodes and TFETs using InAs as source and Si as channel with wrap-around gate as steep slope device for low power consumption applications.

“In the next ten years computing hardware systems will be fundamentally different as our scientists and engineers push the limits of semiconductor innovations to explore the post-silicon future,” said Tom Rosamilia, senior vice president, IBM Systems and Technology Group. “IBM Research and Development teams are creating breakthrough innovations that will fuel the next era of computing systems.”

IBM’s historic contributions to silicon and semiconductor innovation include the invention and/or first implementation of: the single cell DRAM, the “Dennard scaling laws” underpinning “Moore’s Law”, chemically amplified photoresists, copper interconnect wiring, Silicon on Insulator, strained engineering, multi core microprocessors, immersion lithography, high speed silicon germanium (SiGe), High-k gate dielectrics, embedded DRAM, 3D chip stacking, and Air gap insulators.

IBM researchers also are credited with initiating the era of nano devices following the Nobel prize winning invention of the scanning tunneling microscope which enabled nano and atomic scale invention and innovation.

IBM will also continue to fund and collaborate with university researchers to explore and develop the future technologies for the semiconductor industry. In particular, IBM will continue to support and fund university research through private-public partnerships such as the NanoElectronics Research Initiative (NRI), the Semiconductor Technology Advanced Research Network (STARnet), and the Global Research Consortium (GRC) of the Semiconductor Research Corporation.

I highlighted ‘new memory technologies’ above as this brings to mind HP Labs and their major investment in ‘memristive’ technologies noted in my June 26, 2014 posting,

… During a two-hour presentation held a year and a half ago, they laid out how the computer might work, its benefits, and the expectation that about 75 percent of HP Labs personnel would be dedicated to this one project. “At the end, Meg [Meg Whitman, CEO of HP] turned to [Chief Financial Officer] Cathie Lesjak and said, ‘Find them more money,’” says John Sontag, the vice president of systems research at HP, who attended the meeting and is in charge of bringing the Machine to life. “People in Labs see this as a once-in-a-lifetime opportunity.”

The Machine is based on the memristor and other associated technologies.

Getting back to IBM, there’s this analysis of the $3B investment ($600M/year for five years) by Alex Konrad in a July 10, 2014 article for Forbes (Note: A link has been removed),

When IBM … announced a $3 billion commitment to even tinier semiconductor chips that no longer depended on silicon on Wednesday, the big news was that IBM’s putting a lot of money into a future for chips where Moore’s Law no longer applies. But on second glance, the move to spend billions on more experimental ideas like silicon photonics and carbon nanotubes shows that IBM’s finally shifting large portions of its research budget into more ambitious and long-term ideas.

… IBM tells Forbes the $3 billion isn’t additional money being added to its R&D spend, an area where analysts have told Forbes they’d like to see more aggressive cash commitments in the future. IBM will still spend about $6 billion a year on R&D, 6% of revenue. Ten percent of that research budget, however, now has to come from somewhere else to fuel these more ambitious chip projects.

Neal Ungerleider’s July 11, 2014 article for Fast Company focuses on the neuromorphic computing and quantum computing aspects of this $3B initiative (Note: Links have been removed),

The new R&D initiatives fall into two categories: Developing nanotech components for silicon chips for big data and cloud systems, and experimentation with “post-silicon” microchips. This will include research into quantum computers which don’t know binary code, neurosynaptic computers which mimic the behavior of living brains, carbon nanotubes, graphene tools and a variety of other technologies.

IBM’s investment is one of the largest for quantum computing to date; the company is one of the biggest researchers in the field, along with a Canadian company named D-Wave which is partnering with Google and NASA to develop quantum computer systems.

The curious can find D-Wave Systems here. There’s also a January 19, 2012 posting here which discusses the D-Wave’s situation at that time.

Final observation, these are fascinating developments especially for the insight they provide into the worries troubling HP Labs, Intel, and IBM as they jockey for position.

ETA July 14, 2014: Dexter Johnson has a July 11, 2014 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) about the IBM announcement, which features some responses he received from IBM officials to his queries,

While this may be a matter of fascinating speculation for investors, the impact on nanotechnology development  is going to be significant. To get a better sense of what it all means, I was able to talk to some of the key figures of IBM’s push in nanotechnology research.

I conducted e-mail interviews with Tze-Chiang (T.C.) Chen, vice president science & technology, IBM Fellow at the Thomas J. Watson Research Center and Wilfried Haensch, senior manager, physics and materials for logic and communications, IBM Research.

Silicon versus Nanomaterials

First, I wanted to get a sense for how long IBM envisioned sticking with silicon and when they expected the company would permanently make the move away from CMOS to alternative nanomaterials. Unfortunately, as expected, I didn’t get solid answers, except for them to say that new manufacturing tools and techniques need to be developed now.

He goes on to ask about carbon nanotubes and graphene. Interestingly, IBM does not have a wide range of electronics applications in mind for graphene.  I encourage you to read Dexter’s posting as Dexter got answers to some very astute and pointed questions.

The age of the ‘nano-pixel’

As mentioned here before, ‘The Diamond Age: Or, A Young Lady’s Illustrated Primer’, a 1995 novel by Neal Stephenson, featured in its opening chapter a flexible, bendable, rollable newspaper screen. It’s one of those devices promised by ‘nano evangelists’ that never quite seems to come into existence. However, ‘hope springs eternal’ as they say, and a team from the University of Oxford claims to be bringing us one step closer.

From a July 10, 2014 University of Oxford press release (also on EurekAlert but dated July 9, 2014 and on Azonano as a July 10, 2014 news item),

A new discovery will make it possible to create pixels just a few hundred nanometres across that could pave the way for extremely high-resolution and low-energy thin, flexible displays for applications such as ‘smart’ glasses, synthetic retinas, and foldable screens.

A team led by Oxford University scientists explored the link between the electrical and optical properties of phase change materials (materials that can change from an amorphous to a crystalline state). They found that by sandwiching a seven nanometre thick layer of a phase change material (GST) between two layers of a transparent electrode they could use a tiny current to ‘draw’ images within the sandwich ‘stack’.

Here’s a series of images the researchers have created using this technology,

Still images drawn with the technology: at around 70 micrometres across each image is smaller than the width of a human hair. Courtesy University of Oxford

The press release offers a technical description,

Initially still images were created using an atomic force microscope but the team went on to demonstrate that such tiny ‘stacks’ can be turned into prototype pixel-like devices. These ‘nano-pixels’ – just 300 by 300 nanometres in size – can be electrically switched ‘on and off’ at will, creating the coloured dots that would form the building blocks of an extremely high-resolution display technology.

‘We didn’t set out to invent a new kind of display,’ said Professor Harish Bhaskaran of Oxford University’s Department of Materials, who led the research. ‘We were exploring the relationship between the electrical and optical properties of phase change materials and then had the idea of creating this GST ‘sandwich’ made up of layers just a few nanometres thick. We found that not only were we able to create images in the stack but, to our surprise, thinner layers of GST actually gave us better contrast. We also discovered that altering the size of the bottom electrode layer enabled us to change the colour of the image.’

The layers of the GST sandwich are created using a sputtering technique where a target is bombarded with high energy particles so that atoms from the target are deposited onto another material as a thin film.

‘Because the layers that make up our devices can be deposited as thin films they can be incorporated into very thin flexible materials – we have already demonstrated that the technique works on flexible Mylar sheets around 200 nanometres thick,’ said Professor Bhaskaran. ‘This makes them potentially useful for ‘smart’ glasses, foldable screens, windshield displays, and even synthetic retinas that mimic the abilities of photoreceptor cells in the human eye.’

Peiman Hosseini of Oxford University’s Department of Materials, first author of the paper, said: ‘Our models are so good at predicting the experiment that we can tune our prototype ‘pixels’ to create any colour we want – including the primary colours needed for a display. One of the advantages of our design is that, unlike most conventional LCD screens, there would be no need to constantly refresh all pixels, you would only have to refresh those pixels that actually change (static pixels remain as they were). This means that any display based on this technology would have extremely low energy consumption.’
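Hosseini’s point about refreshing only the pixels that change is the same ‘dirty region’ idea used in e-paper drivers. Here is a minimal sketch of that update policy; the arrays and the notion of a per-pixel programming pulse are hypothetical stand-ins, not Oxford’s actual driver,

```python
# Sketch of the update policy Hosseini describes: because the GST pixels are
# bistable (non-volatile), only pixels that differ between frames need a
# programming pulse.  Frame sizes and values here are hypothetical.
import numpy as np

def refresh(previous: np.ndarray, target: np.ndarray):
    """Yield (row, col, new_value) only for pixels that actually changed."""
    for row, col in zip(*np.nonzero(previous != target)):
        yield int(row), int(col), int(target[row, col])

prev = np.zeros((4, 4), dtype=np.uint8)
new = prev.copy()
new[1, 2] = 255  # a single pixel changes between frames

updates = list(refresh(prev, new))
print(f"{len(updates)} of {new.size} pixels rewritten:", updates)
# A static image therefore costs no refresh energy at all.
```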

The research suggests that flexible paper-thin displays based on the technology could have the capacity to switch between a power-saving ‘colour e-reader mode’, and a backlit display capable of showing video. Such displays could be created using cheap materials and, because they would be solid-state, promise to be reliable and easy to manufacture. The tiny ‘nano-pixels’ make it ideal for applications, such as smart glasses, where an image would be projected at a larger size as, even enlarged, they would offer very high-resolution.

Professor David Wright of the Department of Engineering at the University of Exeter, co-author of the paper, said: ‘Along with many other researchers around the world we have been looking into the use of these GST materials for memory applications for many years, but no one before thought of combining their electrical and optical functionality to provide entirely new kinds of non-volatile, high-resolution, electronic colour displays – so our work is a real breakthrough.’

The phase change material used was the alloy Ge2Sb2Te5 (Germanium-Antimony-Tellurium or GST) sandwiched between electrode layers made of indium tin oxide (ITO).

I gather the researchers are looking for investors (from the press release),

Whilst the work is still in its early stages, the Oxford team has, with a view to realising its potential, filed a patent on the discovery with the help of Isis Innovation, Oxford University’s technology commercialisation company. Isis is now discussing the displays with companies who are interested in assessing the technology, and with investors.

Here’s a link to and a citation for the paper,

An optoelectronic framework enabled by low-dimensional phase-change films by Peiman Hosseini, C. David Wright, & Harish Bhaskaran. Nature 511, 206–211 (10 July 2014). DOI: 10.1038/nature13487. Published online July 9, 2014.

This paper is behind a paywall.

Intel to produce Panasonic SoCs (system-on-chips) using 14nm low-power process

A July 8, 2014 news item on Azonano describes a manufacturing agreement between Intel and Panasonic,

Intel Corporation today announced that it has entered into a manufacturing agreement with Panasonic Corporation’s System LSI Business Division. Intel’s custom foundry business will manufacture future Panasonic system-on-chips (SoCs) using Intel’s 14nm low-power manufacturing process.

Panasonic’s next-generation SoCs will target audio visual-based equipment markets, and will enable higher levels of performance, power and viewing experience for consumers.

A July 7, 2014 Intel press release, which originated the news item, reveals more details,

“Intel’s 14nm Tri-Gate process technology is very important to develop the next- generation SoCs,” said Yoshifumi Okamoto, director, Panasonic Corporation SLSI Business Division. “We will deliver highly improved performance and power advantages with next-generation SoCs by leveraging Intel’s 14nm Tri-Gate process technology through our collaboration.”

Intel’s leading-edge 14nm low-power process technology, which includes the second generation of Tri-Gate transistors, is optimized for low-power applications. This will enable Panasonic’s SoCs to achieve high levels of performance and functionality at lower power levels than was possible with planar transistors.

“We look forward to collaborating with the Panasonic SLSI Business Division,” said Sunit Rikhi, vice president and general manager, Intel Custom Foundry. “We will work hard to deliver the value of power-efficient performance of our 14nm LP process to Panasonic’s next-generation SoCs. This agreement with Panasonic is an important step in the buildup of Intel’s foundry business.”

Five other semiconductor companies have announced agreements with Intel’s custom foundry business, including Altera, Achronix Semiconductor, Tabula, Netronome and Microsemi.

Rick Merritt in a July 7, 2014 article for EE Times provides some insight,

“We are doing extremely well getting customers who can use our technology,” Sunit Rikhi, general manager of Intel’s foundry group, said in a talk at Semicon West, though he would not provide details. …

He suggested that the low-power variant of Intel’s 14nm process is relatively new. Intel uses a general-purpose 22nm process but supports multiple flavors of its 32nm process.

Intel expects to make 10nm chips without extreme ultraviolet (EUV) lithography, he said, reiterating comments from Intel’s Mark Bohr. …

This news provides an update of sorts to my October 21, 2010 posting,

Paul Otellini, Chief Executive Officer of Intel, just announced that the company will invest $6B to $8B for new and upgraded manufacturing facilities to produce 22 nanometre (nm) computer chips.

Now, almost four years later, they’re talking about 10 nm chips. I wonder what 2018 will bring?

Memristors, memcapacitors, and meminductors for faster computers

While some call memristors a fourth fundamental component alongside resistors, capacitors, and inductors (as mentioned in my June 26, 2014 posting which featured an update of sorts on memristors [scroll down about 80% of the way]), others view memristors as members of an emerging periodic table of circuit elements (as per my April 7, 2010 posting).

It seems scientist Fabio Traversa and his colleagues fall into the ‘periodic table of circuit elements’ camp. From Traversa’s June 27, 2014 posting on nanotechweb.org,

Memristors, memcapacitors and meminductors may retain information even without a power source. Several applications of these devices have already been proposed, yet arguably one of the most appealing is ‘memcomputing’ – a brain-inspired computing paradigm utilizing the ability of emergent nanoscale devices to store and process information on the same physical platform.

A multidisciplinary team of researchers from the Autonomous University of Barcelona in Spain, the University of California San Diego and the University of South Carolina in the US, and the Polytechnic of Turin in Italy, suggest a realization of “memcomputing” based on nanoscale memcapacitors. They propose and analyse a major advancement in using memcapacitive systems (capacitors with memory), as central elements for Very Large Scale Integration (VLSI) circuits capable of storing and processing information on the same physical platform. They name this architecture Dynamic Computing Random Access Memory (DCRAM).

Using the standard configuration of a Dynamic Random Access Memory (DRAM) where the capacitors have been substituted with solid-state based memcapacitive systems, they show the possibility of performing WRITE, READ and polymorphic logic operations by only applying modulated voltage pulses to the memory cells. Being based on memcapacitors, the DCRAM expends very little energy per operation. It is a realistic memcomputing machine that overcomes the von Neumann bottleneck and clearly exhibits intrinsic parallelism and functional polymorphism.
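The phrase ‘functional polymorphism’ is the interesting one: the same physical cells compute different Boolean functions depending only on the applied control pulse. The sketch below is a deliberately crude caricature of that idea; the pulse names and the dispatch are mine, and the actual memcapacitive charge dynamics live in the paper,

```python
# Crude caricature of "functional polymorphism": one physical operation on
# the same cells yields different logic depending on the control pulse.
# Pulse names are invented; no memcapacitor physics is modeled here.
def polymorphic_gate(a: int, b: int, control_pulse: str) -> int:
    """Same cells, different Boolean function selected by the pulse."""
    table = {
        "P_AND": a & b,
        "P_OR": a | b,
        "P_XOR": a ^ b,
    }
    return table[control_pulse]

for pulse in ("P_AND", "P_OR", "P_XOR"):
    outputs = [polymorphic_gate(a, b, pulse)
               for a, b in ((0, 0), (0, 1), (1, 0), (1, 1))]
    print(pulse, outputs)
```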

Here’s a link to and a citation for the paper,

Dynamic computing random access memory by F L Traversa, F Bonani, Y V Pershin, and M Di Ventra. Nanotechnology, Volume 25, Number 28. DOI: 10.1088/0957-4484/25/28/285201. Published 27 June 2014.

This paper is behind a paywall.

Memristor, memristor! What is happening? News from the University of Michigan and HP Laboratories

Professor Wei Lu (whose work on memristors has been mentioned here a few times [an April 15, 2010 posting and an April 19, 2012 posting]) has made a discovery about memristors with significant implications (from a June 25, 2014 news item on Azonano),

In work that unmasks some of the magic behind memristors and “resistive random access memory,” or RRAM—cutting-edge computer components that combine logic and memory functions—researchers have shown that the metal particles in memristors don’t stay put as previously thought.

The findings have broad implications for the semiconductor industry and beyond. They show, for the first time, exactly how some memristors remember.

A June 24, 2014 University of Michigan news release, which originated the news item, includes Lu’s perspective on this discovery and more details about it,

“Most people have thought you can’t move metal particles in a solid material,” said Wei Lu, associate professor of electrical and computer engineering at the University of Michigan. “In a liquid and gas, it’s mobile and people understand that, but in a solid we don’t expect this behavior. This is the first time it has been shown.”

Lu, who led the project, and colleagues at U-M and the Electronic Materials Research Centre Jülich in Germany used transmission electron microscopes to watch and record what happens to the atoms in the metal layer of their memristor when they exposed it to an electric field. The metal layer was encased in the dielectric material silicon dioxide, which is commonly used in the semiconductor industry to help route electricity.

They observed the metal atoms becoming charged ions, clustering with up to thousands of others into metal nanoparticles, and then migrating and forming a bridge between the electrodes at the opposite ends of the dielectric material.

They demonstrated this process with several metals, including silver and platinum. And depending on the materials involved and the electric current, the bridge formed in different ways.

The bridge, also called a conducting filament, stays put after the electrical power is turned off in the device. So when researchers turn the power back on, the bridge is there as a smooth pathway for current to travel along. Further, the electric field can be used to change the shape and size of the filament, or break the filament altogether, which in turn regulates the resistance of the device, or how easily current can flow through it.

Computers built with memristors would encode information in these different resistance values, which is in turn based on a different arrangement of conducting filaments.
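The idea of encoding data in resistance values maps neatly onto the linear ion-drift model HP published in 2008, the standard toy picture of a memristor (a different mechanism from the filaments imaged here, but the same resistance-as-memory principle). A minimal sketch using commonly quoted illustrative parameters rather than anything measured,

```python
# The classic HP (2008) linear ion-drift memristor model -- a standard toy
# for state-dependent resistance.  Parameters are illustrative, not measured.
R_ON, R_OFF = 100.0, 16_000.0  # fully doped / undoped resistances (ohms)
D = 10e-9                      # device thickness (m)
MU = 1e-14                     # dopant mobility (m^2 s^-1 V^-1)

def simulate(current_amps: float, seconds: float, steps: int = 10_000,
             w: float = 0.1 * D) -> float:
    """Integrate dw/dt = MU * R_ON * i / D; return the final memristance."""
    dt = seconds / steps
    for _ in range(steps):
        w += MU * R_ON * current_amps / D * dt
        w = min(max(w, 0.0), D)  # doped-region width stays within the device
    return R_ON * (w / D) + R_OFF * (1.0 - w / D)  # memristance M(w)

print(f"initial state:      {simulate(0.0, 0.0):,.0f} ohms")
print(f"after a 1 mA pulse: {simulate(1e-3, 0.05):,.0f} ohms")  # toward R_ON
```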

Memristor researchers like Lu and his colleagues had theorized that the metal atoms in memristors moved, but previous results had yielded different shaped filaments and so they thought they hadn’t nailed down the underlying process.

“We succeeded in resolving the puzzle of apparently contradicting observations and in offering a predictive model accounting for materials and conditions,” said Ilia Valov, principal investigator at the Electronic Materials Research Centre Jülich. “Also the fact that we observed particle movement driven by electrochemical forces within a dielectric matrix is in itself a sensation.”

The implications for this work (from the news release),

The results could lead to a new approach to chip design—one that involves using fine-tuned electrical signals to lay out integrated circuits after they’re fabricated. And it could also advance memristor technology, which promises smaller, faster, cheaper chips and computers inspired by biological brains in that they could perform many tasks at the same time.

As is becoming more common these days (from the news release),

Lu is a co-founder of Crossbar Inc., a Santa Clara, Calif.-based startup working to commercialize RRAM. Crossbar has just completed a $25 million Series C funding round.

Here’s a link to and a citation for the paper,

Electrochemical dynamics of nanoscale metallic inclusions in dielectrics by Yuchao Yang, Peng Gao, Linze Li, Xiaoqing Pan, Stefan Tappertzhofen, ShinHyun Choi, Rainer Waser, Ilia Valov, & Wei D. Lu. Nature Communications 5, Article number: 4232. DOI: 10.1038/ncomms5232. Published 23 June 2014.

This paper is behind a paywall.

The other party instrumental in the development and, they hope, the commercialization of memristors is HP (Hewlett Packard) Laboratories (HP Labs). Anyone familiar with this blog will likely know I have frequently covered the topic starting with an essay explaining the basics on my Nanotech Mysteries wiki (or you can check this more extensive and more recently updated entry on Wikipedia) and with subsequent entries here over the years. The most recent entry is a Jan. 9, 2014 posting which featured the then latest information on the HP Labs memristor situation (scroll down about 50% of the way). This new information is more in the nature of a new revelation of details rather than an update on its status. Sebastian Anthony’s June 11, 2014 article for extremetech.com lays out the situation plainly (Note: Links have been removed),

HP, one of the original 800lb Silicon Valley gorillas that has seen much happier days, is staking everything on a brand new computer architecture that it calls… The Machine. Judging by an early report from Bloomberg Businessweek, up to 75% of HP’s once fairly illustrious R&D division — HP Labs — is working on The Machine. As you would expect, details of what will actually make The Machine a unique proposition are hard to come by, but it sounds like HP’s groundbreaking work on memristors (pictured top) and silicon photonics will play a key role.

First things first, we’re probably not talking about a consumer computing architecture here, though it’s possible that technologies commercialized by The Machine will percolate down to desktops and laptops. Basically, HP used to be a huge player in the workstation and server markets, with its own operating system and hardware architecture, much like Sun. Over the last 10 years though, Intel’s x86 architecture has rapidly taken over, to the point where HP (and Dell and IBM) are essentially just OEM resellers of commodity x86 servers. This has driven down enterprise profit margins — and when combined with its huge stake in the diminishing PC market, you can see why HP is rather nervous about the future. The Machine, and IBM’s OpenPower initiative, are both attempts to get out from underneath Intel’s x86 monopoly.

While exact details are hard to come by, it seems The Machine is predicated on the idea that current RAM, storage, and interconnect technology can’t keep up with modern Big Data processing requirements. HP is working on two technologies that could solve both problems: Memristors could replace RAM and long-term flash storage, and silicon photonics could provide faster on- and off-motherboard buses. Memristors essentially combine the benefits of DRAM and flash storage in a single, hyper-fast, super-dense package. Silicon photonics is all about reducing optical transmission and reception to a scale that can be integrated into silicon chips (moving from electrical to optical would allow for much higher data rates and lower power consumption). Both technologies can be built using conventional fabrication techniques.

In a June 11, 2014 article by Ashlee Vance for Bloomberg Businessweek, the company’s CTO (Chief Technology Officer), Martin Fink, provides new details,

That’s what they’re calling it at HP Labs: “the Machine.” It’s basically a brand-new type of computer architecture that HP’s engineers say will serve as a replacement for today’s designs, with a new operating system, a different type of memory, and superfast data transfer. The company says it will bring the Machine to market within the next few years or fall on its face trying. “We think we have no choice,” says Martin Fink, the chief technology officer and head of HP Labs, who is expected to unveil HP’s plans at a conference Wednesday [June 11, 2014].

In my Jan. 9, 2014 posting there’s a quote from Martin Fink stating that 2018 would be earliest date for the company’s StoreServ arrays to be packed with 100TB Memristor drives (the Machine?). The company later clarified the comment by noting that it’s very difficult to set dates for new technology arrivals.

Vance shares what could be a stirring ‘origins’ story of sorts, provided the Machine is successful,

The Machine started to take shape two years ago, after Fink was named director of HP Labs. Assessing the company’s projects, he says, made it clear that HP was developing the needed components to create a better computing system. Among its research projects: a new form of memory known as memristors; and silicon photonics, the transfer of data inside a computer using light instead of copper wires. And its researchers have worked on operating systems including Windows, Linux, HP-UX, Tru64, and NonStop.

Fink and his colleagues decided to pitch HP Chief Executive Officer Meg Whitman on the idea of assembling all this technology to form the Machine. During a two-hour presentation held a year and a half ago, they laid out how the computer might work, its benefits, and the expectation that about 75 percent of HP Labs personnel would be dedicated to this one project. “At the end, Meg turned to [Chief Financial Officer] Cathie Lesjak and said, ‘Find them more money,’” says John Sontag, the vice president of systems research at HP, who attended the meeting and is in charge of bringing the Machine to life. “People in Labs see this as a once-in-a-lifetime opportunity.”

Here is the memristor making an appearance in Vance’s article,

HP’s bet is the memristor, a nanoscale chip that Labs researchers must build and handle in full anticontamination clean-room suits. At the simplest level, the memristor consists of a grid of wires with a stack of thin layers of materials such as tantalum oxide at each intersection. When a current is applied to the wires, the materials’ resistance is altered, and this state can hold after the current is removed. At that point, the device is essentially remembering 1s or 0s depending on which state it is in, multiplying its storage capacity. HP can build these chips with traditional semiconductor equipment and expects to be able to pack unprecedented amounts of memory—enough to store huge databases of pictures, files, and data—into a computer.
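Vance’s ‘grid of wires’ is a crossbar array, and reading a cell amounts to driving its row with a small voltage and sensing the column currents via Ohm’s law. The array, read voltage and sense threshold below are invented for illustration, and real arrays need sneak-path mitigation that this sketch ignores,

```python
# Reading a memristor crossbar: drive one row with a small voltage and sense
# each column current.  Resistances, voltage and threshold are invented.
import numpy as np

LOW, HIGH = 1e3, 1e6                 # assumed "1" and "0" resistances (ohms)
grid = np.array([[LOW, HIGH, LOW],
                 [HIGH, HIGH, LOW],
                 [LOW, LOW, HIGH]])  # 3x3 array of cell resistances

def read_row(resistances: np.ndarray, row: int, v_read: float = 0.2):
    """Column currents when one row is driven at v_read (I = V / R)."""
    return v_read / resistances[row]

currents = read_row(grid, row=0)
bits = (currents > 1e-5).astype(int)  # threshold sense amplifier (assumed)
print("column currents (A):", currents)
print("decoded bits:       ", bits)   # [1 0 1]
```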

New memory and networking technology requires a new operating system. Most applications written in the past 50 years have been taught to wait for data, assuming that the memory systems feeding the main computer chips are slow. Fink has assigned one team to develop the open-source Machine OS, which will assume the availability of a high-speed, constant memory store. …
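
Vance’s description of a chip that ‘remembers’ 1s and 0s through its resistance maps neatly onto the linear ion drift model HP Labs researchers published in 2008 (Strukov et al., Nature). Here’s a minimal Python sketch of that model; the device parameters and pulse values are illustrative numbers of mine, not HP’s,

```python
# A minimal sketch of memristive "memory," loosely following the linear ion
# drift model from HP Labs' 2008 Nature paper (Strukov et al.). All device
# parameters and pulse values here are illustrative, not HP's actual numbers.

R_ON, R_OFF = 100.0, 16000.0  # resistance when fully doped / undoped (ohms)
D = 10e-9                     # oxide film thickness (m)
MU_V = 1e-14                  # dopant mobility (m^2 / (V*s))

class Memristor:
    def __init__(self, w=0.5):
        self.w = w  # normalized width of the doped region, 0..1 (the state)

    def resistance(self):
        # Total resistance interpolates between R_ON and R_OFF with state w.
        return self.w * R_ON + (1.0 - self.w) * R_OFF

    def apply_current(self, i, dt):
        # Current drives dopants, moving the doped/undoped boundary.
        # Crucially, w stays put when i = 0: the state holds without power.
        self.w += (MU_V * R_ON / D**2) * i * dt
        self.w = min(max(self.w, 0.0), 1.0)

m = Memristor()
print(f"initial:          {m.resistance():.0f} ohms")
m.apply_current(i=1e-3, dt=1e-2)  # a brief write pulse
print(f"after write:      {m.resistance():.0f} ohms")
m.apply_current(i=0.0, dt=10.0)   # ten seconds with no current applied
print(f"after idle (10s): {m.resistance():.0f} ohms")  # unchanged: the 'bit' persists
```

Reading the stored bit back is just a resistance measurement, which is why the same device could plausibly serve as both working memory and long-term storage.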

Peter Bright opens his June 11, 2014 article for Ars Technica with a controversial statement (Note: Links have been removed),

In 2008, scientists at HP invented a fourth fundamental component to join the resistor, capacitor, and inductor: the memristor. [emphasis mine] Theorized back in 1971, memristors showed promise in computing as they can be used to both build logic gates, the building blocks of processors, and also act as long-term storage.

Whether or not the memristor is a fourth fundamental component has been a matter of some debate as you can see in this Memristor entry (section on Memristor definition and criticism) on Wikipedia.

Bright goes on to provide a 2016 delivery date for some type of memristor-based product and additional technical insight about the Machine,

… By 2016, the company plans to have memristor-based DIMMs, which will combine the high storage densities of hard disks with the high performance of traditional DRAM.

John Sontag, vice president of HP Systems Research, said that The Machine would use “electrons for processing, photons for communication, and ions for storage.” The electrons are found in conventional silicon processors, and the ions are found in the memristors. The photons are because the company wants to use optical interconnects in the system, built using silicon photonics technology. With silicon photonics, photons are generated on, and travel through, “circuits” etched onto silicon chips, enabling conventional chip manufacturing to construct optical parts. This allows the parts of the system using photons to be tightly integrated with the parts using electrons.

The memristor story has proved even more fascinating than I thought in 2008, and I was already as fascinated as could be, or so I thought.

Ferroelectric switching in the lung, heart, and arteries

A June 23, 2014 University of Washington (state) news release (also on EurekAlert) describes how the human body (and other biological tissue) exhibits ferroelectricity,

University of Washington researchers have shown that a favorable electrical property is present in a type of protein found in organs that repeatedly stretch and retract, such as the lungs, heart and arteries. These findings are the first that clearly track this phenomenon, called ferroelectricity, occurring at the molecular level in biological tissues.

The news release gives a brief description of ferroelectricity and describes the research team’s latest work with biological tissues,

Ferroelectricity is a response to an electric field in which a molecule switches from having a positive to a negative charge. This switching process in synthetic materials serves as a way to power computer memory chips, display screens and sensors. This property only recently has been discovered in animal tissues and researchers think it may help build and support healthy connective tissues in mammals.

A research team led by Li [Jiangyu Li of the University of Washington] first discovered ferroelectric properties in biological tissues in 2012, then in 2013 found that glucose can suppress this property in the body’s connective tissues wherever the protein elastin is present. But while ferroelectricity is a proven entity in synthetic materials and has long been thought to be important in biological functions, its actual existence in biology hasn’t been firmly established.

This study proves that ferroelectric switching happens in the biological protein elastin. When the researchers looked at the base structures within the protein, they saw similar behavior to the unit cells of solid-state materials, where ferroelectricity is well understood.

“When we looked at the smallest structural unit of the biological tissue and how it was organized into a larger protein fiber, we then were able to see similarities to the classic ferroelectric model found in solids,” Li said.

The researchers wanted to establish a more concrete, precise way of verifying ferroelectricity in biological tissues. They used small samples of elastin taken from a pig’s aorta and poled the tissues using an electric field at high temperatures. They then measured the current with the poling field removed and found that the current switched direction when the poling electric field was switched, a sign of ferroelectricity.

They did the same thing at room temperature using a laser as the heat source, and the current also switched directions.

Then, the researchers tested for this behavior on the smallest-possible unit of elastin, called tropoelastin, and again observed the phenomenon. They concluded that this switching property is “intrinsic” to the molecular make-up of elastin.
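
For readers who want the logic of that poling experiment in compact form, here is a toy model of mine: treat the tissue’s remanent polarization as a square hysteresis loop and check that the measured current direction tracks the poling field. The coercive field and polarization values are arbitrary placeholders, not measurements from the paper,

```python
# Toy model of the poling test: a square hysteresis loop. A poling field
# stronger than the coercive field flips the stored polarization, and the
# current measured afterward takes its direction from that stored state.
# E_COERCIVE and P_REMANENT are arbitrary placeholder values.

E_COERCIVE = 1.0   # switching threshold (arbitrary units)
P_REMANENT = 1.0   # stored polarization magnitude (arbitrary units)

def pole(p, e_field):
    """Polarization left behind after applying (then removing) a poling field."""
    if abs(e_field) < E_COERCIVE:
        return p  # field too weak to switch the state
    return P_REMANENT if e_field > 0 else -P_REMANENT

def current_direction(p):
    # Released bound charge flows in a direction set by the stored polarization.
    return "positive" if p > 0 else "negative"

p = 0.0
for e in (+2.0, -2.0, +2.0):  # alternate the poling field, as in the experiment
    p = pole(p, e)
    print(f"poling field {e:+.1f} -> measured current {current_direction(p)}")
```

The flip in current direction whenever the poling field reverses is exactly the switching signature the UW team treated as evidence of ferroelectricity.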

The next step is to understand the biological and physiological significance of this property, Li said. One hypothesis is that if ferroelectricity helps elastin stay flexible and functional in the body, a lack of it could directly affect the hardening of arteries.

“We may be able to use this as a very sensitive technique to detect the initiation of the hardening process at a very early stage when no other imaging technique will be able to see it,” Li said.

The team also is looking at whether this property plays a role in normal biological functions, perhaps in regulating the growth of tissue.

Co-authors are Pradeep Sharma at the University of Houston, Yanhang Zhang at Boston University, and collaborators at Nanjing University and the Chinese Academy of Sciences.

Here’s a link to and a citation for the research paper,

Ferroelectric switching of elastin by Yuanming Liu, Hong-Ling Cai, Matthew Zelisko, Yunjie Wang, Jinglan Sun, Fei Yan, Feiyue Ma, Peiqi Wang, Qian Nataly Chen, Hairong Zheng, Xiangjian Meng, Pradeep Sharma, Yanhang Zhang, and Jiangyu Li. Proceedings of the National Academy of Sciences (PNAS) doi: 10.1073/pnas.1402909111

This paper is behind a paywall.

I think this is a new practice: the paper includes a paragraph on the significance of the work (follow the link to the paper),

Ferroelectricity has long been speculated to have important biological functions, although its very existence in biology has never been firmly established. Here, we present, to our knowledge, the first macroscopic observation of ferroelectric switching in a biological system, and we elucidate the origin and mechanism underpinning ferroelectric switching of elastin. It is discovered that the polarization in elastin is intrinsic at the monomer level, analogous to the unit cell level polarization in classical perovskite ferroelectrics. Our findings settle a long-standing question on ferroelectric switching in biology and establish ferroelectricity as an important biophysical property of proteins. We believe this is a critical first step toward resolving its physiological significance and pathological implications.

Cardiac pacemakers: Korea’s in vivo demonstration of a self-powered pacemaker and the UK’s breath-based approach

As best I can determine, the last mention of a self-powered pacemaker and the like on this blog was in a Nov. 5, 2012 posting (Developing self-powered batteries for pacemakers). This latest news from the Korea Advanced Institute of Science and Technology (KAIST) is, I believe, the first time that such a device has been successfully tested in vivo. From a June 23, 2014 news item on ScienceDaily,

As the number of pacemakers implanted each year reaches into the millions worldwide, improving the lifespan of pacemaker batteries has been of great concern for developers and manufacturers. Currently, pacemaker batteries last seven years on average, requiring frequent replacements that may expose patients to the risks involved in repeated medical procedures.

A research team from the Korea Advanced Institute of Science and Technology (KAIST), headed by Professor Keon Jae Lee of the Department of Materials Science and Engineering at KAIST and Professor Boyoung Joung, M.D. of the Division of Cardiology at Severance Hospital of Yonsei University, has developed a self-powered artificial cardiac pacemaker that is operated semi-permanently by a flexible piezoelectric nanogenerator.

A June 23, 2014 KAIST news release on EurekAlert, which originated the news item, provides more details,

The artificial cardiac pacemaker is widely acknowledged as medical equipment that is integrated into the human body to regulate the heartbeats through electrical stimulation to contract the cardiac muscles of people who suffer from arrhythmia. However, repeated surgeries to replace pacemaker batteries have exposed elderly patients to health risks such as infections or severe bleeding during operations.

The team’s newly designed flexible piezoelectric nanogenerator directly stimulated a living rat’s heart using electrical energy converted from the small body movements of the rat. This technology could facilitate the use of self-powered flexible energy harvesters, not only prolonging the lifetime of cardiac pacemakers but also realizing real-time heart monitoring.

The research team fabricated high-performance flexible nanogenerators utilizing a bulk single-crystal PMN-PT thin film (iBULe Photonics). The harvested output reached up to 8.2 V and 0.22 mA under bending and pushing motions, values high enough to directly stimulate the rat’s heart.
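
A quick back-of-the-envelope check suggests why those figures suffice. Multiplying the peak voltage and current gives roughly 1.8 mW, while a modern pacemaker is typically cited as drawing on the order of tens of microwatts. Note the assumption that the voltage and current peaks coincide, which makes this an upper bound,

```python
# Rough upper bound on the harvester's peak output, using the figures from
# the KAIST news release. Assumes the voltage and current peaks coincide,
# which real piezoelectric pulses only approximate.
v_peak = 8.2      # volts
i_peak = 0.22e-3  # amperes (0.22 mA)
p_peak = v_peak * i_peak
print(f"peak instantaneous power ~ {p_peak * 1e3:.1f} mW")  # ~1.8 mW
```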

Professor Keon Jae Lee said:

“For clinical purposes, the current achievement will benefit the development of self-powered cardiac pacemakers as well as prevent heart attacks via the real-time diagnosis of heart arrhythmia. In addition, the flexible piezoelectric nanogenerator could also be utilized as an electrical source for various implantable medical devices.”

This image illustrating a self-powered nanogenerator for a cardiac pacemaker has been provided by KAIST,

This picture shows that a self-powered cardiac pacemaker is enabled by a flexible piezoelectric energy harvester. Credit: KAIST


Here’s a link to and a citation for the paper,

Self-Powered Cardiac Pacemaker Enabled by Flexible Single Crystalline PMN-PT Piezoelectric Energy Harvester by Geon-Tae Hwang, Hyewon Park, Jeong-Ho Lee, SeKwon Oh, Kwi-Il Park, Myunghwan Byun, Hyelim Park, Gun Ahn, Chang Kyu Jeong, Kwangsoo No, HyukSang Kwon, Sang-Goo Lee, Boyoung Joung, and Keon Jae Lee. Advanced Materials DOI: 10.1002/adma.201400562
Article first published online: 17 APR 2014

© 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

There was a May 15, 2014 KAIST news release on EurekAlert announcing this same piece of research but from a technical perspective,

The energy efficiency of KAIST’s piezoelectric nanogenerator has increased by almost 40 times, one step closer toward the commercialization of flexible energy harvesters that can supply power infinitely to wearable, implantable electronic devices

NANOGENERATORS are innovative self-powered energy harvesters that convert kinetic energy created from vibrational and mechanical sources into electrical power, removing the need for external circuits or batteries in electronic devices. This innovation is vital in realizing sustainable energy generation in isolated, inaccessible, or indoor environments and even in the human body.

Nanogenerators, flexible and lightweight energy harvesters on plastic substrates, can scavenge energy from the extremely tiny movements of natural sources and the human body, such as wind, water flow, heartbeats, and diaphragm and respiration activities, to generate electrical signals. The generators are not only self-powered, flexible devices but can also provide permanent power sources to implantable biomedical devices, including cardiac pacemakers and deep brain stimulators.

However, poor energy efficiency and a complex fabrication process have posed challenges to the commercialization of nanogenerators. Keon Jae Lee, Associate Professor of Materials Science and Engineering at KAIST, and his colleagues have recently proposed a solution by developing a robust technique to transfer a high-quality piezoelectric thin film from bulk sapphire substrates to plastic substrates using laser lift-off (LLO).

Applying the inorganic-based laser lift-off (LLO) process, the research team produced large-area PZT thin-film nanogenerators on flexible substrates (2 cm x 2 cm).

“We were able to convert a high-output performance of ~250 V from the slight mechanical deformation of a single thin plastic substrate. Such output power is just enough to turn on 100 LED lights,” Keon Jae Lee explained.
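
The ‘~250 V for 100 LEDs’ pairing is consistent with a simple series-string estimate, assuming a forward drop of about 2.5 V per LED (my assumption; the release doesn’t specify the LED type or wiring),

```python
# Sanity check: 100 LEDs wired in series need a drive voltage of roughly the
# sum of their forward drops. The ~2.5 V per-LED drop is an assumed, typical
# value; the news release doesn't state the actual LED type or wiring.
n_leds = 100
v_forward = 2.5  # volts per LED (assumed)
print(f"series string needs ~{n_leds * v_forward:.0f} V")  # ~250 V
```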

The self-powered nanogenerators can also work with finger and foot motions. For example, under the irregular and slight bending motions of a human finger, the measured current signals reached a high value of ~8.7 μA. In addition, the piezoelectric nanogenerator has world-record power conversion efficiency, almost 40 times higher than previously reported results, addressing the drawbacks of fabrication complexity and low energy efficiency.

Lee further commented,

“Building on this concept, it is highly expected that tiny mechanical motions, including human body movements of muscle contraction and relaxation, can be readily converted into electrical energy and, furthermore, acted as eternal power sources.”

The research team is currently studying a method to build three-dimensional stacking of flexible piezoelectric thin films to enhance output power, as well as conducting a clinical experiment with a flexible nanogenerator.

In addition to the 2012 posting I mentioned earlier, there was also this July 12, 2010 posting, which described research on harvesting biomechanical movement (heartbeat, blood flow, muscle stretching, or even irregular vibration) at the Georgia (US) Institute of Technology, where the lead researcher observed,

…  Wang [Professor Zhong Lin Wang at Georgia Tech] tells Nanowerk. “However, the applications of the nanogenerators under in vivo and in vitro environments are distinct. Some crucial problems need to be addressed before using these devices in the human body, such as biocompatibility and toxicity.”

Bravo to the KAIST researchers for getting this research to the in vivo testing stage.

Meanwhile, researchers at the University of Bristol and the University of Bath have received funding for a new approach to cardiac pacemakers, one designed with the breath in mind. From a June 24, 2014 news item on Azonano,

Pacemaker research from the Universities of Bath and Bristol could revolutionise the lives of over 750,000 people who live with heart failure in the UK.

The British Heart Foundation (BHF) is awarding funding to researchers developing a new type of heart pacemaker that modulates its pulses to match breathing rates.

A June 23, 2014 University of Bristol press release, which originated the news item, provides some context,

During 2012-13 in England, more than 40,000 patients had a pacemaker fitted.

Currently, the pulses from pacemakers are set at a constant rate when fitted, which doesn’t replicate the natural beating of the human heart.

The normal healthy variation in heart rate during breathing is lost in cardiovascular disease and is an indicator for sleep apnoea, cardiac arrhythmia, hypertension, heart failure and sudden cardiac death.

The device is then briefly described (from the press release),

The novel device being developed by scientists at the Universities of Bath and Bristol uses synthetic neural technology to restore this natural variation of heart rate with lung inflation, and is targeted towards patients with heart failure.

The device works by saving the heart energy, improving its pumping efficiency and enhancing blood flow to the heart muscle itself.  Pre-clinical trials suggest the device gives a 25 per cent increase in the pumping ability, which is expected to extend the life of patients with heart failure.

One aim of the project is to miniaturise the pacemaker device to the size of a postage stamp and to develop an implant that could be used in humans within five years.
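
To make the concept concrete, here is an illustrative sketch of pacing-rate modulation tied to a breathing signal. The baseline rate, modulation depth, and sinusoidal ‘lung inflation’ are all invented for the example; the actual device uses analogue synthetic neural circuitry rather than a digital loop like this,

```python
import math

# Illustrative only: pacing rate rises and falls with a simulated lung
# inflation signal, restoring a respiratory sinus arrhythmia-like pattern.
# All numbers below are invented for the example.
BASE_RATE_BPM = 60      # resting pacing rate (assumed)
MODULATION = 0.10       # +/-10% swing over the breathing cycle (assumed)
BREATHS_PER_MIN = 15

def paced_rate(t_seconds):
    # Approximate lung inflation as a sinusoid; rate increases on inhalation.
    inflation = math.sin(2.0 * math.pi * (BREATHS_PER_MIN / 60.0) * t_seconds)
    return BASE_RATE_BPM * (1.0 + MODULATION * inflation)

for t in range(5):
    print(f"t={t}s  pacing rate ~ {paced_rate(t):.1f} bpm")
```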

Dr Alain Nogaret, Senior Lecturer in Physics at the University of Bath, explained: “This is a multidisciplinary project with strong translational value. By combining fundamental science and nanotechnology we will be able to deliver a unique treatment for heart failure which is not currently addressed by mainstream cardiac rhythm management devices.”

The research team has already patented the technology and is working with NHS consultants at the Bristol Heart Institute, the University of California at San Diego and the University of Auckland. [emphasis mine]

Professor Julian Paton, from the University of Bristol, added: “We’ve known for almost 80 years that the heart beat is modulated by breathing but we have never fully understood the benefits this brings. The generous new funding from the BHF will allow us to reinstate this naturally occurring synchrony between heart rate and breathing and understand how it brings therapy to hearts that are failing.”

Professor Jeremy Pearson, Associate Medical Director at the BHF, said: “This study is a novel and exciting first step towards a new generation of smarter pacemakers. More and more people are living with heart failure so our funding in this area is crucial. The work from this innovative research team could have a real impact on heart failure patients’ lives in the future.”

Given some current events (‘Tesla opens up its patents’, Mike Masnick’s June 12, 2014 posting on Techdirt), I wonder what the situation will be vis-à-vis patents by the time this device gets to market.